In this post I will attempt to define TDD and BDD and show how they are similar. I’ll also describe how each applies to software development and where I’ve seen shortcomings in how TDD is often applied.
TDD - Test Driven Development/Design
Which of the two terms you’ll see used – Development vs. Design – depends on the age of the information you’re looking at. Newer literature will refer to it as Design more often than not. The reason for the change is to emphasize that the primary benefit of the practice is better software design, not testing.
Can you describe TDD in a sentence?
TDD is the practice of using automated unit tests to drive the creation of classes and methods that satisfy a set of requirements for a software application.
On a team, who does TDD?
TDD is performed by the software developers who are responsible for delivering the code that satisfies the application’s requirements. It is not performed by QA or others responsible for quality control of an application.
Why are developers writing these tests? Shouldn’t testers write tests?
Because the primary purpose of TDD isn’t testing per se. It’s strange that it’s called Test Driven Design when its primary purpose isn’t testing. Perhaps it’s a matter of semantics, but in my opinion the simple unit tests created as part of this process are a byproduct of the effort. The real output of the process is a design that embodies two very important traits: cohesion and loose coupling. For information on the SOLID design principles, please see http://www.butunclebob.com/ArticleS.UncleBob.PrinciplesOfOod, which has excellent coverage of these principles.
Cohesion refers to the degree to which a class’s interface is organized around a single set of related responsibilities. This maps to the ‘S’ (Single Responsibility) and the ‘I’ (Interface Segregation) in the SOLID principles that are widely accepted as good object-oriented design guidance.
Loose coupling refers to the degree to which classes that work together to solve a problem have concrete knowledge of each other; in other words, how abstract their relationships are. Looser coupling is desirable. This maps to the ‘L’ (Liskov Substitution) and ‘D’ (Dependency Inversion), and to some extent the ‘O’ (Open/Closed), in SOLID.
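As a minimal sketch of what those two traits look like in code (the types here are hypothetical, invented purely for illustration), a small, cohesive interface combined with constructor injection keeps the higher-level class unaware of any concrete implementation:

    // A small, cohesive abstraction: one set of related responsibilities (S, I).
    public interface IOrderRepository
    {
        Order GetById(int orderId);
        void Save(Order order);
    }

    public class Order
    {
        public int Id { get; set; }
        public decimal Total { get; set; }
    }

    // Loosely coupled: OrderService depends only on the abstraction (D), so any
    // implementation (SQL Server, a web service, an in-memory fake for tests)
    // can be substituted without changing this class (L, O).
    public class OrderService
    {
        private readonly IOrderRepository _repository;

        public OrderService(IOrderRepository repository)
        {
            _repository = repository;
        }

        public void ApplyDiscount(int orderId, decimal percent)
        {
            var order = _repository.GetById(orderId);
            order.Total -= order.Total * (percent / 100m);
            _repository.Save(order);
        }
    }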
How does TDD fit into an SDLC?
In my opinion, before writing the first TDD test (and consequently any code), the developers come together and, from a general understanding of the high-level functional and non-functional requirements, settle on a high-level architecture. They should be able to do this based on past experience with applications that are similar in nature. This decision process includes deciding which technologies will be used (WinForms vs. WPF vs. mobile; SQL Server; etc.) and what high-level layers the application will have (UI, business logic, data access). They should also have some general ideas about how these layers will interact. For instance, you might determine that because you must support desktop clients, web clients, and automated processing, you want to separate business logic into its own layer and make it accessible via a web service. The web service might be skipped to begin with, but I think it’s helpful to form these mental maps early on.
Once the developers have this high-level roadmap, it becomes easier to proceed with work in specific areas, but the next part can be tricky: the team needs to figure out where to start working. Most TDD examples are very simplistic and, in my opinion, don’t help much with this decision. Often a developer will decide, “I know how I want the database to look, so I’ll do the data model and procs and then code up my data access objects.” If that’s where they wish to start, they would create a test fixture for the DAO and then write tests that drive out the methods the DAO must provide. As each test is written, logic is added to the DAO to make that test pass. These tests are often very small and simple (that’s a good thing). The developer may try to “guess” all the corner cases a consumer would exercise against the DAO, write tests to cover them, and add code to the DAO to handle each case. As you can imagine, the number of these small unit tests grows quickly. That’s fine, since unit tests are supposed to be fast, atomic, and repeatable; it isn’t painful to rerun them many times. The end result is a DAO with a well-defined public API for getting your data and a unit test fixture that exercises that logic against the “rules” the developer established for the DAO.
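As a rough illustration (the class names and rules here are hypothetical, not from any real project), the first couple of NUnit tests for such a DAO might look like the sketch below, written before the DAO exists and then satisfied with the simplest code that makes them pass:

    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class CustomerDaoTests
    {
        [Test]
        public void GetById_ReturnsNull_WhenCustomerDoesNotExist()
        {
            var dao = new CustomerDao();
            Assert.That(dao.GetById(999), Is.Null);
        }

        [Test]
        public void GetById_ReturnsCustomer_WhenCustomerExists()
        {
            var dao = new CustomerDao();
            dao.Save(new Customer { Id = 1, Name = "Acme" });

            var result = dao.GetById(1);

            Assert.That(result.Name, Is.EqualTo("Acme"));
        }
    }

    // The simplest implementation that satisfies the tests so far. A real DAO
    // would talk to SQL Server, but the public API is what the tests drive out.
    public class Customer
    {
        public int Id { get; set; }
        public string Name { get; set; }
    }

    public class CustomerDao
    {
        private readonly Dictionary<int, Customer> _store = new Dictionary<int, Customer>();

        public void Save(Customer customer)
        {
            _store[customer.Id] = customer;
        }

        public Customer GetById(int id)
        {
            Customer customer;
            return _store.TryGetValue(id, out customer) ? customer : null;
        }
    }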
The above pattern then repeats itself at the web service layer, the business logic layer, and potentially up into the UI layer. All along, you make numerous “assumptions” about how the code will be used by its consumers. As you move higher through the layers of your application, you may need to tweak classes in lower layers, but that’s OK: you have the unit tests to help you when your assumptions are wrong and you have to go back and refactor. Each change is supported by a compile and a run of the tests to ensure you didn’t break something along the way. This entire process should be iterative, fast, and focused on small sets of functionality.
However, the above scenario is where I’ve seen TDD fail. Why? Because the reiteration of changes as you move up the layers can be expensive. It can be costly to go back and touch several layers lower down in the design. You often have to: 1) identify a missing or incorrect assumption; 2) identify where to make the change; 3) change the test(s); 4) change the code; 5) recompile; 6) rerun the tests. If your requirements are constantly in flux or you make lots of assumptions that turn out to be wrong, this cycle can be expensive.
Enter BDD…
BDD - Behavior Driven Development/Design
As with TDD, the emphasis in this practice is on Design, not Development. In contrast to TDD, where examples and real-world projects often begin at the bottom of the logical application stack, BDD focuses on “behavior,” that is, what the user experiences and sees from the application at the logical “top” of the stack. That’s why I refer to BDD as a top-down approach and TDD as a bottom-up approach.
Not surprisingly, it shares TDD’s core principles: 1) use tests to drive design; 2) use small, fast iterations to identify a requirement, codify it in a test, implement code to make the test pass, and repeat. Why is it so similar? Because BDD evolved from TDD. I believe many practitioners of TDD realized that the most important aspect of the apps we write is what the user experiences. As a result, the emphasis shifted from the bottom of the stack to the top, and BDD was born.
So rather than starting by writing tests for a DAO, you start by writing tests for a UI-related class that defines the behavior the user wants. Depending on the UI technology you chose, you may have various prevalent design patterns in mind. For instance, Model-View-Controller (MVC) is very popular with web technologies, Model-View-ViewModel (MVVM) is popular with WPF, and Model-View-Presenter (MVP) is often applied to WinForms. Strictly speaking, when you first begin writing your UI tests and supporting code, it’s too early to choose one design pattern over another, but I believe it’s useful to have them in mind as you code to see where the design is leading you. Most importantly, let the tests and the resulting behavior drive which pattern wins.
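To make that concrete (again, the names are hypothetical and the sketch happens to be MVP-flavored), a behavior-first test might start from what the user should see and, in the process, drive out the abstractions the lower layers must eventually implement:

    using System.Collections.Generic;
    using NUnit.Framework;

    [TestFixture]
    public class OrderListPresenterTests
    {
        [Test]
        public void Show_DisplaysOpenOrdersForTheCurrentCustomer()
        {
            // The behavior we care about: when the screen is shown,
            // the user sees their open orders.
            var view = new FakeOrderListView();
            var orders = new FakeOrderQuery(new[] { "Order 1", "Order 2" });
            var presenter = new OrderListPresenter(view, orders);

            presenter.Show(customerId: 42);

            Assert.That(view.DisplayedOrders, Is.EqualTo(new[] { "Order 1", "Order 2" }));
        }
    }

    // Writing the test first drives out these abstractions; the lower layers
    // are later asked to implement IOrderQuery rather than guessing at an API.
    public interface IOrderListView
    {
        void ShowOrders(IEnumerable<string> orders);
    }

    public interface IOrderQuery
    {
        IEnumerable<string> OpenOrdersFor(int customerId);
    }

    public class OrderListPresenter
    {
        private readonly IOrderListView _view;
        private readonly IOrderQuery _orders;

        public OrderListPresenter(IOrderListView view, IOrderQuery orders)
        {
            _view = view;
            _orders = orders;
        }

        public void Show(int customerId)
        {
            _view.ShowOrders(_orders.OpenOrdersFor(customerId));
        }
    }

    // Minimal hand-rolled fakes used by the test.
    public class FakeOrderListView : IOrderListView
    {
        public IEnumerable<string> DisplayedOrders { get; private set; }
        public void ShowOrders(IEnumerable<string> orders) { DisplayedOrders = orders; }
    }

    public class FakeOrderQuery : IOrderQuery
    {
        private readonly IEnumerable<string> _orders;
        public FakeOrderQuery(IEnumerable<string> orders) { _orders = orders; }
        public IEnumerable<string> OpenOrdersFor(int customerId) { return _orders; }
    }

Note that in this sketch IOrderQuery exists only because the presenter’s test needed it; the data access layer is later asked to implement exactly that interface rather than a speculative API.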
So why is BDD better than TDD?
Well, I believe they are very closely related, and in reality BDD is little more than a refinement in how to approach TDD. In my opinion, BDD is superior because it focuses on the most important aspect of the system you’re building: the thing the user experiences. Secondly, while I don’t have empirical evidence at hand to prove it, I believe it reduces the churn that is often experienced when you build systems from the bottom up. As long as you design and code in small chunks, focus on YAGNI (you ain’t gonna need it), and let the top-level classes drive the requirements for the lower-level classes, the amount of churn is reduced. Most importantly, you don’t need to make ASSUMPTIONS about what is needed. Your top-level classes, as consumers, TELL the lower-level classes what is needed, and you code that and nothing else. YAGNI wins.
Where do Business Analysts fit into this process?
The emergence of DSLs (Domain Specific Languages) and the tooling to support them has opened up a number of possibilities. They allow a Business Analyst (or subject matter expert, SME) to define the specifications for how an application should behave in a language they are familiar with. The tooling then “translates” that into executable code, and with the help of the developer the required BDD unit tests are born, thus driving the development of the app. This is often referred to as “executable specifications”.
One example of such a tool is SpecFlow, a .NET tool that supports creating executable specs by integrating with a number of unit test frameworks such as NUnit and xUnit. The SME writes specifications in English sentences that conform to a predetermined structure outlining pre- and post-conditions for each “test,” and the tool converts those into actual unit tests, which the developer then implements.
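As a rough sketch of that workflow (the feature wording, class names, and steps here are invented for illustration, not taken from SpecFlow’s documentation), the SME writes a Given/When/Then scenario and the developer backs each sentence with a step definition:

    // Feature file (OrderHistory.feature), written by the SME in plain English:
    //
    //   Feature: Customer order history
    //     Scenario: Viewing open orders
    //       Given a customer with 2 open orders
    //       When the customer opens the order history page
    //       Then 2 orders are displayed

    using NUnit.Framework;
    using TechTalk.SpecFlow;

    // Step definitions written by the developer; SpecFlow matches each sentence
    // in the scenario to a method via the regex in the attribute.
    [Binding]
    public class OrderHistorySteps
    {
        private int _openOrders;
        private int _displayedOrders;

        [Given(@"a customer with (\d+) open orders")]
        public void GivenACustomerWithOpenOrders(int count)
        {
            _openOrders = count;
        }

        [When(@"the customer opens the order history page")]
        public void WhenTheCustomerOpensTheOrderHistoryPage()
        {
            // A real step would drive the UI or a presenter; the outcome is
            // simulated here to keep the sketch self-contained.
            _displayedOrders = _openOrders;
        }

        [Then(@"(\d+) orders are displayed")]
        public void ThenOrdersAreDisplayed(int expected)
        {
            Assert.That(_displayedOrders, Is.EqualTo(expected));
        }
    }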
@bradwilson of Microsoft demoed this tool at #agile2010 and showed how he was able to integrate the resulting test fixtures with a web-automation tool to drive the web UI of a project. The end result was a set of specifications written in English, backed by executable tests written in the developers’ language, that drove the creation of the web UI the end user wanted.
I hope this has helped clarify the differences between TDD and BDD and how each might apply to your development initiatives.
/imapcgeek