Test Driven Development

Written By: Jeremy Miller

Test Driven Development (TDD) is a development practice where developers author code by first describing the intended functionality in a small, automated test, then writing just enough code to make that test pass. TDD came out of the Extreme Programming (XP) movement of the late 90’s and early 00’s, which sought to maximize rapid feedback mechanisms in the software development process.
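
To make the rhythm concrete, here is a minimal sketch of a single “red/green” cycle in Python with pytest. The shipping_cost function and its business rules are invented for this illustration:

    # test_shipping.py -- written FIRST, before any implementation exists
    from shipping import shipping_cost

    def test_orders_of_100_dollars_or_more_ship_free():
        assert shipping_cost(order_total=150.00) == 0.00

    def test_smaller_orders_pay_a_flat_rate():
        assert shipping_cost(order_total=20.00) == 5.00

    # shipping.py -- written SECOND, with just enough code to pass the tests
    def shipping_cost(order_total: float) -> float:
        return 0.00 if order_total >= 100.00 else 5.00

Run the tests and watch them fail, write the code, watch them pass, and repeat.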

The usage and effectiveness of Test Driven Development are extremely controversial. With just a bit of googling you’ll find both passionate advocates and equally passionate detractors. While I won’t dispute that some folks have had negative experiences or impressions of TDD, I still recommend using it. We use TDD as a standard practice on our Calavista client engagements, and I use it in my personal open source development work as well.

As many folks have noted over the years, the word “Test” might be an unfortunate term because TDD at heart is a software design technique. I would urge you to approach TDD as a way to write better code and also as a way to continue to make your code better over time through refactoring (as I’ll discuss below).

Succeeding in software development is often a matter of having effective feedback mechanisms that let the team know what is and is not working. Used well, the first benefit TDD brings to a team’s larger software process is a very rapid feedback cycle: developers continuously flow between testing and coding, getting constant feedback about how their code behaves as they work. It’s always valuable to start any task with the end in mind, and a TDD workflow makes developers think about what successful completion of a coding task looks like before they implement that code.

Done well, with adequately fine-grained tests, TDD can drastically reduce the amount of time developers spend debugging code. Yes, it can be time consuming to write all those unit tests, but hunting around in a debugger trying to troubleshoot code defects is expensive too. In my experience, I’ve been better off writing unit tests against the individual bits of a complex feature first before trying to troubleshoot problems in the entire subsystem.

Secondly, TDD is neither efficient nor effective without the kind of code modularity that is also frequently helpful for code maintainability in general. Because of that, TDD is a forcing function that makes developers think through the modularity of their code upfront. Modular code gives developers more opportunities to constantly shift between writing focused unit tests and the code necessary to make those new tests pass, while code that isn’t modular quickly makes itself evident through significant friction in the TDD workflow. At a bare minimum, adopting TDD should spur developers to closely consider decoupling business logic, rules, and workflow from infrastructure concerns like databases or web servers that are intrinsically harder to work with in automated unit tests. More on this in a later post on Domain-Driven Design.
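
As a sketch of the shape that decoupling tends to take, here is a small Python example; every name in it is hypothetical:

    from dataclasses import dataclass
    from typing import Protocol

    @dataclass
    class Order:
        total: float
        loyalty_member: bool

    def apply_discount(order: Order) -> float:
        # Pure business rule: no database or web server involved,
        # so it is trivially unit-testable.
        return order.total * 0.9 if order.loyalty_member else order.total

    class OrderStore(Protocol):
        # The infrastructure concern hides behind a small interface.
        def load(self, order_id: int) -> Order: ...

    def discounted_total(order_id: int, store: OrderStore) -> float:
        # The workflow depends on the abstraction, so a unit test can
        # pass in an in-memory fake instead of a real database.
        return apply_discount(store.load(order_id))

    class FakeStore:
        def load(self, order_id: int) -> Order:
            return Order(total=100.0, loyalty_member=True)

    def test_loyalty_members_get_ten_percent_off():
        assert discounted_total(order_id=1, store=FakeStore()) == 90.0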

Lastly, when combined with the process of refactoring, TDD allows developers to incrementally evolve their code and learn as they go, backed by a safety net of quickly running tests that preserve the intended functionality. This is important because it’s just not always obvious upfront what the best way to code a feature is. Even if you really could code a feature with a perfect structure the first time through, there’s inevitably going to be some kind of requirements change or performance need that sooner or later forces you to change the structure of that “perfect” code.

Even if you do know the “perfect” way to structure the code, maybe you decide to use a simpler but less performant implementation in order to deliver that all-important Minimum Viable Product (MVP) release. In the longer term, you may need to rework your system’s original, simple internals to increase performance and scalability. Having used TDD upfront, you can do that optimization work with much less risk of introducing regression defects, because you’re backed up by the kind of fine-grained automated test coverage that TDD leaves behind. Moreover, the emphasis TDD forces you to place on code modularity also helps during optimization by letting you focus on discrete parts of the code.
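
Here is a sketch of what that looks like in practice, with an invented most_frequent function. The test pins down the observable behavior, so the internals can be swapped out underneath it (in real code you would replace the function in place; both versions are shown side by side here for comparison):

    from collections import Counter

    # MVP internals: simple and obviously correct, but O(n^2).
    def most_frequent(items):
        return max(items, key=items.count)

    # Later optimization: O(n), swapped in behind the same test.
    def most_frequent_optimized(items):
        return Counter(items).most_common(1)[0][0]

    # The same assertion holds for either version, which is what makes
    # the later optimization low-risk.
    def test_most_frequent_returns_the_commonest_item():
        assert most_frequent(["a", "b", "a", "c", "a"]) == "a"
        assert most_frequent_optimized(["a", "b", "a", "c", "a"]) == "a"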

Too much modularity, or the wrong sort of modularity, can of course be a complete disaster for performance, so don’t think that I’m trying to say that modularity is any kind of silver bullet.

As a design technique, TDD is mostly focused on the fine-grained details of the code and is complementary to other software design tools and techniques. By no means would TDD ever be the only software design technique or tool you’d use on a non-trivial software project. I’ve written a great deal over the years about designing with and for testability, but if you’re interested in learning more about strategies for designing testable code, I highly recommend Jim Shore’s “Testing Without Mocks” paper as a good start.

To clear up a common misconception: TDD is a continuous workflow, meaning that developers constantly switch between writing a single test, or just a few, and writing the “real” code. TDD does not, or at least should not, mean that you have to specify all possible tests first and then write all the code. Combined with refactoring, TDD should help developers learn about and think through the code as they’re writing it.

So now let’s talk about the problems with TDD and the barriers that keep many developers and development teams from adopting or succeeding with TDD:

1. There can be a steep learning curve. Unit testing tools aren’t particularly hard to learn, but developers must be very mindful of how their code is structured and organized to really make TDD work.

2. TDD requires a fair amount of discipline in your moment-to-moment approach, and it’s very easy to lose that under schedule pressure — and developers are pretty much always under some sort of schedule pressure.

3. The requirement for modularity in code can be problematic for some otherwise effective developers who aren’t used to coding in a series of discrete steps.

4. A common trap for development teams is writing unit tests in such a way that the tests are tightly coupled to the implementation of the code. Unit testing that relies too heavily on mock objects is a common culprit behind this problem (see the sketch after this list). In this all-too-common case, you’ll hear developers complain that the tests break too easily when they try to change the code, and at that point the tests may be doing more harm than good.

5. Some development technologies or languages aren’t conducive to a TDD workflow. I purposely choose programming tools, libraries, and techniques with TDD usage in mind, but we rarely have complete control over our development environment.
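
On point 4, here is a Python sketch of the difference; place_order and the repository types are invented for illustration:

    from unittest.mock import Mock

    def place_order(repo, item):
        # Hypothetical workflow under test.
        repo.insert_order(item)

    # Brittle style: the test mirrors the implementation's exact calls, so
    # renaming insert_order or batching the writes breaks the test even
    # when the behavior is still correct.
    def test_place_order_overmocked():
        repo = Mock()
        place_order(repo, item="book")
        repo.insert_order.assert_called_once_with("book")

    # Sturdier style: assert on an observable outcome through a simple fake.
    class InMemoryRepo:
        def __init__(self):
            self.orders = []

        def insert_order(self, item):
            self.orders.append(item)

    def test_place_order_records_the_order():
        repo = InMemoryRepo()
        place_order(repo, item="book")
        assert "book" in repo.orders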

You might ask: what about test coverage metrics? I’m personally not that concerned about test coverage numbers, I don’t have any magic number you need to hit, and I think the right target is subjective anyway, depending on what kind of technology or code you’re writing. My main thought is that test coverage metrics are only somewhat informative: they can tell you where you may have problems, but they can never tell you that the actual test coverage is effective. That being said, it’s relatively easy with current development tooling to collect and publish test coverage metrics in your Continuous Integration builds, so there’s no reason not to track code coverage. In the end, I think it’s more important for the development team to internalize the discipline of having effective test coverage on each and every push to source control than it is to have some kind of automated watchdog yelling at them. Lastly, as with all metrics, test coverage numbers are useless if the development team is knowingly gaming them with worthless tests.
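
For example, assuming a Python project that uses pytest with the pytest-cov plugin (an assumption for this sketch, not a universal setup), collecting coverage in a CI build is a one-liner:

    # Collect coverage while running the test suite and emit an XML report
    # that most CI servers can publish:
    pytest --cov=myapp --cov-report=xml

    # Optionally, fail the build below a chosen threshold:
    pytest --cov=myapp --cov-fail-under=80

Here myapp is just a placeholder for your own package name.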

Does TDD have to be practiced in its pure “test first” form? Is it really any better than just writing the tests later? I wouldn’t say that you absolutely have to do pure TDD all the time. I frequently rough in code first, then write the tests immediately after, once I have a clear idea of what I’m going to do. The issue with a “test after” approach is that the test coverage is rarely as good as what a test-first approach produces, and you don’t get as many of the design benefits of TDD. Without some upfront thought about how the code is going to be tested, my experience over the years is that you’ll often see much less modularity and worse code structure. For teams new to TDD, I’d advise working “pure” test first for a while, and then relaxing that standard later.

At the end of this, do I still believe in TDD after years of using it and years of development community backlash? I do, yes. My experience has been that code written in a TDD style is generally better structured and the codebase is more likely to be maintainable over time. I’ve also used TDD long enough to be well past the admittedly rough learning curve.

My personal approach has changed quite a bit over the years of course, with the biggest change being much more reliance on intermediate level integration tests and deemphasizing mock or stub objects, but that’s a longer conversation.
