Test Driven Development (TDD) is one of the practices of Extreme Programming (XP), developed in the mid-1990s. At the time, proponents encouraged people to adopt all of the XP practices together, on the grounds that each practice mitigates the weaknesses and reinforces the strengths of the others. For example, refactoring is risky without TDD, and without a system metaphor TDD can leave the code base difficult to work with as a whole. As a result, there are limitations to using test driven development by itself.
As other agile methodologies became widespread, however, TDD gained acceptance apart from XP. It has become, if not a “best practice,” at least a good practice. Even so, test driven development has limitations.
At its simplest, TDD can be described by the procedure “Red, Green, Refactor.” In the “Test Driven Development” book, the cycle is described in slightly more detail: quickly add a small test; run all the tests and see the new one fail; make a little change; run all the tests and see them all succeed; refactor to remove duplication.
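To make that rhythm concrete, here is a minimal sketch of a single red/green/refactor round in Python. The requirement, the discounted_total function, and the test names are all invented for illustration; only the order in which the pieces get written is the point.

```python
import unittest

# "Green" step: the simplest code that makes the tests below pass.
# (In a real round, the test class is written first and fails until
# this function exists and behaves correctly.)
def discounted_total(subtotal):
    """Hypothetical rule: orders of $100 or more get a 10% discount."""
    if subtotal >= 100:
        return subtotal * 0.9
    return subtotal

class DiscountTest(unittest.TestCase):
    # "Red" step: these tests are written before the implementation.
    def test_small_orders_pay_full_price(self):
        self.assertEqual(discounted_total(40), 40)

    def test_orders_of_100_or_more_get_ten_percent_off(self):
        self.assertEqual(discounted_total(200), 180)

if __name__ == "__main__":
    unittest.main()
```

The refactor step follows once both tests pass: clean up any duplication while keeping the suite green.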
This is fine for the minute-to-minute practice of TDD. The TDD book also describes a general procedure for figuring out what tests to add: keep a running to-do list of the tests you know you need, pick the next one that you are confident you can implement and that will teach you something, and repeat until the list is empty.
TDD is one of the tools we can use to solve problems. However, it’s not going to solve our problems for us or write code that works flawlessly the first time, every time. There are numerous benefits to having tests, and some benefits specifically to writing tests first, but there will be times when TDD is inappropriate: times when it takes longer than other approaches, or leads you down the wrong path. Here are a few examples of when not to use TDD as your only tool.
When your code has to interact with third-party code or with the outside world (hardware, network, database, etc.), TDD may not be your best choice. When you write tests that involve interactions with external entities, you generally have to use a mocking framework (or some other test double) to stand in for those external entities in your tests.
If you fully understand and can duplicate how those external interfaces work, you can write accurate tests for the code that uses those interfaces. But most of the time, we don’t get the interactions right the first time. Perhaps the network call is non-blocking, and we didn’t realize it, so we wrote our test and our mock assuming that the call was blocking and data would be available immediately. Now our tests pass but the system fails in the real world.
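The sketch below shows how easily that assumption creeps in. It uses Python’s unittest.mock; StatusReporter and the client’s fetch method are hypothetical names, standing in for any code that talks to the network.

```python
from unittest import TestCase
from unittest.mock import Mock

class StatusReporter:
    """Code under test: asks a network client for a status string."""
    def __init__(self, client):
        self.client = client

    def headline(self):
        return self.client.fetch("/status").upper()

class StatusReporterTest(TestCase):
    def test_headline_is_uppercased(self):
        client = Mock()
        # The mock bakes in our assumption that fetch() blocks and returns
        # the payload directly. If the real client is non-blocking and
        # returns a future instead, this test stays green while the real
        # system fails.
        client.fetch.return_value = "all systems go"
        reporter = StatusReporter(client)
        self.assertEqual(reporter.headline(), "ALL SYSTEMS GO")
```

Nothing in the test verifies that the assumption is true; it only confirms that the code works against the interface we imagined.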
One way to resolve this difficulty is to write a “spike solution”. A spike is intended from the outset to be experimental or learning code, not part of the production solution. It generally does not have automated tests. The intent of a spike could be to learn how some technology works (at which point maybe you can effectively mock it and TDD your other code), or to prove that something is possible at all.
Another option is to wrap the third-party service with an API you do control. In this case, you can mock your own API for all of your other code, but you need to ensure the adapter itself works. You can then write a (relatively) simple automated test that exercises the real external interface. This test may not run automatically in your CI (perhaps it needs access to hardware, or hits a third-party API that has a per-request cost), but it can be run from time to time.
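Here is a rough sketch of that arrangement in Python. PaymentGateway, the vendor_sdk module, and the environment variables are all invented; the point is the shape: everything else in the code base depends only on the thin adapter, which is easy to mock, while a single opt-in test exercises the real service.

```python
import os
import pytest

class PaymentGateway:
    """Thin adapter: the only code that touches the third-party SDK."""
    def __init__(self, sdk_client):
        self._sdk = sdk_client

    def charge(self, amount_cents, token):
        # Translate the vendor's API into the interface the rest of our
        # code expects, so the rest of the code can mock charge() directly.
        response = self._sdk.create_charge(amount=amount_cents, source=token)
        return response["id"]

# This test talks to the real (sandbox) service, so it is excluded from the
# normal CI run and only executes when explicitly requested.
@pytest.mark.skipif(not os.environ.get("RUN_INTEGRATION_TESTS"),
                    reason="exercises the real third-party sandbox")
def test_charge_against_real_sandbox():
    import vendor_sdk  # hypothetical third-party package
    gateway = PaymentGateway(vendor_sdk.Client(api_key=os.environ["SANDBOX_API_KEY"]))
    assert gateway.charge(100, token="test-token")
```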
Another common “failure mode” of TDD occurs when you find subtle but pervasive duplication throughout your code base and want to refactor large chunks of code. For one thing, this large-scale refactoring is unlikely to fit into the schedule. Clients want features that deliver value, not nice code that does the same thing it did two weeks ago.
Large-scale refactoring by definition will change tests as well as code. If you’re changing tests and the code at the same time, the safety net of the tests is substantially less useful. The process has to become incremental: change a small piece of the code, update the tests that cover it, run the whole suite to confirm the system still works, and repeat until the refactoring is done.
The difficulty here lies in keeping a working system at all times. If you don’t need a working system, you may be able to move faster by moving directly from point A to point Z. If you must keep your system working at all times (which is at the heart of agile development anyway), you may need to morph your system in unexpected and possibly time-consuming ways to get it to keep working at points B, C, etc. So jumping straight to point Z may be faster overall, even though your system will be broken for a while. Of course, you may not be able to put Humpty Dumpty back together again. The longer you go without a working product, the harder it is to be confident that it’s working correctly when you do finish.
Overall, we wholeheartedly believe in TDD in the right situations. TDD works best when writing a pure logic function – when inputs and outputs are clearly defined. It also shines when interactions between objects are clearly defined (or need to be defined, for interactions where you control both sides). And do not forget the last step of the process: refactor! Writing tests and working code may get your product shipped today, but without refactoring, your code base is liable to become as crufty as any old legacy code. Leverage your tests to keep your system well-maintained so you can deliver value today and tomorrow. You can always reach out for some help at getstarted@219design.com.
Date published: 03/07/2022