No, I did not have a stroke while writing this headline, and yes, you read that right. In today’s post, I shall write about development-driven testing. Most engineers will tell you all about test-driven development and how the tests should always come before the implementation. However, sometimes it is more beneficial to write an implementation before its tests.
Say you’re discovering a new library. This library has a ton of documentation to read through before you can grasp how it works. At best, the library and its documentation are simple and clear. Simple enough that you can pick the right abstractions and the right methods from the get-go. In these best-case scenarios, you can write your tests first, straight from the well-documented behaviour.
However, that is not always the case. Sometimes the documentation on offer is missing or cryptic, to say the least. These are the times when writing the code before the tests can save time. Generally, this happens when integrating with third-party software. Hopefully, when connecting two of your own components, you have a clearly defined contract between them.
Back to the cryptic API scenario: you are after a piece of code that will have the behaviour you need in your system. I got bitten quite a few times by unclear or missing documentation when integrating with third-party systems. Sometimes I’d go through documentation that did not define exception cases, or the software behaved differently than documented. This tends to waste a lot of time when writing tests first. Following pure TDD, you write your tests and then your software to pass those tests. But if the tests relied on unreliable documentation, you get a terrible surprise when running your application for the first time: your tests are useless and the application does not work. While this does not happen most of the time, I’d rather avoid it altogether. Over time I learned I can avoid it by first proving the third-party software works as it should.
The best way to do so is by exploring, or more precisely, experimenting. Write your code and tweak it until it reaches the behaviour you need. Once done, and this is where we enter development-driven testing, you write a test. This way, you ensure that future code changes will not break the logic you just introduced.
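Here is a rough sketch of what that can look like. Everything in it is made up for illustration: VendorClient and CardDeclinedError stand in for a real third-party client whose documentation is vague about declines. You poke at the client first, wrap it the way your system needs, and only then write the test that pins the behaviour down.

```python
# Illustrative sketch only: VendorClient and CardDeclinedError stand in for a
# real third-party client whose documentation was unclear about declines.
class CardDeclinedError(Exception):
    pass


class VendorClient:
    def charge(self, amount_cents, card_token):
        # In reality this would call the vendor's API; here it simply mimics
        # the behaviour you would discover by experimenting against the real thing.
        if card_token == "tok_declined":
            raise CardDeclinedError("insufficient funds")
        return {"status": "succeeded", "amount": amount_cents}


# Written first, by tweaking against the real client until it behaved the way
# our system needs: a decline should not blow up, it should return None.
def charge_or_none(client, amount_cents, card_token):
    try:
        return client.charge(amount_cents=amount_cents, card_token=card_token)
    except CardDeclinedError:
        return None


# Written second, once the behaviour was pinned down (pytest-style tests).
def test_decline_returns_none():
    assert charge_or_none(VendorClient(), 1999, "tok_declined") is None


def test_success_returns_receipt():
    receipt = charge_or_none(VendorClient(), 1999, "tok_visa")
    assert receipt["status"] == "succeeded"
```

The test is not there to prove the vendor’s documentation right; it records the behaviour you actually observed, so a later refactor cannot silently change it.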
This is where you need to be quite careful. Writing tests after the code brings in some bias: bias from you knowing what the code does and/or should do. You may think, oh, I don’t need to test that one conditional, I know it works fine. No, write that test. Write tests against all branches of the method you wrote, as in the sketch below. This is the only way to ensure the code will always do exactly what you mean it to do. And write simple code: the greater the complexity, the more tests you will need. The more tests you need, the less you feel like writing them. The less you feel like writing them, the fewer tests you will write.
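To make the “test every branch” point concrete, here is a minimal, made-up helper with three branches and one test per branch, including the one that “obviously” works:

```python
# Made-up helper with three branches; each branch gets its own test,
# even the "obvious" one.
def should_retry(status_code: int, attempt: int, max_attempts: int = 3) -> bool:
    if attempt >= max_attempts:        # branch 1: retries exhausted
        return False
    if status_code in (429, 503):      # branch 2: transient/throttling errors
        return True
    return False                       # branch 3: everything else is final


def test_stops_after_max_attempts():
    assert should_retry(503, attempt=3) is False


def test_retries_transient_errors():
    assert should_retry(429, attempt=1) is True


def test_does_not_retry_client_errors():
    assert should_retry(404, attempt=1) is False
```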
All in all, development-driven testing can be tricky, but only if we are careless when doing it. You also need to choose when to use it. My preference is TDD when working with internal code and DDT when working with unknown code, external components and libraries. Next time you work with some external code, try to play with it before writing dozens of tests. Development-driven testing isn’t as evil as some paint it to be.
In the same way that people should try TDD and tune their use of it to what works for them, they should try DDT and tune their use of it. No, I did not come up with that one; Uncle Bob did.
“Programmers should try TDD and then tune their use of it to what works for them.”
— Robert C. “Uncle Bob” Martin, “Is TDD Dead? Final Thoughts about Teams”, 2014
Thank you for reading another one of my posts. I’ve just realised that I’ve been publishing blog posts monthly for the past twenty-six months. Thank you again for showing up in numbers; it does help me put more thoughts out there. I’m currently on track to surpass last year’s readership numbers by 5,000 reads, so keep showing up and I’ll keep them coming.
Cover by Daniel Torobekov from Pexels