I recently watched a talk by Ian Cooper from NDC 2013 titled ‘TDD, Where Did It All Go Wrong’, and it has completely changed how I look at unit testing. This post looks at how I got it so wrong, and at the benefits of following Ian’s approach.
Method Based Unit Testing – So Wrong …
I have always unit tested at the method level, mocking any call outside of the method under test. There are, without doubt, benefits to this approach, for example:
- Helps identify bugs early on
- Gives me confidence to refactor inside the method
- Helps identify where a change may break existing behaviour inside that method
All of this is great, but I have also experienced and lived with the following issues:
- Tests are tightly coupled to the implementation, preventing refactoring outside of the method
- Any large refactor forces a matching test refactor, which erodes confidence and costs time
- Tests do not cover the ‘wiring’ between classes, from the public API down through all the layers of your logic
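To make the coupling problem concrete, here is a minimal sketch of the method-level style (the `OrderService` example and all its names are invented for illustration, not taken from the talk): every collaborator is mocked, so the test pins down *how* the method works rather than *what* it does.

```python
from unittest.mock import Mock

class OrderService:
    """Hypothetical service whose method we test 'in complete isolation'."""
    def __init__(self, pricer, tax_calculator):
        self._pricer = pricer
        self._tax = tax_calculator

    def calculate_total(self, order):
        subtotal = self._pricer.price(order)
        return subtotal + self._tax.tax_for(subtotal)

def test_calculate_total_calls_pricer_then_tax():
    pricer = Mock()
    pricer.price.return_value = 100.0
    tax = Mock()
    tax.tax_for.return_value = 20.0

    total = OrderService(pricer, tax).calculate_total("order")

    assert total == 120.0
    # These interaction assertions couple the test to the implementation:
    # renaming price(), inlining the tax call, or merging the collaborators
    # breaks the test even though the observable behaviour (the total)
    # is completely unchanged.
    pricer.price.assert_called_once_with("order")
    tax.tax_for.assert_called_once_with(100.0)
```

Note that the last two assertions add nothing to the check on `total`; they only verify internal plumbing, which is exactly what makes refactoring painful.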
A recent series of conversations hosted by Martin Fowler on ‘Is TDD Dead?’ heard David Heinemeier Hansson talk about ‘test-induced design damage’, highlighting that testing can often lead to poor design. On repeated occasions I have made methods public that should have been private, made them virtual when there was no need, moved logic away from where it was better suited, and made many more poor design choices, all to satisfy the need to unit test each individual method in isolation from all others.
I honestly thought these were things I had to live with to get the benefits of unit testing and TDD. Oh how wrong I was…
API Based Unit Testing – So Right …
During his talk, Ian convinced me that I had misunderstood the term ‘unit’ when it comes to unit testing. I always thought it meant you should test the smallest unit possible in complete isolation (method-based unit testing). However, he stated that a unit test should target a unit of behaviour defined by a business need.
With this understanding of a unit, there is no longer a need to test the internals; as Ian states, we should test the public APIs. The API represents the contract you have with the business (what your code is expected to do) and is unlikely to change, making it the perfect place for a unit test.
We have moved to this approach on the project I am currently working on, and the liberation I have felt when refactoring has been incredible. I can now carry out large-scale refactors without any need to update or alter my tests, and continue with confidence when my test suite passes; all in less time. The benefits of this cannot be overstated: it gives developers the ability to work with confidence and freedom, without the overhead of test maintenance and rewrites.
I also believe this approach reduces some of the issues David highlighted in his blog on ‘test-induced design damage’. Don’t get me wrong, we still have repositories that exist for the sole purpose of unit testing. I do, however, feel able to design and implement the core business logic with complete freedom. I also believe that in the future the need for repositories can be removed using tooling such as Entity Framework.
I did have some initial concerns about this approach:
- Won’t this lead to large setups?
- Won’t many tests fail if you break something, rather than just one?
- What if code is shared, do I test it twice?
These are all valid points, although now that I have tried this approach they are not as bad as they first appeared:
- The setups have turned out to be nowhere near the size I expected, because I don’t have to mock everything.
- Yes, I still think more tests will fail, but who cares? There is a problem and I know about it; that is the most important thing.
- Yes, test it twice, unless it sits behind another public API; in that case, mock it. This allows you to refactor either of the APIs with confidence.
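The third point above can be sketched as follows (names invented for illustration): when one public API consumes another, the consumed API is itself a stable contract, so it is the one place where a test double still makes sense.

```python
from unittest.mock import Mock

class CheckoutApi:
    """Public API under test; it consumes another public API (payments)."""
    def __init__(self, payments_api):
        self._payments = payments_api

    def checkout(self, basket_total):
        # PaymentsApi has its own behaviour-level tests, so here we only
        # rely on its public contract: charge(amount) -> reference.
        reference = self._payments.charge(basket_total)
        return {"paid": True, "reference": reference}

def test_checkout_charges_via_the_payments_api():
    payments = Mock()
    payments.charge.return_value = "ref-123"

    result = CheckoutApi(payments).checkout(25.0)

    assert result == {"paid": True, "reference": "ref-123"}
```

Mocking at the API-to-API boundary, rather than inside either API, means both sides can be refactored independently as long as the contract between them holds.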
On a final note, if you are writing unit tests at the method/internal level, then watch Ian’s talk; I promise you will not regret it.