My last blog post, some 7 months ago, was ‘TDD, where did ‘I’ go wrong’. This post revisits some of the statements I made in that post and the comments I received, and examines how the change has worked out.
So, to start off with some background: since changing our approach to TDD from ‘Method Based’ to ‘API Based’ testing, we have written somewhere in the region of 1,100 tests. I tell you this because some readers of my first article said things such as ‘You. Are. Doomed. Change is coming’ or ‘You’re really, really going wrong’. I hope 7 months and 1,100 tests is enough time to say we have given it a good go.
So I want to get straight to the point: changing to API based testing has been FANTASTIC! So let’s get into some of the detail.
Refactor, refactor, refactor
A comment on Reddit summed up my feelings perfectly: ‘If you have to change the tests due to a refactor it wasn’t a refactoring, it was a redesign’. We now have the freedom to write the tests, write the code in any format/structure we like (procedural, duplicated, etc.), and then refactor it afterwards.
This freedom has allowed us to refactor code towards a more object-oriented approach, without any risk. This value cannot be overstated!
Business based tests (naming convention)
We use Roy Osherove’s naming convention, and the change in our test approach has had a surprising effect on our test names. Consider the following setup: we have a public API that saves a user’s details, and as part of this the email must be validated, so you might create a class responsible for email validation.
Previously a test might have been named something such as ‘SaveDetails_DetailsProvided_EmailValidatorCalled’, because the email validator was tested in isolation and its collaborators mocked out. With API testing the name becomes ‘SaveDetails_IncorrectEmailFormat_FormatErrorReturned’. This name is a far better representation of the business requirement, and will be easier to understand during future maintenance.
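The contrast between the two styles can be sketched like this (the service, validator, and email rule are hypothetical illustrations, not our production code): the first test asserts an interaction with a mocked collaborator; the second asserts the business outcome through the public entry point.

```python
import re
from unittest.mock import Mock

class EmailValidator:
    def is_valid(self, email):
        return re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", email) is not None

class UserService:
    def __init__(self, validator, store):
        self._validator = validator
        self._store = store

    def save_details(self, email):
        if not self._validator.is_valid(email):
            return "Invalid email format"
        self._store.append(email)
        return "OK"

# Method-based style: SaveDetails_DetailsProvided_EmailValidatorCalled.
# The name describes plumbing: that a collaborator was invoked.
def test_save_details_details_provided_email_validator_called():
    validator = Mock()
    validator.is_valid.return_value = True
    UserService(validator, []).save_details("ann@example.com")
    validator.is_valid.assert_called_once_with("ann@example.com")

# API-based style: SaveDetails_IncorrectEmailFormat_FormatErrorReturned.
# The name describes the business rule the user actually cares about.
def test_save_details_incorrect_email_format_format_error_returned():
    result = UserService(EmailValidator(), []).save_details("bad-format")
    assert result == "Invalid email format"

test_save_details_details_provided_email_validator_called()
test_save_details_incorrect_email_format_format_error_returned()
```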
Considered API Changes
Fixing a bug is usually simple: add a test to expose the bug, fix it, and if no other tests fail, all is good. If it is not that simple, it is likely the business need has changed, and this now becomes obvious. This is important: the change you are making is likely a breaking change, and making that visible may lead to a different decision being made.
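A sketch of that workflow, with a made-up bug (nothing here is from our codebase): suppose users reported that addresses with uppercase domains were rejected. A new API-level test exposed the bug first; the fix below makes it pass, and the existing behavioural tests confirm nothing else broke, so it is not a breaking change.

```python
import re

def is_valid_email(email):
    # Fixed version: character classes accept both cases, so
    # "ann@EXAMPLE.com" passes. (The buggy version only allowed a-z.)
    pattern = r"^[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}$"
    return re.match(pattern, email) is not None

# Regression test added first, to expose the bug.
def test_validate_uppercase_domain_accepted():
    assert is_valid_email("ann@EXAMPLE.com")

# Pre-existing behaviour still holds after the fix.
def test_validate_missing_at_symbol_rejected():
    assert not is_valid_email("ann.example.com")

test_validate_uppercase_domain_accepted()
test_validate_missing_at_symbol_rejected()
```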
Setup Hell

So it has not all been perfect. A concern I raised initially was large setups, and at the time I wrote it off as not being as bad as first suspected. Some readers warned me about this: ‘It can become quite a setup hell to mock everything…’, ‘then you’ll have to copy the same setup for the test…’, etc.
I must say they were right; we did get into this situation on occasion. However, there are some easy ways to reduce the problem. @taimila mentioned one: ‘how I avoid mock setup hell, I must answer with separation of concerns’. I agree, and we try to take this approach, but occasionally we still ended up in setup hell. We managed to reduce the problem by establishing good patterns for unit testing (look out for a post on this in the coming weeks).
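One such pattern, sketched here with hypothetical names (this is an illustration of the general test-data-builder idea, not our actual helpers): a builder holds sensible defaults in one place, so each test states only the detail it is exercising instead of copying a large setup block.

```python
class SaveDetailsRequestBuilder:
    """Centralised test setup: defaults live here, tests override one field."""

    def __init__(self):
        self._name = "Default Name"
        self._email = "default@example.com"

    def with_name(self, name):
        self._name = name
        return self

    def with_email(self, email):
        self._email = email
        return self

    def build(self):
        return {"name": self._name, "email": self._email}

# A test now declares only what it cares about; the builder fills the rest.
request = SaveDetailsRequestBuilder().with_email("bad-format").build()
assert request == {"name": "Default Name", "email": "bad-format"}
```

When a default changes, it changes in the builder once, rather than in every test that copied the setup.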
Number of Tests
Now that all of the tests sit at the public API level, it sometimes feels like we have a ton of tests. I am not sure this is an issue, but I wanted to call it out because it absolutely takes some management.
This has been a very significant change, and bringing the entire team on board has been key to its success. We still have a large number of ‘Legacy’ tests and it will take time to move these over, but all new development has taken the API approach, and I would certainly recommend it.