TDD, where did ‘I’ go wrong – 7 Months on

My last blog post, some 7 months ago, was ‘TDD, where did ‘I’ go wrong’. This post revisits some of the statements I made there and the comments I received, and examines how the change has worked out.

So, to start off with some background: since changing our approach to TDD from ‘Method Based’ to ‘API Based’ testing, we have written somewhere in the region of 1,100 tests. I mention this because some readers of my first article said things such as ‘You. Are. Doomed. Change is coming’ or ‘You’re really, really going wrong’. I hope 7 months and 1,100 tests is enough time to say we have given it a good go.

So I want to get straight to the point: changing to API based testing has been FANTASTIC! Let’s get into some of the detail.


Refactor, refactor, refactor

A comment on reddit summed up my feelings perfectly: ‘If you have to change the tests due to a refactor it wasn’t a refactoring, it was a redesign’. We now have the freedom to write the tests first, write the code in whatever format / structure we like (procedural, duplicated, etc.), and then refactor it afterwards.

This freedom has allowed us to refactor code towards a more object-oriented design, without any risk. The value of this cannot be overstated!

Business based tests (naming convention)

We use Roy Osherove’s naming convention, and the change in our test approach has had a surprising effect on our test names. Consider the following setup: we have a public API that saves a user’s details, and as part of this the email must be validated, so you might create a class responsible for email validation.

Previously, a test might have been named something such as ‘SaveDetails_DetailsProvided_EmailValidatorCalled’, because the email validator was being tested in isolation with its collaborators mocked out. With API testing the name changes to ‘SaveDetails_IncorrectEmailFormat_FormatErrorReturned’. This name is a far better representation of the business requirement, and during future maintenance it will be easier to understand.
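To make the contrast concrete, here is a minimal sketch in Python, using hypothetical names (UserApi, save_details, the error value) that are not from the original system. The point is that the test exercises email validation through the public entry point, not through the validator class directly.

```python
# Hypothetical public API: callers never see the email validator itself.
import re


class UserApi:
    """A sketch of a public API that saves user details."""

    EMAIL_PATTERN = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")

    def save_details(self, name, email):
        # Email validation is an internal concern; callers only see the outcome.
        if not self.EMAIL_PATTERN.match(email):
            return {"saved": False, "error": "FormatError"}
        return {"saved": True, "error": None}


def test_save_details_incorrect_email_format_format_error_returned():
    # Named for the business requirement, not for the internal collaborator.
    api = UserApi()
    result = api.save_details("Bob", "not-an-email")
    assert result["saved"] is False
    assert result["error"] == "FormatError"
```

If the email validation were later refactored into (or out of) a separate class, this test would be untouched, which is exactly the freedom described above.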

Considered API Changes

Fixing a bug is usually simple: add a test to expose the bug, fix it, and if no other tests fail, all is good. If it is not that simple, it’s likely the business need has changed, and this now becomes obvious. This is important because the change you are making is likely a breaking change, and it may lead to a different decision being made.


Large Setup

So it has not all been perfect. A concern I raised initially was large setups, which at the time I wrote off as not being as bad as first suspected. Some readers warned me about this: ‘It can become quite a setup hell to mock everything…’, ‘then you’ll have to copy the same setup for the test…’, etc.

I must say they were right; we did get into this situation on occasion. There are, however, some ways to reduce the problem. @taimila mentioned one: ‘how I avoid mock setup hell, I must answer with separation of concerns’. I agree, and we try to take this approach, but even so we sometimes ended up there. We managed to reduce the problem further by establishing good patterns for unit testing (look out for a post on this in the coming weeks).
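One widely used pattern for taming large setups is a test-data builder. This is a sketch with hypothetical names, not the team's actual pattern: the builder centralises a valid default arrangement so each test states only what differs, instead of copying the whole setup around.

```python
# A test-data builder: sensible defaults, fluent overrides.


class User:
    def __init__(self, name, email, is_admin):
        self.name = name
        self.email = email
        self.is_admin = is_admin


class UserBuilder:
    """Builds a valid User by default; tests override only what matters."""

    def __init__(self):
        self._name = "Default Name"
        self._email = "default@example.com"
        self._is_admin = False

    def with_email(self, email):
        self._email = email
        return self

    def as_admin(self):
        self._is_admin = True
        return self

    def build(self):
        return User(self._name, self._email, self._is_admin)


# A test that only cares about the email no longer repeats the rest:
user = UserBuilder().with_email("bad-format").build()
assert user.email == "bad-format"
assert user.is_admin is False  # everything else comes from the defaults
```

When the setup changes, only the builder changes, not every test that used it.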

Number of Tests

Now that all of the tests sit at the public API level, it sometimes feels like we have a ton of tests. I am not sure this is an issue, but I wanted to call it out because it absolutely takes some management.


This has been a very significant change, and bringing the entire team on board has been key to its success. We still have a large number of ‘legacy’ tests, and it is going to take time to move these over, but all new development has taken the API approach and I would certainly recommend it.





  3. I remember starting TDD… writing tests like I read in Roy Osherove’s book, the pain of learning testing in general… then things got a little easier, but I started to feel different pains testing at the unit level. So I thought, “Oh, I’ll test at a higher level!” And those tests were painful to start, but then they got easier… and then I started to feel pains testing at that level. So I started doing both, which… yadda yadda yadda.

    Lesson learned: There is no “level” that’s right for testing. You can start at a high level, but you’ll find the need to test at a lower level. Or you’ll start at a low level, and you’ll find that you need some tests for the integrations… or surprisingly, you’ll find that what you thought was a “unit” is actually multiple units. You can experience this pain and try to find the next best answer, but it’s not going to come. I’ve found that the best way to handle this is to stop trying to force what I think is best, and just let the next simplest test drive my next simple solution. Keep your investment low so you won’t think twice about throwing it away and starting again.

    Of course, it’s easy for me to say this vague thing, but it’s not so vague the more years of TDD you gain. I’m still learning and adjusting how best to test my software. The only constant I’ve seen so far is that, no matter what level you think you’re on, it should be easy for the next developer to write the next failing test.

  4. Nice of you to write these things down; there are too many people testing at the wrong level. There might be situations where it makes sense to test a method, though, for example when implementing a really hard algorithm.

    I have come to like event sourcing and CQRS because they help me focus on writing the test at the right level. With a system built around those concepts one can write tests like: (just a sample).

    Basically all my tests follow that pattern, and I treat the application as a pure function (which I wrote about here: […]).

    Great of you to share!

  5. First, I want to point out a minor typo: ‘SaveDetails_InorrectEmailFormat_FormatErrorReturned’. I think you mean “Incorrect”.

    Second, I want to say that this video also greatly influenced the way I write automated tests, too. I’ve written tests like this for a hobby project I worked on for over a year and a half and I gotta say it worked out great. It got to the point where I could practice Continuous Delivery.

    I would say my greatest difficulty is having confidence in tests that mock out the remote dependencies. e.g., if I mock out the ORM, I fear that my code could fail in production when the real ORM is used. How did you gain confidence in these scenarios?

    The reality is I’d lack confidence in all the mocked dependencies if I were mocking every potential dependency, so this way of writing tests is actually still better in this context.

    I wrote a blog article about my experience with this way of testing myself. It’s called “How to write your automated tests so they don’t fail when you refactor” and you can read it here:

    I’d love to get your opinion on it, thanks.

    1. Thanks for sharing your thoughts, and pointing out the typo.

      I was recently discussing the exact problem you describe about mocking out the ORM: the tests pass, but the code fails against the real DB. The team I work on employs two solutions here: manual testing and automated UI testing. They are both very expensive, but for us they are part of our development process anyway. Sorry I couldn’t be more help.

      I will take a look at your blog and share my thoughts.

      1. Let me provide an alternative I heard from Gary Bernhardt. He separates all the business logic from the ORM code. This business logic creates state for the ORM to use, but the business logic isn’t aware of the ORM. The ORM logic then becomes a thin shell that is easy to thoroughly test with a few end-to-end tests.

        I’ve never tried this, so it’s just theory.
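The separation described above can be sketched roughly as follows. All names here are hypothetical, and this is only one reading of the idea: the business logic is a pure function that decides what should happen, and a thin shell is the only code that touches persistence.

```python
# Functional core: pure business logic, no ORM, trivially unit-testable.


def decide_discount(order_total):
    """Decide the discount for an order; returns plain data, touches nothing."""
    if order_total >= 100:
        return {"discount": 0.1, "reason": "bulk"}
    return {"discount": 0.0, "reason": None}


class OrmShell:
    """Thin imperative shell: the only code that talks to the persistence layer,
    so a few end-to-end tests can cover it thoroughly."""

    def __init__(self, session):
        self.session = session  # e.g. an ORM session with a save() method

    def apply_discount(self, order):
        decision = decide_discount(order["total"])  # pure call
        order["discount"] = decision["discount"]
        self.session.save(order)  # the only ORM-touching line
        return order
```

Most tests then target decide_discount directly, with no mocks at all, and only a handful exercise the shell against a real database.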
