TDD, where did ‘I’ go wrong

I recently watched a talk by Ian Cooper from NDC 2013 titled ‘TDD, where did it all go wrong’, and it has completely changed how I look at unit testing. This post looks at how I got it so wrong, and the benefits of following Ian’s approach.

Method Based Unit Testing – So Wrong …

I have always unit tested at the method level, mocking any call outside of the method under test. There are undoubtedly benefits to this approach, for example:

  • Helps identify bugs early on
  • Gives me confidence to refactor inside the method
  • Helps identify where a change may break existing behaviour inside that method

All of this is great, but I have also experienced and lived with the following issues:

  • Tests are tightly coupled to the implementation, preventing refactors that cross method boundaries
  • Any large refactor forces a test refactor, which undermines confidence and costs time
  • Tests do not cover the ‘wiring’ between classes, from the public API down through all the layers of your logic

A recent series of conversations hosted by Martin Fowler on ‘Is TDD Dead‘ heard David Heinemeier Hansson talk about ‘Test-induced design damage‘, highlighting that testing can often lead to poor design. I have on repeated occasions made methods public that should have been private, made methods virtual when there was no need, moved logic away from where it was better suited, and made many more poor design choices, all to satisfy the need to unit test each individual method in isolation from all others.

I honestly thought these were things I had to live with to get the benefits of unit testing and TDD. Oh, how wrong I was…

API Based Unit Testing – So Right …

During Ian’s talk he convinced me that I had misunderstood the term ‘unit’ when it comes to unit testing. I always thought it meant you should test the smallest unit possible in complete isolation (method-based unit testing). However, he stated that a unit test should target a unit of behavior defined by a business need.

With this understanding of a unit there is no longer a need to test the internals, and as Ian states we should test the public APIs. The API represents the contract you have with the business (what your code is expected to do) and is unlikely to change, making it the perfect place for a unit test.
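
To make this concrete, here is a minimal sketch in C# with xUnit (OrderService, IOrderRepository and the discount rule are my own illustration, not from Ian’s talk). The test pins down a unit of behaviour through the public API, so the private helper is covered without ever being named in a test:

```csharp
using Xunit;

// Hypothetical repository seam; the only dependency worth substituting.
public interface IOrderRepository
{
    decimal GetOrderTotal(int orderId);
}

public class OrderService
{
    private readonly IOrderRepository _orders;

    public OrderService(IOrderRepository orders) => _orders = orders;

    // The unit of behaviour: "orders over 100 get a 10% discount".
    public decimal PriceWithDiscount(int orderId)
    {
        var total = _orders.GetOrderTotal(orderId);
        return ApplyDiscount(total); // private helper, free to change
    }

    private static decimal ApplyDiscount(decimal total) =>
        total > 100m ? total * 0.9m : total;
}

public class OrderServiceTests
{
    private sealed class StubOrders : IOrderRepository
    {
        public decimal Total;
        public decimal GetOrderTotal(int orderId) => Total;
    }

    [Fact]
    public void Orders_over_100_receive_a_ten_percent_discount()
    {
        var service = new OrderService(new StubOrders { Total = 200m });

        // Assert against the public API only: ApplyDiscount can be renamed,
        // inlined or split without this test ever noticing.
        Assert.Equal(180m, service.PriceWithDiscount(orderId: 42));
    }
}
```

Because the test only knows about the public method, the internals can be reshaped at will, and the test stays green as long as the behaviour holds.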

We have moved to this approach in the current project I am working on, and the liberation I have felt when refactoring has been incredible. I can now conduct large-scale refactors without any need to update or alter my tests, and continue with confidence when my test suite passes; all in a reduced amount of time. The benefits of this cannot be overstated: it gives developers the ability to work with confidence and freedom, without the overhead of test management and rewrites.

I also believe this approach reduces some of the issues David highlighted in his blog on ‘Test-induced design damage‘. Don’t get me wrong, we still have repositories which exist for the sole purpose of unit testing. I do, however, feel able to design and implement the core business logic with complete freedom. I also believe that, in the future, the need for repositories can be removed by tooling such as Entity Framework.

I did have some initial concerns about this approach:

  • Won’t this lead to large setups?
  • Won’t many tests fail if you break something, rather than just one?
  • What if code is shared, do I test it twice?

These are all valid points, although now that I have tried this approach they are not as bad as they first appeared:

  • The setups have turned out to be nowhere near the size I expected, because I don’t have to mock everything.
  • Yes, I still think more tests will fail, but who cares? There is a problem and I know about it; that is the most important thing.
  • Yes, test it twice, unless it is another public API; in that case, mock it. This allows you to refactor either of the APIs with confidence (see the sketch below).
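
As a sketch of that last bullet (BasketService and IPricingApi are hypothetical names of mine, not from the post): when the code under test calls another public API, stub that API at its contract, because it carries its own tests:

```csharp
using Xunit;

// A second, hypothetical public API consumed by the code under test.
// It has its own test suite, so here it is mocked at its contract.
public interface IPricingApi
{
    decimal QuotePrice(string sku);
}

public class BasketService
{
    private readonly IPricingApi _pricing;

    public BasketService(IPricingApi pricing) => _pricing = pricing;

    public decimal TotalFor(params string[] skus)
    {
        decimal total = 0m;
        foreach (var sku in skus)
            total += _pricing.QuotePrice(sku);
        return total;
    }
}

public class BasketServiceTests
{
    private sealed class CannedPricing : IPricingApi
    {
        public decimal QuotePrice(string sku) => 10m; // canned answer
    }

    [Fact]
    public void Totals_are_summed_across_all_items()
    {
        var basket = new BasketService(new CannedPricing());

        // Either API can now be refactored behind its contract
        // without breaking the other's tests.
        Assert.Equal(30m, basket.TotalFor("a", "b", "c"));
    }
}
```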

On a final note: if you are writing unit tests at the method/internal level, then watch Ian’s talk. I promise you will not regret it.

UPDATE: This has sparked some really interesting discussions over on Hacker News and the reddit programming and coding tags. Worth a read, and maybe another post addressing some of those concerns.

Comments

  1. Interesting article. I have a question, if you have the time to answer it. Let’s assume I have a Web Service that receives a POST request to update a User’s details. Processing the request involves checking that the request is validly constructed (RequestValidator class), checking that the User is authentic (UserAuthenticator class), and finally updating the User’s details in the DB.

    My question is… Should I not write Unit Tests for the RequestValidator and UserAuthenticator classes because they are not Public?

    1. Thanks. I believe you should unit test them when you test the public API that calls those classes. This gives you the freedom to change how you do request validation and simply re-run your tests, for example. Hope that helps.

      1. Alex Howle:

        OK. So they will be unit tested as a by-product of testing the layer just below the HTTP API? Never tested directly, i.e. RequestValidator is never the system under test?

      2. Absolutely. If you test it directly you will have coupled your tests to your implementation.
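
To make the thread above concrete, here is a rough sketch of how the commenter’s example might look (the class shapes are my guess, not code from either party). RequestValidator and UserAuthenticator are never the system under test; they are driven entirely through the public handler:

```csharp
using Xunit;

public class UpdateUserRequest
{
    public string UserName { get; set; }
    public string AuthToken { get; set; }
}

// Internal collaborators: exercised only through the public handler below.
internal class RequestValidator
{
    public bool IsValid(UpdateUserRequest request) =>
        !string.IsNullOrEmpty(request.UserName);
}

internal class UserAuthenticator
{
    public bool IsAuthentic(UpdateUserRequest request) =>
        request.AuthToken == "valid-token";
}

// The public API: the contract the tests are written against.
public class UpdateUserHandler
{
    private readonly RequestValidator _validator = new RequestValidator();
    private readonly UserAuthenticator _authenticator = new UserAuthenticator();

    public bool Handle(UpdateUserRequest request)
    {
        if (!_validator.IsValid(request)) return false;
        if (!_authenticator.IsAuthentic(request)) return false;
        // ... update the user's details in the DB here ...
        return true;
    }
}

public class UpdateUserHandlerTests
{
    // RequestValidator is covered as a by-product: the test drives it
    // through the public API, so its internals can change freely.
    [Fact]
    public void Rejects_a_request_with_no_user_name()
    {
        var handler = new UpdateUserHandler();

        var invalid = new UpdateUserRequest { UserName = "", AuthToken = "valid-token" };

        Assert.False(handler.Handle(invalid));
    }
}
```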

  2. Nice post. I have recently come to the same conclusion. Unfortunately the misconception of the unit is not rare at all, but I guess it’s part of the learning curve. You must do something too much to find its limits.

    I have also blogged about this subject, and another post covers how I split my applications into manageable units for testing. Maybe you will find those interesting. My blog is at http://www.taimila.com

    1. Thanks Lauri, I will be sure to check out your article.

  3. Isn’t this the difference between TDD and BDD?
    I recently participated in an edX course (CS169x) and they make a clear distinction between the two.
    The two techniques aren’t incompatible, and both have a part to play in refactoring legacy code.

    1. I don’t believe this is BDD. My understanding is that BDD is more of an approach to TDD, rather than a method of writing the unit tests themselves. Certainly, using BDD you may well end up with the same unit tests as this approach, but how you arrive at them will certainly be different.

  4. I also watched the session and lived the same ‘Aha’ moment :). Great article! You verbalized perfectly what I have been thinking and feeling since I watched the talk ^_^.

    It is very interesting to hear that you have started applying it right away in your current project. How do you define units? From a public API to a repository?

    There’s another related talk, from Steve Freeman, that you may like: “TDD, that’s not what we meant” https://vimeo.com/83960706

    1. Hi Jaime, glad you enjoyed it. Our unit tests do target the API through to the repository, but I would be reluctant to define that as the ‘unit’. The unit we test is a single piece of functionality that the API is expected to undertake.

      I am really looking forward to watching Steve’s talk, thanks for the great link!

  5. Well, if you test from the endpoints, exercising a lot of different methods behind the scenes, isn’t that integration testing rather than unit testing?

    Also, the main reason I like isolating methods in testing (and while all the problems you mention are real) is that it lets me immediately see the cause when a test fails. Isn’t it harder, with your new way of proceeding, to find exactly where a bug or failure comes from?

    1. For me, the difference between a unit test and an integration test is that integration tests include external parts like the database and maybe the web service framework. Unit tests mock out those external dependencies of your application, concentrating on the actual application code, no matter how many classes that “unit” happens to span. Of course, this is just one definition and there are many others.

      I was also afraid of losing the pinpointing power of class-level unit tests. However, if you run your test suite often (after every few to a dozen lines), you can be sure that if a test fails, it’s because of the lines you just wrote. So you know exactly where the issue is.

      1. That’s quite a paradigm shift. I’m not sure I would use it, but this has interesting properties.

        For example, you could use the same test suite for unit and integration testing, injecting mocks in the context of unit testing and just letting it flow in the context of integration testing.

        In my current (I think somewhat conventional) paradigm, I only mock in the case of unit testing, so when testing methods individually. One effect of that is that I only run those tests in my development environment, plus the integration test that seems directly related. The full integration suite is only run on the continuous integration server (because it takes too much time to run). As a result, if there are regressions in an unexpected part of the application, I only see them when I get the CI results, and I have to go back to it even if I have already begun to work on another feature.

        With your definition of unit testing, we could run the whole suite every time in the development environment, as everything would be mocked out, so it would be fast. CI would still have to run to make sure the mocking did not cause us to miss problems, but it would still catch more problems earlier.

        Now, I suppose the biggest problem is the reason why it is generally recommended not to mock anything in the usual sense of the term “integration testing”: as there are a lot of classes/things interacting with each other, it can become quite a setup hell to mock everything interacting with the database (for example). There are many classes that use the database, some of them using the database to instantiate other classes that use the database too, etc.

        Did you find a way to solve this, or is there any reason specific to what you work on that makes it a non-issue? (Or maybe you’re just fine with it?)

    2. I can’t reply to the correct comment because we have reached the limit of nested comments. 🙂

      Thanks for taking the time to reply. I have worked on projects similar to your description. We wrote tests per public method and executed integration tests on CI, for exactly the same reason: the integration tests took too much time. This also applies even if you make the scope of the unit (in unit tests) a bit larger.

      I have noticed that, depending on the application architecture, different testing strategies make sense. For example, if you use the repository pattern to abstract your data access from the rest of the application, then there are not that many classes that interact with the database directly. So in unit tests you can inject a mock repository as a substitute for the real thing. Using dependency injection patterns helps a lot when you need to mock the “far ends” of the call stack. For example, I still mock email sending even in the integration tests.
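
A minimal sketch of that idea (IUserRepository and UserDetailsService are invented names of mine, not from the comment): the repository interface is the seam, and the unit test swaps in an in-memory substitute via constructor injection:

```csharp
using System.Collections.Generic;
using Xunit;

public class User
{
    public int Id { get; set; }
    public string Email { get; set; }
}

// The seam: the only place the application touches the database.
public interface IUserRepository
{
    User GetById(int id);
    void Save(User user);
}

public class UserDetailsService
{
    private readonly IUserRepository _users;

    public UserDetailsService(IUserRepository users) => _users = users;

    public void ChangeEmail(int userId, string newEmail)
    {
        var user = _users.GetById(userId);
        user.Email = newEmail;
        _users.Save(user);
    }
}

public class UserDetailsServiceTests
{
    // In-memory substitute: no database, so the suite stays fast enough
    // to run after every few lines of code, as described above.
    private sealed class InMemoryUsers : IUserRepository
    {
        private readonly Dictionary<int, User> _store = new Dictionary<int, User>();
        public User GetById(int id) => _store[id];
        public void Save(User user) => _store[user.Id] = user;
    }

    [Fact]
    public void Changing_an_email_persists_the_new_address()
    {
        var repo = new InMemoryUsers();
        repo.Save(new User { Id = 1, Email = "old@example.com" });

        new UserDetailsService(repo).ChangeEmail(1, "new@example.com");

        Assert.Equal("new@example.com", repo.GetById(1).Email);
    }
}
```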

      To answer your question about how I avoid mock setup hell, I must answer with separation of concerns. I know it’s a boring and abstract answer. But if you split your application into parts with clear APIs between them, then mocking out any of those parts becomes easier. This applies to the data access layer too.

      I explain my current understanding of meaningful units in a blog post (http://www.taimila.com/?p=1516) and it makes no sense to rewrite it here. So if you are interested, please take a look!

  6. What do you define as part of the public API? Are you referring to testing your entry points for user operations? For example, when a user clicks a button to do X, do you only test the X operation, mock the data, and move on?

    1. Hi Brad. In the web application I work on, these are the public methods on the model/business logic layer that are called from the web layer.

  7. You said:

    “With this understanding of a unit there is no longer a need to test the internals, and as Ian states we should test the public APIs. The API represents the contract you have with the business (what your code is expected to do) and is unlikely to change, making it the perfect place for a unit test.”

    You. Are. Doomed.

    Change is coming. You don’t know when it will come, or where it will come from… but it is coming. One small change is all that you need to blow your assumptions up. And with your approach, where the “unit” is one tight ball of different requirements and needs, your tests will fail you.

    I say this from personal experience, going through this same cycle. It’s great and awesome at the beginning of a feature, but I’ve found you can’t get around complexity – both in test code and production code. If your tests are complex (which they will necessarily be if you’re testing many things), they will fail.

    1. I imagine that what you are referring to is a change of requirements/expected behavior? If so, this is what I would class as a rewrite and not a refactor, and it therefore results in the need to update or add new tests. There is no magic solution here; the expected behavior of your application has changed.

      1. Right, a change of requirements and expected behavior. They *always* change… so why do something that doesn’t work well for those?

        Take the example someone asked above… a working API that is comprised of a request validator, a user validator, and the DB CRUD operation. Three separate things working together for a single purpose, accessed through a single API endpoint that you test as a whole.

        To build your first test, you’ll have to build a full, valid request, a valid user request, and then assert that the expected database operation occurred. Then you’ll have to copy the same setup for the test that verifies the result the API returns. Then you’ll verify the request validation by tweaking the valid request to make it invalid. Then you’ll verify the user validation by tweaking the user request to make it invalid. Then you’ll… etc.

        This is somewhat doable at the beginning, which will be enough for us to declare on Twitter that yes!!! This works, we should all do it this way!!! (notice: I’m including myself).

        But then the client will ask for a tweak to the user validation. Or the user will want an email fired after the DB operation. Or the user will ask for a tiny change that’s so specific to just one scenario, you’ll have a relatively HUGE test for a tiny assert… or you’ll go the Rails Way route in that you won’t even bother testing it… because it’s so insignificant, right? You’ll either have to brute-force this painful testing mechanism or give up testing.

        If you want proof of this, look at a wide array of Rails applications built on the architecture that DHH advocates. Martyn, many Rails devs pick and choose what to test, making grand rules like “Oh, we don’t test controllers, we test models”, or they test their business logic through the “public API”, which tends to be a web page. The tests are a disaster; they’re usually given up on by “junior” devs who can’t make sense of them… it’s just a nightmare. But oh, that glorious first couple of months, when we went so fast!

        If the changes I mention above are going to cause you to rewrite, you’re going to go pretty slow.

    2. If the requirements/expected behavior of your API change, then of course you have to update your test suite. There is no way around this, whatever testing approach you use! (Unless you don’t test, of course.) If you make the decision to write a test suite in any shape or form, this is the maintenance cost you have to accept.

      You make a valid point about the setup size, and I touched on this in the article. It is something you have to maintain, and with some sensible patterns/approaches (one is sketched below) I don’t think it is all that bad.
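
One such pattern, offered as my own illustration rather than anything from the post, is a test-data builder; this sketch reuses the hypothetical UpdateUserRequest type from the earlier handler example:

```csharp
// A test-data builder: defaults describe a valid request, so each test
// states only the one detail it cares about and setups stay small.
public class UpdateUserRequestBuilder
{
    private string _userName = "alice";        // valid by default
    private string _authToken = "valid-token"; // valid by default

    public UpdateUserRequestBuilder WithUserName(string userName)
    {
        _userName = userName;
        return this;
    }

    public UpdateUserRequestBuilder WithAuthToken(string authToken)
    {
        _authToken = authToken;
        return this;
    }

    public UpdateUserRequest Build() =>
        new UpdateUserRequest { UserName = _userName, AuthToken = _authToken };
}

// Usage: one tweak per test, everything else stays at the valid default.
// var invalid = new UpdateUserRequestBuilder().WithUserName("").Build();
```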

      I can’t see my opinion shifting much even in three months, but we will see 🙂

      1. If your opinion doesn’t shift, I’d consider that a problem. I’ve been doing this for years and I still find reasons to move around. I became frustrated with the “detached” nature of all these classes running around, all TDD’d individually but no “big” test between them. So I started doing BDD focused on the “public API” of an app. I liked that, thought I had figured it out, but the time I was investing in those tests made the TDD’d tests seem redundant, so I started skimping on them… then I started getting change requests that were so microscopic in nature, my “public API” integration tests felt like using a wrecking ball to hammer a nail. So I’d try to go back to unit testing, but you know… no matter how good you think you are at breaking up code, code that wasn’t TDD’d is never quite right.

        Today, I’ve swung back to the TDD’ing small methods and classes to do small things. In lieu of most integration tests, we’ve gone hyper-mad about reporting. Most things that happen in our apps are tracked and reported back to the devs and clients. It beats our frustration about maintaining complex integration tests (we have very few) and about the problems of passing integration tests that still fail on production due to the difference in environment (which was more prevalent than our TDD’d bits failing when put together). If you provide solid data to clients, a lot of concerns go away.

        But good luck… you’ll need it! 😛

  8. Marcel Popescu:

    “no matter how good you think you are at breaking up code, code that wasn’t TDD’d is never quite right”.

    This is my experience also (and for some reason, I always think “this time it will be different”… argh!). I have also found that the more precisely something is specified (tested), the better. I might start with a couple of integration tests to give me an idea of how I think the overall system will look, but in my opinion focused tests are better.

    1. “Code that wasn’t TDD’d is never quite right”: I have a couple of problems with this statement. How do you measure ‘right’? And even if you have a method of measuring it, do you really believe that something TDD’d is always more right than code that was not?

      I would say that the more precise the test, the more difficult it makes any significant refactor of your code, by forcing you to change your tests. I guess we will just have to disagree that the more precise the test, the better.

