I think this is a great explanation of a lot of the obvious pitfalls with "basic" TDD, and why so many people put a lot of effort into TDD without getting much return.<p>I've personally drifted away from TDD over the years for some of these reasons: chiefly, when the tests mirror the structure of the code too closely, reorganizing that code becomes incredibly painful because of all the work needed to fix the tests. I think the author's solution is a good one, though it still doesn't really answer what you do when you realize you got something wrong and need to refactor.<p>Instead, I've moved to writing some of the integration tests first, essentially defining the API and the contracts I think are least likely to change, then breaking things down into the pieces that seem necessary. I only fill in unit tests once I'm fairly confident the structure is correct and won't need major refactoring in the near future, and often only for pieces whose behavior is complicated enough that integration tests are unlikely to catch all the potential bugs (there's a rough sketch of what I mean at the end of this comment).<p>I think there sometimes needs to be a more honest discussion about things like:
* When TDD isn't a good idea (say, when prototyping things, or when you don't yet know how you want to structure the system)
* Which tests are the most valuable, and how to identify them
* The different ways in which tests can provide value (in ensuring the system is designed for testability, in identifying bugs during early implementation, in providing a place to hang future regression tests, in enabling debugging of the system, in preventing regressions, etc.), what kinds of tests provide what value, and how to identify when they're no longer providing enough value to justify their continued maintenance
* What to do when a major refactoring kills hundreds of tests (i.e. how much effort is it worth to rewrite those unit tests?)
* The fact that investment in testing is an ROI equation (as with everything), and how to weigh the true value the tests are giving you against the true cost of writing and maintaining them
* All the different failure modes of TDD (e.g. the unit tests pass but the system as a whole is broken, mock hell, expensive refactorings, too many tiny pieces that make anything hard to follow) and how to avoid them or minimize their cost<p>Sometimes it seems like the high-level goals, i.e. shipping high-quality software that solves users' problems, get lost in the dogma around how to meet those goals.
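To make the integration-tests-first approach above concrete, here's a rough sketch in pytest-style Python. Every name in it is invented for illustration (there's no real OrderService anywhere), and the toy in-memory implementation exists only so the sketch runs; the point is that the tests pin down the public contract and stay ignorant of how the internals are organized, so most refactorings don't touch them:

    # test_orders_contract.py -- illustrative sketch only; OrderService and its API are made up
    import pytest

    class OrderError(Exception):
        pass

    class OrderService:
        """Toy in-memory stand-in so the sketch runs. The real service could be
        restructured freely; these contract tests only touch its public surface."""
        def __init__(self, stock):
            self._stock = dict(stock)
            self._orders = {}

        def place_order(self, customer_id, items):
            # Reject the whole order if any line can't be fulfilled.
            for item in items:
                if self._stock.get(item["sku"], 0) < item["qty"]:
                    raise OrderError(f"not enough stock for {item['sku']}")
            for item in items:
                self._stock[item["sku"]] -= item["qty"]
            order_id = f"order-{len(self._orders) + 1}"
            self._orders[order_id] = {"customer": customer_id, "items": items, "status": "placed"}
            return order_id

        def get_order(self, order_id):
            return self._orders[order_id]

        def stock_level(self, sku):
            return self._stock[sku]

    @pytest.fixture
    def service():
        return OrderService(stock={"widget": 5})

    def test_placing_an_order_reserves_stock(service):
        order_id = service.place_order("c-1", [{"sku": "widget", "qty": 2}])
        assert service.get_order(order_id)["status"] == "placed"
        assert service.stock_level("widget") == 3

    def test_overselling_is_rejected(service):
        with pytest.raises(OrderError):
            service.place_order("c-1", [{"sku": "widget", "qty": 10}])

The unit tests I'd add later sit underneath tests like these, and only for the pieces (pricing, discount rules, that kind of thing) whose combinatorics an integration test can't realistically cover.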