TDD always felt wrong to me. And in fact, a lot of my worst code was written with a TDD-like approach.

The basic idea is: read the specs, write the tests, then write the code that passes the tests. The big problem here is that you add a level of indirection, and something may be lost in translation. That is, if the tests are wrong, then your code will be wrong too.

Since you already have the tests when you start coding, there is a tendency to forget the specs and code to the tests, since your goal is now to have "all green". And because your code (if you did it well) is very easy to test, it is tempting to just tweak it until it passes, without understanding what you are doing. I know because that's the process that led me to the terrible code I mentioned above.

For a simple analogy, TDD would be like giving students the test questions before they start studying. Sure, they will get good grades, but don't count on them studying what is not on the test, and some will simply memorize the answers. That's why, at school, you almost never have the test in advance.

Obviously, I am not against testing, especially automated unit testing, but the core idea of TDD, writing the test first, is, I think, a bad idea. The often cited argument is that because you start with a failing test (because you haven't implemented the feature yet), you are guaranteed to actually test something, but I find it to be of limited value, except maybe on very large projects where not implementing a feature at all is something that can happen accidentally.

An approach that I think works better, sketched below, is: write the code, write the tests, pass the tests, intentionally break your code, check that the tests fail, revert your changes. Code and tests can be written at the same time, ideally by different people, but if you do things in order, I would go code first.

In fact, I think the only real "advantage" of TDD is that it simply forces you to write tests. It is an anti-corner-cutting technique.
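
To make that workflow concrete, here is a minimal sketch in Python (the parse_price function and its spec are made-up examples, not from any real project): write the function from the spec, write the tests from the same spec, run them, then break the function on purpose once to confirm the tests actually fail, and revert.

    # The code, written first, directly from the spec:
    # "parse a price string like '$12.34' into an integer number of cents".
    def parse_price(text: str) -> int:
        cleaned = text.strip().lstrip("$")
        dollars, _, cents = cleaned.partition(".")
        return int(dollars) * 100 + int(cents or 0)

    # The tests, written afterwards against the same spec
    # (kept in the same file here only for brevity).
    import unittest

    class TestParsePrice(unittest.TestCase):
        def test_dollars_and_cents(self):
            self.assertEqual(parse_price("$12.34"), 1234)

        def test_whole_dollars(self):
            self.assertEqual(parse_price("$5"), 500)

    if __name__ == "__main__":
        # 1. Run: both tests should pass.
        # 2. Break parse_price on purpose (e.g. multiply by 10 instead of 100),
        #    re-run, and check that the tests now fail.
        # 3. Revert the deliberate breakage.
        unittest.main()

The deliberate-breakage step is basically a manual, ad hoc form of mutation testing, and it gives you most of the "start from a failing test" guarantee without having to write the tests first.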