Personally, I completely disagree with this; I've never found myself randomly changing code in a desperate attempt to get a test to pass.<p>Maybe it's because I'd been coding for years before I ever tried TDD, but when a test fails, I logically debug the code the same way I would if I weren't using TDD.<p>As far as I'm concerned, having tests just flags possible errors much quicker, and also gives me more peace of mind that my code isn't going to be riddled with hidden bugs.
The author made one slight mistake: he wrote "there is a tendency to mindlessly modify code" instead of "<i>I have</i> a tendency to mindlessly modify code".<p>Besides, it's not as if we hadn't seen this kind of behavior for decades before the invention of TDD.<p>This is just another example of a craftsman blaming his tools. TDD is not a silver bullet, but no method or tool can serve as an excuse for mindlessly poking around until something works. This isn't limited to programming, either.
I don't recall ever reading that just because you have tests, you should no longer understand the processes by which your code functions. Was this something that they've seen happen, or experienced personally?
The "bold statement" is a little too bold. It goes from:<p><pre><code> | writing code in a test-driven way bypasses
| your brain and makes you not think properly
| about what you are doing.
</code></pre>
(Test Driven Development <i>makes</i> you not think properly and <i>bypasses</i> your brain) to:<p><pre><code> | no matter which software development methods
| you use, do not forget to use your brain
</code></pre>
"Just don't mindlessly program."
TDD is good for verifying that your code handles the set of requirements given by the customer, including any edge cases that matter to them. I'd agree that 100% of tests passing doesn't mean zero bugs.<p>Nonetheless, it's still useful! You <i>can</i> still write test-driven code and use your brain; it is only slightly easier to be lazy (and specifically, lazy in a way you're not supposed to care about yet).<p>In the end, crash reports from production use will reveal any bugs that matter in the system (if any), and you can write new tests for those extra cases and make the code pass again. Combined with the rest of Agile (sorry), i.e. fast release cycles and so on, this isn't a roadblock.
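To make the edge-case point concrete, here's a minimal sketch. The function name `apply_discount` and the "price never goes negative" rule are hypothetical, invented for illustration; the idea is just that a customer requirement gets pinned down as an executable test:

```python
# Hypothetical example: a customer edge case captured as a test.
# apply_discount and the "never below zero" rule are assumptions
# for illustration, not anything from the article.

def apply_discount(price, discount):
    """Apply a discount, clamping so the price never goes negative."""
    return max(price - discount, 0)

def test_discount_reduces_price():
    assert apply_discount(100, 30) == 70

def test_discount_never_goes_negative():
    # The edge case the customer cares about: over-discounting.
    assert apply_discount(20, 50) == 0

if __name__ == "__main__":
    test_discount_reduces_price()
    test_discount_never_goes_negative()
    print("all tests pass")
```

When a production crash report later reveals a case nobody thought of, it becomes one more test in this file, and the suite guards against regressing it.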
I think the problem mostly stems from the "do the simplest thing that could possibly work"[1] methodology that some practitioners of TDD advocate over thinking about the problem and solving it properly.<p>[1]<a href="http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.html" rel="nofollow">http://c2.com/xp/DoTheSimplestThingThatCouldPossiblyWork.htm...</a>
I've always viewed TDD as a process that works for <i>some</i> people. It's always important to remember that people learn, develop and think differently. If TDD works for you, great. But do not force it upon other people, as it may not work for them.<p>(This isn't to say that unit tests are bad, but rather writing tests first may not benefit all people)
This sounds a bit like "we don't need no stinking testing", but I know the author is trying to hit at a deeper point. I only wish he had done better.<p>One of the problems here is language: TDD as a general concept can cover everything from high-level behavioral testing to a method-by-method way to design your program. There's a big difference between those two!<p>In general, of course, programming is balancing what the program is supposed to do with how the program is constructed. That's true whether you have TDD in the mix or not.
I'm inclined to agree that it is hard to create an algorithm using TDD (for example, Dijkstra's algorithm). But "the example" mentioned in the post is not grounded. It would be nice if someone had a real-world example to back up this claim; otherwise it is very easy to argue that the author is simply not applying TDD correctly.
I find TDD to be useful in two cases:<p><i>1. When I already know what I'm doing and it's just a matter of coding what's already in my mind<p>2. When I'm writing in a dynamically typed language, it forces me not to be lazy and to have adequate test coverage, since I don't have compile-time type safety</i><p>I do less TDD when dealing with a statically typed language and/or when I'm working in an exploratory mode. TDD doesn't help me when I'm just trying out different things to get going.<p>The thing that annoys me is when people don't realize that EVERY technique has caveats and promote it as a golden rule anyway. A lot of "agile" consultants preach TDD as the holy grail for writing code without any bugs.<p>EDIT: grammar
<a href="http://www.dalkescientific.com/writings/diary/archive/2009/12/29/problems_with_tdd.html" rel="nofollow">http://www.dalkescientific.com/writings/diary/archive/2009/1...</a> is a much better article about the problems with TDD.
Hacking code to fix problems isn't unique to TDD. I see people do it all the time to codebases that don't have tests.<p>If your goal is to fix this behavior, go for the root causes. TDD isn't a root cause for this particular problem.
I've been mixing in TDD and BDD for the last 1.5 years of my 11-year coding career. I can't think of any reason not to test except for laziness and an unwillingness to truly use your brain to evaluate its value.<p>Contrary to this article, one great reason is that TDD/BDD allows me to make refactors and major changes and know whether or not I broke something. I find the opinion in this article passé.<p>A perfect example for TDD/BDD is a complex REST API with dozens of endpoints where you're refactoring a piece of the authentication system. How do I know if I broke something or introduced a bug?<p>My experience is that most developers do not test, and this is exactly how complex bugs get introduced. You actually make the job more difficult for yourself, because instead of knowing YOU broke something, a bug gets introduced and you spend more time tracing the cause. I have worked at many places that have this obnoxious cycle of deploying, breaking, deploying, breaking.<p>It is irritating to see articles like this pop up, because TDD is not a school of thought or a religion. It's a purposeful tool that can and will save you time and effort, and probably impose a few good design practices along the way. I'm not saying shoot for 100% coverage; I'm happy just knowing a few complex pieces are working. And I don't always think it's a good idea to design APIs from the tests, especially when you are experimenting and researching.
I think this is an instance of a more general problem in programming, namely "Programming by Coincidence" [1]. Some people just try to solve the problem without actually thinking about it; they merely try to match the output specification.<p>[1] <a href="http://pragprog.com/the-pragmatic-programmer/extracts/coincidence" rel="nofollow">http://pragprog.com/the-pragmatic-programmer/extracts/coinci...</a>
There are papers out there that show better results with TDD. Here is one:<p><a href="http://www.infoq.com/news/2009/03/TDD-Improves-Quality" rel="nofollow">http://www.infoq.com/news/2009/03/TDD-Improves-Quality</a><p><a href="http://research.microsoft.com/en-us/groups/ese/nagappan_tdd.pdf" rel="nofollow">http://research.microsoft.com/en-us/groups/ese/nagappan_tdd....</a>
This article misunderstands TDD completely. In TDD, <i>the tests are your specifications</i>. Therefore, <i>any code</i> that passes the tests is formally correct - even though it should always be minimal (YAGNI).<p>In fact, TDD is not simply "tests first". It is: write ONE test, make it pass with the MINIMUM amount of code, refactor, loop.
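That loop can be sketched with a toy example. The `fizzbuzz` function here is hypothetical, chosen only to show one red-green-refactor iteration:

```python
# One iteration of the TDD loop on a hypothetical fizzbuzz function.
# Step 1 (red): write ONE failing test before any implementation.
def test_multiples_of_three():
    assert fizzbuzz(3) == "Fizz"

# Step 2 (green): the MINIMUM code that makes the current tests pass.
# It deliberately does nothing more than the tests demand (YAGNI);
# multiples of five wait until a test specifies them.
def fizzbuzz(n):
    if n % 3 == 0:
        return "Fizz"
    return str(n)

# Step 3 (refactor), then loop: the next test drives the next behavior.
def test_plain_numbers():
    assert fizzbuzz(4) == "4"

if __name__ == "__main__":
    test_multiples_of_three()
    test_plain_numbers()
    print("green")
```

In this reading, the tests really are the specification: any `fizzbuzz` that passes them is correct with respect to what has been specified so far.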
FTA: <i>Algorithms must be understood before being modified...</i><p>I would add to this that algorithms must be understood before being tested, something with which I suspect most TDD proponents would agree, and which would dispense with the need for the rest of the article.
Could we please stop arguing? This back-and-forth of absolutes is about as useful as political campaigning. <a href="http://blog.8thlight.com/uncle-bob/2013/03/06/ThePragmaticsOfTDD.html" rel="nofollow">http://blog.8thlight.com/uncle-bob/2013/03/06/ThePragmaticsO...</a><p>(More specifically, read everything from "The Pragmatics: So when do I <i>not</i> practice TDD?" onward.)
I agree -- I've found myself in that exact case that he described (mindlessly adding and subtracting one on various loop indices until it worked) more than once.
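For what it's worth, a test does catch that symptom, even if debugging the boundary beats nudging indices until green. A minimal sketch, with a hypothetical `sum_window` function standing in for the loop-index case:

```python
# Classic off-by-one territory: Python slice/range ends are exclusive,
# so "adding and subtracting one until the test passes" is tempting.
# The real fix is understanding the boundary: end index = start + size.

def sum_window(values, start, size):
    """Sum `size` consecutive elements beginning at index `start`."""
    # The buggy version many of us have written:
    #     sum(values[start:start + size - 1])
    return sum(values[start:start + size])

if __name__ == "__main__":
    data = [1, 2, 3, 4, 5]
    assert sum_window(data, 1, 3) == 2 + 3 + 4
    print("window sums correctly")
```

The test flags the error instantly; whether you then reason about the boundary or blindly tweak the index is up to you, which is really the article's point.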
TDD in theory is a great idea. In practice it is dreadful.<p>What has happened is that the obsession with code coverage leads developers to create a whole raft of tests that serve no real purpose, which, under TDD, then gets translated into an unworkable, unwieldy, spaghetti-like mess of code. Throw in IoC and UI testing, e.g. Cucumber, and very quickly the simplest feature takes 5x as long to develop and is borderline unmaintainable.<p>It just seems like there needs to be a better way to do this.