Commenting on TDD stories here is historically a bad practice, but I'll add my input anyway.<p>I have never let my teams go full TDD. The reason is that in all my experience, TDD sacrifices a lot of velocity for the sake of automated tests. When I hear about the reduction in total bugs injected, it is a "duh" moment. The fastest way to make a team inject 30% fewer bugs is to have them write 30% less code. That isn't snarky, it's true.<p>Automated testing is one of the many tools available to software engineers, and it is a valuable one. Unfortunately, TDD is too much of a good thing. It relies so heavily on automated testing that it ventures far into the realm of diminishing returns.<p>Once, in an argument about TDD, I said it was akin to having someone build a shed. But upon checking in on them, you saw they were using a hammer to smash screws into boards. When you ask them what they are doing, they tell you it is Hammer Driven Construction. It is perhaps overly harsh, but it reinforces the point: tools have a place. Automated tests really shine on mission-critical logic that does not get rewritten often. Use them where they make sense; I wouldn't recommend using them ubiquitously.<p>Then again, I also recommend having fun coding. So I suppose the actual message here is: do what makes you successful, not what comments or studies say.
I'm definitely in the TDD-is-not-a-one-size-fits-all programming style camp, and I'm glad to see a study that supports that conclusion. I was at RailsConf when DHH said his bit in 2014. My office and I followed the subsequent debates between him and Kent Beck (since my dev group was largely pro-TDD). Lots of anecdotal arguments. It's nice to see some more quantitative data on this!<p>In my programming experience I've found that I prefer to write tests AFTER I develop a new feature. Oftentimes the implementation is in such flux that continually updating the test as I go along is tedious and kills the creative flow.<p>However, when it comes to fixing bugs in existing software, I find it more helpful to write a test that duplicates the bug FIRST, then code the solution.<p>If anything, the reason to recommend TDD is simply that it enforces writing tests to begin with. It's so easy to get a feature working and gloss over testing it.<p>EDIT: What's up with liquidise's statement about commenting on TDD stories being bad practice? Do the TDD fanatics downvote everything anti-TDD to hell?
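For the bug-fixing workflow, something like this (a minimal Python/pytest sketch; <code>paginate</code> and its integer-division bug are made up for illustration, not from a real codebase):
<pre><code>def paginate(items, page_size):
    # Buggy implementation: integer division silently drops the
    # final partial page.
    n_pages = len(items) // page_size
    return [items[i * page_size:(i + 1) * page_size] for i in range(n_pages)]

# Step 1: write a test that duplicates the reported bug FIRST.
# It fails against the code above, which proves it reproduces the bug.
def test_partial_last_page_is_kept():
    assert paginate([1, 2, 3], page_size=2) == [[1, 2], [3]]

# Step 2: code the solution, e.g. ceiling division:
#     n_pages = -(-len(items) // page_size)
# The same test now passes -- and guards against regressions.
</code></pre>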
Before I go in, I will state that 99% of the time I'm a TDD hater. Actually, I don't even like writing tests after the fact, because I just like to build and move on.<p>When I started learning Rails, I could never understand why the Ruby on Rails tutorial insisted on walking newbies through TDD, and I skipped all the chapters where they start talking about tests. I still think it's a bad idea to make newbies do all the weird TDD stuff when they don't even know how to build something.<p>I'm so opinionated about this that most people around me know it. And in most cases it works without needing to write any tests. Even if something fails, I can quickly patch it. As long as I wrote the app in a nicely modular way, I've not had much of a problem.<p>That said, right now I'm working on a JS library. And believe it or not, I AM doing TDD right now. I can't believe it myself.<p>I think in cases where the logic involves a lot of intricate details, it's impossible for me to write something without writing tests. I'm not talking about simple web apps. I'm talking about stuff like template engines and parsers.<p>My current setup: I write a test and document it before I write a function. That way I don't get carried away while implementing, and I know exactly what I'm trying to build. Then I write another function that utilizes the function I just wrote, and so forth. This way, when the next function doesn't work for some reason, I know exactly where something went wrong. Instead of going back and debugging every single function used along the way, I know it's the most recent one that's causing the problem.<p>So my conclusion: you probably don't need to write tests for all your stuff, but there are indeed cases where you will NOT be able to proceed without writing tests.
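Roughly what that setup looks like, as a Python sketch for brevity (my actual library is JS; <code>tokenize</code>, <code>render</code> and the tiny <code>{{name}}</code> syntax are invented stand-ins):
<pre><code>import re

# Step 1: the test (and the docstring) exist before the body is written,
# so I know exactly what I'm trying to build.
def test_tokenize_splits_text_and_variables():
    assert tokenize("hi {{name}}!") == ["hi ", ("var", "name"), "!"]

def tokenize(template):
    """Split a template into literal strings and ("var", name) tuples."""
    parts = []
    for piece in re.split(r"(\{\{\w+\}\})", template):
        if piece.startswith("{{"):
            parts.append(("var", piece[2:-2]))
        elif piece:
            parts.append(piece)
    return parts

# Step 2: the next function builds on one that is already proven,
# so a failure here points at render(), not tokenize().
def test_render_substitutes_variables():
    assert render("hi {{name}}!", {"name": "HN"}) == "hi HN!"

def render(template, context):
    out = []
    for tok in tokenize(template):
        out.append(context[tok[1]] if isinstance(tok, tuple) else tok)
    return "".join(out)
</code></pre>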
Oh man, it is really funny that he ends by telling people to read the study.<p>To clarify, the linked study is attempting to replicate <a href="https://dl.acm.org/citation.cfm?id=1070834" rel="nofollow">https://dl.acm.org/citation.cfm?id=1070834</a>, THE seminal study in Test Driven Development. To be more precise, it was replicating an existing replication of that study, which had failed to reproduce the original results. They were trying to modify the design to account for issues in the experimental setup that may have made the earlier replication inconclusive.<p>This is significant because if you were not aware of the failed replication, and believed that TDD was scientifically supported as more productive because of that original study, then you SHOULD be reconsidering its place in your development process. If that isn't the case, your opinion is unchanged by these particular results (even in the article inspiring this one, the author admits that their opinion was already based on a much more thorough analysis; see <a href="http://neverworkintheory.org/2016/10/05/test-driven-development.html" rel="nofollow">http://neverworkintheory.org/2016/10/05/test-driven-developm...</a>).<p>Now what I want to know is why people insist on writing articles in this awful conversation format. It wastes a lot of words to make a simple argument poorly.
tl;dr: someone ran a study whose methodology confirmed that working in small chunks and writing tests as you go is good, but found that it matters very little whether you write the tests before the small chunk of code or after it.
In all discussions about TDD, it is important to distinguish between having an automated test suite for your code which is run frequently, and writing your tests <i>first</i> - which is what TDD is, by definition.<p>It is possible to advocate for the former while thinking the latter is consultantware snake oil. (my position, fwiw)
The author was going somewhere when he began writing about what a developer is thinking about, but, perhaps because he was focused on vindicating TDD, he never arrived there.<p>A developer who is writing unit tests must have a good idea of the purpose of the code under test, so she is thinking about requirements. Furthermore, if she is writing unit tests for small components (which will often be the case, on account of everything being done in short cycles), then much of that purpose is contingent on other aspects of the design and on how it is all supposed to work together: in other words, she is thinking about design.<p>If you don't spend some time thinking ahead about big-picture requirements and design issues, you are in danger of going a long way down a dead end.
I thought that TDD morphed into ending up with a regression/integration/confirmation test suite, instead of using tests as specifications written prior to writing the product. And even hundreds of thousands of tests won't help you in very advanced applications like cloud/cluster infrastructure. Sometimes it's simply too difficult, if not impossible, to come up with tests: imagine the observer effect when your cluster deadlock happens only in certain rare nanosecond windows, and adding a testing framework makes you miss those windows, so the problem never happens. And the people with the mental capacity to write such tests (e.g. Google/FB-level) are better utilized writing the product itself.
TDD presents a paradox that requires split-brain thinking: when writing a test, you pretend to forget what branch of code you are introducing, and when writing a branch, you pretend to forget you already knew the solution. It is annoying as hell.<p>You CAN indeed cover all your branches with tests afterwards. You can even give that a fancier name, like "Exploratory Testing". It may be more boring or tedious, but it is a perfectly valid way to ensure coverage when needed.<p>TDD was great for popularizing writing tests first; however, I much prefer the methodology called CABWT - Cover All Branches With Tests. Let the devs choose the way to do it, because not everyone likes these pretend games.
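CABWT in code, for the record (a small Python example I just made up; the point is only that the tests come after and still hit every branch):
<pre><code>def shipping_cost(weight_kg, express):
    # Three branch points, written first, with no tests in sight.
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    base = 5.0 if weight_kg < 1 else 5.0 + 2.0 * weight_kg
    return base * 2 if express else base

# Tests written AFTERWARDS, one per branch:
import pytest

def test_rejects_non_positive_weight():
    with pytest.raises(ValueError):
        shipping_cost(0, express=False)

def test_light_parcel_flat_rate():
    assert shipping_cost(0.5, express=False) == 5.0

def test_heavy_parcel_per_kilo():
    assert shipping_cost(2, express=False) == 9.0

def test_express_doubles_the_price():
    assert shipping_cost(2, express=True) == 18.0
</code></pre>
Run it under coverage.py's branch mode (e.g. <code>pytest --cov --cov-branch</code>, with pytest-cov installed) and you get a mechanical check that no branch was missed, without any pretend games.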
The author is only partially right that TLD (test-last development) is "doing TDD in your head", since for me it works at a much more abstract level: a general idea, a concept, of what I want to achieve. When using TDD you need to come up with the very specific results that you will test, and you then need to implement those specific tests, down to the last line of code. This means that if you make any changes to the logic afterwards, you have to throw away your pre-written tests and write new ones; the time spent on writing them was wasted.<p>TLD is much more flexible and easier to update: no code is thrown away if you change something. Before I start, I just need to decide what I'm trying to solve with my current block of code, and later I write a test to check whether I did it properly. Then I do the next block of logic, and the next test. Since code blocks are directly related to the steps in my logic, it's very natural to come up with tests for them: just check that things work as you planned.<p>If in the middle of that work I realize that I need to do something in a completely different way, there are no pre-written tests, so no time was wasted on coding tests that were never going to be used. And, at least for me, this kind of situation happens a lot; I often refactor and improve things as I work on them, so TLD is a much more suitable approach for me.
tl;dr: the recent studies found that whether you test first or last doesn't matter, provided you're frequently flipping between writing a test and writing code.<p>The author thinks that TDD is preferable because it helps you maintain discipline.<p>I personally think it's worthwhile beyond that, because it means you design the API before implementing it, making API design mistakes cheaper to fix. IIRC this aspect wasn't actually tested in the studies (API signatures were given up front).
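To make the API-design point concrete (a hypothetical Python sketch; <code>RateLimiter</code> and its signature are invented here, not taken from the studies):
<pre><code># Written BEFORE RateLimiter exists: the test is where you find out
# whether the API reads well at the call site, while it's still cheap
# to change.
def test_rate_limiter_allows_then_blocks():
    limiter = RateLimiter(max_calls=2, per_seconds=60)
    assert limiter.allow("user-1")
    assert limiter.allow("user-1")
    assert not limiter.allow("user-1")  # third call inside the window

# The implementation comes second, shaped to fit the agreed signature:
import time
from collections import defaultdict, deque

class RateLimiter:
    def __init__(self, max_calls, per_seconds):
        self.max_calls = max_calls
        self.per_seconds = per_seconds
        self._calls = defaultdict(deque)

    def allow(self, key):
        now = time.monotonic()
        window = self._calls[key]
        # Evict timestamps that have fallen out of the sliding window.
        while window and now - window[0] > self.per_seconds:
            window.popleft()
        if len(window) < self.max_calls:
            window.append(now)
            return True
        return False
</code></pre>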
Let's say I have a new theory, called Understanding Driven Development. The system says:<p>It's a bug if someone needs to change code and they, at any moment, see code they don't understand. Stumbled into the wrong place? File a bug for better organizational notes. Don't understand the code you need to touch? Understand what you see before you make a single change. Changed code without updating the docs, or found documentation and code out of sync? It's a bug, and changing one to match the other _without detailed understanding_ is a bug too!<p>Now, that seems reasonable. And suppose a study comes out saying that, on average, participants can't make program changes faster when given identical code with more (accurate and non-trivial) comments. That wouldn't mean UDD doesn't work: the study doesn't test it on real, full-size applications; the code was the same even though clarity of code is one of the goals of UDD (one of the core claims is that UDD gets you better code to begin with); and it focuses on a tiny test of something not necessarily core to the UDD mindset.<p>But it would still be evidence that at least one claim I've made is false. In fact, that study would be enough for me to throw the whole idea set in the garbage.
I work at ThoughtWorks. TDD is central to everything that we do. That said, like anything else, TDD done to an extreme is probably a bad thing (too much time spent on tests rather than implementation) and it not being done at all is also usually bad (too much time fixing bugs that could have been caught by tests written beforehand).<p>Balance is key.
Rather than writing tests first, I prefer to plan out the interactions between all the components in large projects. This shows you how all the pieces interact and what cases each functional unit needs to handle. After this, I sit down and write all the code.<p>Once I know the organization of the source, I write out each functional unit of the code one at a time. As I go, I write the test code for each bit of my source. After this I integrate every functional unit.<p>If a change is needed, I go back to the drawing board and find a better overall organization. This happens often, due to either performance constraints or the need to abstract a section further.<p>After this I'd consider embedding a unit test suite.<p>Works great for small to medium projects.
Previous 300+ comment thread which referenced the actual paper, not a blog post about it:<p><a href="https://news.ycombinator.com/item?id=12740456" rel="nofollow">https://news.ycombinator.com/item?id=12740456</a>
For those like me who enjoy HN but aren't s/w developers:<p><a href="https://en.wikipedia.org/wiki/Test-driven_development" rel="nofollow">https://en.wikipedia.org/wiki/Test-driven_development</a>
Careful -- in these studies the subjects were writing their tests before writing the code.<p>In practice there are 'test-heavy' devs who use factory data and the test suite to run skeleton code with crashpoints, switching actively between test and implementation files.<p>That style has tests and implementation being written in parallel, versus strict TDD, which has you finish the tests before writing the program logic.<p>Most test suites depend not just on functional requirements but also on implementation details, so it seems obvious that tests-before-logic development is inefficient.
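For reference, that 'test-heavy' style looks something like this (Python sketch; the factory helper and the tax rules are invented for illustration):
<pre><code>def make_order(total=100.0, country="US"):
    # Factory data: one line to get a realistic-enough object.
    return {"total": total, "country": country}

def order_tax(order):
    if order["country"] == "US":
        return order["total"] * 0.25
    # Crashpoint: running the suite tells you exactly which part
    # of the skeleton you haven't written yet.
    raise NotImplementedError("non-US tax rules not written yet")

def test_us_tax():
    assert order_tax(make_order()) == 25.0  # already green

def test_eu_tax():
    # Red with NotImplementedError until the branch is filled in;
    # tests and implementation grow in parallel, not tests-first.
    assert order_tax(make_order(country="DE")) == 19.0
</code></pre>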
I must be one of the very few people who can write working, mostly bug-free code without writing any kind of test. Writing tests feels like the most wasteful and possibly harmful thing to me (like when people force in dependency injection and the like where it's otherwise unneeded).<p>I don't really know what to make of the situation. Is this how it has always been? Do most software engineers really have no idea what they're doing?
You could just as well write "Methodology X doesn't always work". All methodologies work well for some situations and not for others. In my view TDD is great for a lot of simple things and algorithms, and you can structure your code in a way that makes most of it inherently testable. But when things are so complex that you don't even know the correct architecture upfront, TDD is a killer.
That blog post could pretty much apply the same arguments to itself. And who knows whether Bob's experience is simply correlation, not causation. Perhaps Bob is just a smart, meticulous engineer, and no matter how he went about his dev work, the quality would be good regardless.
Test-first was promoted as the secret sauce that made TDD so much better than anything else, so this is at best a qualified vindication. But I do think (from my own experience) that writing down what I am thinking does help me see flaws I had overlooked.
Good article; I would add that a study based on "21 graduate students" is hardly representative of the software developer population...
(to nobody in particular)<p>Please read the article before commenting.<p>From some of the comments, it's not totally clear that people have actually read it.<p>It's a good one, please do.