Great article, and I'm somewhat surprised at how often I have to repeat the things mentioned in it -- heck, sometimes it's difficult to even convince developers of the value of unit testing in general.<p>I'm a huge advocate of automated testing, and with the available tools, like Docker, it's relatively painless to put the pieces together for automated testing. Often the tooling you use to enable automated testing is tooling you end up needing <i>anyway</i> -- it's dual purpose. Before the "first run" of a set of code, I'll create Dockerfiles that make up a complete, local development instance of an application, along with some boilerplate tooling I include to make debugging easier on me. When setting up the production build, the final version is usually <i>this same environment</i> with fewer lines in the file. Because my environments tend to be similar, I have a zsh script that strips out lines in the Dockerfile to get me 90% of the way to a production container.<p>For me, it's <i>always</i> worth it. I came to this conclusion after spending a few months <i>forcing</i> myself to test rigorously[0], starting with unit tests written often and early and ending with a small number of integration tests and a much smaller number of end-to-end tests. I don't find any of these particularly difficult to write.<p>The benefits, however, are vast:<p>(1) Avoiding the debugger time-sink: The #1 thing I always come back to is that I generally end up <i>never</i> having to fire up a debugger. I noticed that every time I encountered a bug in poorly covered code, the first instinct was to attach a debugger and peek at locals to see what was going on, and that rarely took less than an hour of troubleshooting.
Sometimes you get lucky and find more than one issue in that debugging session, but more often it meant burning an hour on every bug, and way too often it was an hour spent debugging <i>production code</i> while the bug was <i>customer impacting</i>[1]. Meanwhile, it's rare for an automated test to have a time cost that high.<p>(2) Refactoring: Since "premature optimization is the root of all evil", fixing a performance bug necessarily means injecting complexity into a running codebase, and that often comes with high-impact refactoring. Unit tests, specifically, are incredibly helpful here. This is often an argument <i>against</i> integration/end-to-end test automation, since refactoring regularly breaks these brittle tests; in practice, though, I've found they survive at least half the time. And when refactoring <i>does</i> break those tests, it can surface subtle bugs -- on a few occasions it surfaced a subtle race condition that might have been missed if a few of the integration tests covering a subset of the functionality hadn't broken.<p>(3) Design: Mostly for integration and unit tests, thinking about testing while writing code can result in a less brittle design[2]. For integration tests, it means writing SQL scripts and migrations so that a fresh environment can be spun up on-demand instead of relying on GUI tooling (or using the GUI tooling to generate said scripts/migrations).<p>(4) Build automation: I'm somewhat surprised at how often I encounter a customer project where I have to follow a 20-step process to get things functional in a development environment. It seems like if CI isn't involved, people figure a README.md with a mess of shell commands and button clicking is OK.
Scripting out environment configuration and builds was already one of the first things I did when I began running the code I'd written; now, though, I find I no longer have to argue in favor of it when testing is involved -- everyone wants a single command to execute tests, and once integration and end-to-end tests are in the mix, it just makes sense to have that command stand up the Docker parts, too[3].<p>I get why there's resistance to doing these things -- getting people to write <i>any</i> automated tests at all seems to be the most difficult hurdle. Throw in "learn Docker" and other technologies that make automated end-to-end testing easier, and the barrier is even higher. And hey, sometimes the time spent writing tests doesn't pan out to a time savings. For the unconvinced, I can only <i>strongly</i> recommend: try it on your next big project. There's no need to change the way you think -- skip TDD if it doesn't work for you -- but write unit tests over your public-facing surface area. Write integration tests over the most important parts of your codebase -- those where a bug would have the greatest impact on reliability. Write a few end-to-end tests of major functionality. Keep track of the time spent from the first line of code to the final, released product. If your experience is like mine and that of the 4 different teams I've done this exercise with, you'll end up doing things this way from that point forward. If not, you have a gift that I lack -- you write incredibly bug-free code "the first time", every time. :) Then try switching to a single 1080P monitor[4].<p>[0] I tend to code first, test second.
Though on paper TDD looks like a good idea -- it forces you to think about the desired outcome and to write methods in a way that guarantees they can be tested -- I don't find it difficult to write things that way from the beginning anyway, and it feels more natural to me to write the actual code first; I'll often write a large footprint of code before writing the first test. I don't get as much value as others do from frequent, instant feedback, but I recognize the value that others find in it.<p>[1] And bugs resolved during an outage are duct-tape "there, I fixed it" kinds of repairs.<p>[2] Provided you don't like having to figure out new and creative ways to mock complex god-objects/routines. Maybe you dig that sort of thing?<p>[3] I reload often enough that I have a script that automates <i>installing and configuring</i> Docker in the most common manner if it's missing or the configuration isn't complete.<p>[4] <a href="https://news.ycombinator.com/item?id=14482587" rel="nofollow">https://news.ycombinator.com/item?id=14482587</a>
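Edit: since a couple of people asked, the "strip the dev Dockerfile down to production" trick can be as small as this sketch (plain sh here rather than zsh; the trailing "# dev-only" marker convention and the file names are my assumptions for illustration, not my actual script):

```shell
#!/bin/sh
set -eu

# A toy dev Dockerfile: debug tooling is tagged with a "# dev-only" marker.
cat > Dockerfile <<'EOF'
FROM alpine:3.20
RUN apk add --no-cache gdb strace # dev-only
COPY . /app
CMD ["/app/server"]
EOF

# Drop every tagged line; everything else passes through unchanged,
# so the production image is literally the dev image minus the extras.
grep -v '# dev-only$' Dockerfile > Dockerfile.prod

cat Dockerfile.prod
```

The nice side effect of a marker convention like this is that there's only one Dockerfile to maintain, so the dev and production environments can't silently drift apart.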