> If you want to reduce your test mass, the number one thing you should do is look at the tests that have never failed in a year and consider throwing them away. They are producing no information for you — or at least very little information. The value of the information they produce may not be worth the expense of maintaining and running the tests.<p>This seems... entirely unreasonable to me. Just because some area of code isn't touched frequently doesn't mean we should throw out the tests pertaining to that section of code. Can someone justify this to me? It seems like a _terrible_ mistake.
In my previous job I was maintaining a big project which originally had zero unit tests; it was apparently working well, but once I started applying TDD I found several security issues, edge cases, and memory overflows. By the end of my contract I had written more than 2,800 unit and integration tests, and although code coverage was still low at around 60%, I am very confident that new developers will be able to take ownership of the project from the beginning without much training, just because I wrote those tests. I consider testing a useful way to bring new contributors into the development process: they won't be afraid to touch the code, because the tests will raise a warning when something breaks after a modification. Without tests, they would surely spend more time double-checking the details of their modifications than integrating into the team.
I agree that unit testing is a waste if the project requirements change quickly. Unit tests can take ages to write, and if the requirements are changing constantly you keep having to update them, which slows down development and kills productivity - I've seen this happen many times at previous companies.<p>Unit tests only make sense for components of a system which are BOTH critical and stable.
I think that integration tests are often way more useful for most projects.
Reading over the comments, a lot of people seem to be concerned that unit tests compromise their ability to change code quickly as requirements change - they find themselves spending too much time updating tests instead of doing real implementation work.<p>In the experimental stages of a project, I'd buy that. But once a project has matured to the point where it's working and the architecture is broadly in place, requirements changes are <i>usually</i> not so fundamental that there is no resemblance between the old and new requirements. If you're finding yourself having to rewrite large swathes of unit tests when the requirements change, you need to ask yourself if the real problem is that your code is poorly factored. If you're breaking down your code into simple, independent, cleanly composable pieces, you'll find that changing requirements poses much less of a testing burden.
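For illustration, a minimal sketch of such a piece (hypothetical names, jest-style test): a small pure function whose test depends only on its contract, so a requirements change touches one function and one test rather than a large swathe of the suite.

```typescript
import { test, expect } from "@jest/globals";

// A simple, independent, cleanly composable piece: pure input -> output.
function applyTax(subtotal: number, taxRate: number): number {
  return subtotal * (1 + taxRate);
}

// The test exercises the function's contract, not the application wiring,
// so most refactors elsewhere leave it untouched.
test("applyTax adds tax to the subtotal", () => {
  expect(applyTax(100, 0.2)).toBeCloseTo(120);
});
```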
The biggest problem with unit tests is that they are abused as a metric for code quality. Pretty code coverage graphs and percentages are easy to present to managers. And outsourcing firms love these, because they add pointless work that can be justified as "code quality".<p>But what happens is that the unit tests become filled with completely useless logic - like a test built entirely from mock objects that only checks whether a method can be called. Useless, but good for padding those code coverage numbers. And ironically it greatly decreases code quality: when these useless unit tests cover everything, refactoring becomes very difficult, so no one refactors to improve the code because it's too much work.
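A sketch of the anti-pattern being described (hypothetical names, jest-style mocks): the test pads coverage without checking any behavior.

```typescript
import { test, expect, jest } from "@jest/globals";

// Hypothetical service whose only dependency is mocked away.
class UserService {
  constructor(private repo: { save: (name: string) => void }) {}
  register(name: string): void {
    this.repo.save(name);
  }
}

// The test verifies nothing beyond "the method can be called": every
// collaborator is a mock, no real behavior is asserted, yet the lines
// all count as covered.
test("register calls save", () => {
  const repo = { save: jest.fn() };
  new UserService(repo).register("alice");
  expect(repo.save).toHaveBeenCalled();
});
```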
For the most part I find myself writing unit tests to save time debugging. With a unit test I can create a limited context for my module to execute within, instead of having to manually run through several steps in the larger application to test out some piece of functionality.
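For example (a minimal sketch; `parseInvoice` is a hypothetical module under test):

```typescript
import { test, expect } from "@jest/globals";
import { parseInvoice } from "./invoice"; // hypothetical module under test

// The test constructs a limited context for the module: a hand-written
// input stands in for the several manual steps the full application would
// otherwise require to reach this code path.
test("parseInvoice totals the line items", () => {
  const invoice = parseInvoice("widget,2,9.99\ngadget,1,4.50");
  expect(invoice.total).toBeCloseTo(24.48); // 2 * 9.99 + 1 * 4.50
});
```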
I think most unit testing is a waste because it duplicates what a good type checker would do for you. I'm using TypeScript now, and after a break from using typed languages, it's a huge breath of fresh air.<p>Much of the code I write doesn't get unit tests at all, because it's simple enough that <i>it won't ever fail</i>. Refactoring major blocks of code is safe, even without unit tests, because the type checker ensures that everything is wired up in a sane manner when you're done. Good design can obviate the need for many unit tests.<p>When people talk about test code coverage in JavaScript/Ruby/Python, I think the main reason they want close to 100% coverage is that many runtime failures in those languages occur because some line of code somewhere is accessing a type incorrectly. That doesn't happen if you're using static typing.<p>If you've got some complex logic, making sure it works using unit tests is fine. I still do that with anything I consider non-trivial. But if you've got a really simple function that obviously works, and TypeScript ensures the function will always get the types it's expecting, writing tests to ensure it will keep working forever is just a waste of time, unless it's to verify for your own sake that the "obvious" function does what you think it should. But in that "TDD" case, keeping the test around just makes the code base more brittle, since if you decide you need to change the way the function works you now have two functions to maintain instead of just one.
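A minimal TypeScript illustration of the point (hypothetical names):

```typescript
interface Order {
  id: string;
  total: number;
}

// In a dynamically typed language, only a unit test would catch a typo like
// `order.totl` or a string sneaking in where a number belongs. Here the
// compiler rejects both, so the wiring itself needs no test.
function applyDiscount(order: Order, rate: number): Order {
  return { ...order, total: order.total * (1 - rate) };
}

// applyDiscount({ id: "a1", total: "100" }, 0.1); // compile error: string is not a number
// applyDiscount({ id: "a1" }, 0.1);               // compile error: `total` is missing
```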
One key thing the author started to allude to: the value of functions - pure functions, vs a cyclic graph of mutable objects with temporal coupling of state changes and spaghetti inheritance out the wazoo.<p>OK, he didn't say <i>that</i> exactly, but he really did start heading that way.<p>The "static types will save us" crowd is drowning out a lot of the rest of the discussions that need to happen.<p>* Somebody mentioned Eiffel in other comments. Eiffel's design-by-contract assertions about invariants are vastly preferable to Java's magic beans that eventually (maybe) reach a valid useful state. I guess unit tests <i>sort of</i> compensate for this, but not really.<p>* Mutable state needs to be pushed to the peripheries of our apps, not plastered everywhere.<p>* Inheritance might not be such a good idea, because of how hard it makes reasoning about which code runs when you call a method / send a message. Do like Go, and have interfaces for polymorphism, but skip inheritance - the use of an interface is a flag for later binding. Likewise, higher-order functions and closures help you reuse code without resorting to spaghetti inheritance (sketched below).<p>* We need languages that make programming with types easy, using type inference where it is clear to do so (e.g. local identifier initialization, but perhaps not multi-line function headers). BUT, we still need to allow for dynamic runtime types, perhaps with a few minimal flags on identifiers/modules, rather than forcing people into monstrous reflection frameworks and XML sit-ups.<p>FWIW, I agree (with the paper) that feature/integration tests are very good, but unit tests are often a waste.
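A sketch of that inheritance bullet in TypeScript (hypothetical names): an interface marks the point of late binding, and a higher-order function reuses behavior without a base class.

```typescript
// Polymorphism through an interface: any conforming value works, and the
// interface flags exactly where late binding happens.
interface Notifier {
  send(message: string): void;
}

const emailNotifier: Notifier = {
  send: (m) => console.log(`email: ${m}`),
};

// Behavior reuse via a higher-order function instead of inheritance:
// retry logic wraps any Notifier rather than living in a base class.
function withRetry(inner: Notifier, attempts: number): Notifier {
  return {
    send(message: string): void {
      for (let i = 0; i < attempts; i++) {
        try {
          inner.send(message);
          return;
        } catch (e) {
          if (i === attempts - 1) throw e;
        }
      }
    },
  };
}

withRetry(emailNotifier, 3).send("build failed");
```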
This is from 2014 and has already been discussed (~300 comments on HN): <a href="https://news.ycombinator.com/item?id=7353767" rel="nofollow">https://news.ycombinator.com/item?id=7353767</a><p>See also: <a href="https://henrikwarne.com/2014/09/04/a-response-to-why-most-unit-testing-is-waste/" rel="nofollow">https://henrikwarne.com/2014/09/04/a-response-to-why-most-un...</a>
Unit tests enforce good design and naturally provide decent documentation of how components fit together. They also let you spend a bit more time on the details of each component and catch issues you'd otherwise find in production. If nothing else, at least have some integration tests, so that you can refactor confidently, knowing you're not breaking existing functionality somewhere.
The obsession with information theory here seems like a classic nail-hammer thing. The number of bits my tests convey is totally useless to think about and certainly not worth spending pages on. All I want from my tests in a code base I maintain across thousands of patches is a tiny fraction of a bit: did my latest change break an important behavior or invariant encoded in a unit test? If I only screw up once in every 100 patches, then formally, my unit tests are doing all that work to emit about 0.014 bits of information (-log2(99/100)), which is a totally irrelevant thing to know about your unit testing framework. ('Hey Joe, what have you been up to?' 'Fixing my unit testing framework - I'm up to 0.03 bits per patch!' 'I see.')
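For reference, the arithmetic behind those figures, assuming base-2 logarithms and a 1-in-100 failure rate:

```latex
% Self-information of "this patch didn't break anything" (P = 99/100):
I = -\log_2\tfrac{99}{100} \approx 0.0145 \text{ bits}
% Expected information per patch (binary entropy at p = 0.01):
H(0.01) = -0.01\log_2 0.01 - 0.99\log_2 0.99 \approx 0.081 \text{ bits}
```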
In summary, with my comments in brackets:<p>1. Keep regression tests around for up to a year - but most of those will be system-level tests rather than unit tests. [System-level, since the context of how the software was being used is important.]<p>2. Keep unit tests that test key algorithms for which there is a broad, formal, independent oracle of correctness, and for which there is ascribable business value. [Sketched after this comment.]<p>3. Except for No. 2, if X has business value and you can test it with either a system test or a unit test, use a system test for X. [Context again.]<p>4. Design a test with more care than you design the code. [Tests need to do something meaningful rather than just be there as a metric.]<p>5. Turn most unit tests into assertions. [Assertions describe properties your code should always have before, during, or after execution. Use assertions for such checks instead of hiding them in tests.]<p>6. Throw away tests that haven't failed in a year. [Controversial claim. Says they tell you nothing. I believe the author thinks the design and assertions should make the software right from the beginning, with the tests telling you where you're screwing up.]<p>7. Testing can't replace good development: a high test failure rate suggests you should shorten development intervals, perhaps radically, and make sure your architecture and design regimens have teeth.<p>8. If you find that individual functions being tested are trivial, double-check the way you incentivize developer performance. Rewarding coverage or other meaningless metrics can lead to rapid architectural decay.<p>9. Be humble about what tests can achieve. Tests don't improve quality: developers do.<p>There are a lot of good points in there, though. It talks about the combinatorial explosion, where tests often don't even measure correctness, and about how the maintenance burden goes up compared with good system testing and assertions plus minimal unit testing. It mentions that the Toyota Production System keeps a human in the loop to do any analysis that requires brains, automating just the mundane stuff. It mentions how hardware engineers, following a Design-for-Testing philosophy, embed little probes in most of their blocks to catch violations of correctness conditions during testing - a kind of black-box testing with deep white-box probes. It says software systems could similarly be wired up so that system tests set off alarms where they should. Just a few gems I saw.<p>I agree a shorter version of this essay would be beneficial.
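As a sketch of point 2 (hypothetical `mergeSort`; the built-in sort plays the role of the independent oracle):

```typescript
import { test, expect } from "@jest/globals";
import { mergeSort } from "./sort"; // hypothetical key algorithm under test

// The language's built-in sort acts as a broad, formal, independent oracle
// of correctness: the test checks agreement on many random inputs instead
// of pinning down implementation details.
test("mergeSort agrees with the oracle on random inputs", () => {
  for (let i = 0; i < 100; i++) {
    const input = Array.from({ length: 20 }, () =>
      Math.floor(Math.random() * 1000)
    );
    expect(mergeSort(input)).toEqual([...input].sort((a, b) => a - b));
  }
});
```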
I think it's worth considering that we are a long way from the punch-card days.<p>There are programming languages, frameworks, and tools which would be unlikely to exist without "advances" such as TDD.
I'm a fan of built-in tests/traps, like assertions that make bugs "explode": "file x, line 22: did not expect state x to be y. Call Allen." When one is discovered, write a test that repeats the steps that led up to the error.
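A minimal sketch of such a trap in TypeScript (hypothetical invariant; the thrown Error's stack trace supplies the file and line for the report):

```typescript
// An assertion that makes a broken invariant "explode" with a descriptive
// message; the Error's stack trace points at the offending file and line.
function invariant(condition: boolean, message: string): asserts condition {
  if (!condition) {
    throw new Error(`Invariant violated: ${message}. Call Allen.`);
  }
}

function withdraw(balance: number, amount: number): number {
  const result = balance - amount;
  invariant(result >= 0, `did not expect balance ${result} to be negative`);
  return result;
}

// Once the trap fires in the wild, a regression test can replay the steps
// that led up to the error, e.g. expect(() => withdraw(10, 25)).toThrow().
```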
Totally agree 100%.<p>Everyone please stop testing your code.<p>...<p>...<p>...<p>...<p>Ok folks - everyone who's still testing can keep their jobs (or take the jobs of those who stopped).
Did my first big project with >200 unit tests, like we learned at university, haha.<p>Well, then requirements changed, and most of them started failing. In the end I spent most of my time fixing the now-wrong tests.<p>On the other hand, stability goes down if I don't do any testing.<p>At the moment I do automated UI testing. I'm a front-end dev, so this seems to catch many things, especially thanks to test videos and screenshots.<p>I'll try TypeScript in my next production project, simply because I don't know how to write the right amount of good unit tests. But I know that TypeScript doesn't prevent all bugs, so I guess I'll have a blind spot between the type checker and the UI tests, but hopefully it won't be too big :\