I'm considering dumping most unit tests from our app, because they almost never make a difference, they take a lot of work, our team is super tiny, and that work could be invested in features instead.<p>What I'd like to do instead is integration tests, since these would be much less work to maintain.<p>Can you convince me not to do this?<p>I realize that what I'm proposing is against the common wisdom, on the order of using tables for layout or spacer gifs.
I've seen many places where unit tests are almost useless because they test the obvious stuff, so the developers think it's good enough ("we have 550 tests!") while the most difficult bits go untested.<p>If you want to get rid of them, you could go with the following strategy:<p><pre><code> 1. Have code coverage (you need this regardless). Realistically, you want branch coverage in addition to line coverage.
2. Implement an alternative test suite along the lines you suggested (integration tests)
3. Compare the coverage between the two test suites (see the sketch below)
</code></pre>
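For step 3, here's a rough sketch of what the comparison could look like, assuming pytest plus coverage.py and its JSON report format. The file names and paths are made up; adapt to whatever tooling you actually use.<p><pre><code>
# Run each suite separately and dump a JSON report, e.g. (hypothetical paths):
#   coverage run -m pytest tests/unit        && coverage json -o unit.json
#   coverage erase
#   coverage run -m pytest tests/integration && coverage json -o integration.json
import json

def executed_lines(report_path):
    # coverage.py's JSON report lists executed line numbers per file.
    with open(report_path) as f:
        files = json.load(f)["files"]
    return {name: set(data["executed_lines"]) for name, data in files.items()}

unit = executed_lines("unit.json")
integration = executed_lines("integration.json")

# Report lines the unit suite exercises that the integration suite misses.
for name in sorted(unit):
    only_unit = unit[name] - integration.get(name, set())
    if only_unit:
        print(f"{name}: {len(only_unit)} lines covered only by unit tests")
</code></pre><p>The same idea extends to branch coverage if you run with --branch; the point is just to make the gap visible before you delete anything.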
You should quickly approach the coverage of the original test suite with your integration tests. At that point, you can decide to either refine the integration tests to cover more, or write unit tests to cover the missing bits.<p>Generally you'll already know which parts of your code really need high branch coverage: typically the core data structures and algorithms. It's wise to focus your unit tests on those.<p>By the way, I don't think you're doing anything wrong. Quite the opposite. Very few projects have comprehensive tests, so you have to find a tradeoff anyway. Many people choose the easiest path of cargo-culting some unit tests for the sake of having many green bubbles in some CI panel. But if you can't have comprehensive tests, due to constraints, you might as well focus on being effective, as you're trying to do. Godspeed.<p>edit: if you don't already, put assertions in your code. I find them essential, and they do a lot to decouple the tests from the code under test.
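To illustrate the assertions point with a toy example (invented, not from any real code): an invariant checked inside the function fails loudly under any test that drives it, so even a coarse integration test tends to blow up near the actual bug instead of the test suite having to restate the implementation line by line.<p><pre><code>
def apply_discount(price_cents: int, percent: int) -> int:
    # Invariants live with the code, not in the tests.
    assert 0 <= percent <= 100, f"discount out of range: {percent}"
    discounted = price_cents * (100 - percent) // 100
    assert 0 <= discounted <= price_cents, "a discount must never raise the price"
    return discounted
</code></pre>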
I've seen unit tests that create a bunch of mocks and stubs and then verify that the stubs get called a certain number of times given the initial assumptions. At that point, the tests were essentially a copy of the original method: if you wanted to make a change, you changed the original method and then updated the test so the two stayed in sync. Yes, this is how most testing <i>is</i>, but it felt different in that you weren't just updating expected output, you were updating a whole sequence of expected calls.<p>This was stupid. It looked and felt like nonsense and gave me no confidence that the tests would catch an error beforehand; they seemed to only catch that the code had changed. (I'm curious if there's a name for this pattern of testing.) Bugs still made it to production, past these tests. This was a form of technical debt.<p>So if you have tests like these, if this habit of "unit" "testing" has somehow proliferated through your codebase, I think you should kill it with fire and replace those tests with something sane.<p>Of course, there are certain central functions that I think need to be unit-tested ten times over for every single possible edge case. I can't describe how you know which functions those are, but I'm sure some smart person out there can. Any utility function that isn't strictly tied to an object, model, or piece of business logic (e.g. you made your own string-concat method for some reason) needs to be tested outside the context of integration tests.<p>Anyway, good luck. I've seen the internal debate of unit vs. integration tests more than once. When it has come up, it's because the org allowed shitty testing habits to take hold in the first place (as technical debt, maybe like what I described above, or something else), and instead of fixing the habits, "throwing the baby out with the bathwater" becomes the solution.<p>You're not paying down technical debt by getting rid of the tests; you're just creating a different kind by relying on integration tests that won't catch edge cases, which will surface eventually.
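A hypothetical sketch of what I mean, in Python with unittest.mock (all names invented):<p><pre><code>
from unittest.mock import Mock

# Production code (invented for illustration).
def place_order(cart, pricing, payments, mailer):
    total = pricing.total(cart)
    receipt = payments.charge(cart.customer_id, total)
    mailer.send_receipt(cart.customer_id, receipt)
    return receipt

def test_place_order():
    cart = Mock(customer_id=42)
    pricing, payments, mailer = Mock(), Mock(), Mock()

    place_order(cart, pricing, payments, mailer)

    # The method body, restated as call expectations: any refactor breaks
    # this test, while a bug in the real pricing or payment code, which is
    # all mocked out, sails right past it.
    pricing.total.assert_called_once_with(cart)
    payments.charge.assert_called_once_with(42, pricing.total.return_value)
    mailer.send_receipt.assert_called_once_with(42, payments.charge.return_value)
</code></pre><p>A saner test asserts on an observable outcome (the returned receipt, the row written, the email rendered) against real or in-memory collaborators, so it only breaks when behaviour actually changes.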
You mention removing "most" unit tests from the app, not all of them. Does that mean that you find some of them valuable and some not? This might suggest that perhaps you feel like you are maintaining too many unit tests, a little different perspective than dumping them all. Adapting to a more effective/useful/manageable amount of unit testing seems like a reasonable goal.
Can you describe the tests you have in more detail? What do they test (e.g., is view code being tested), and how do they handle difficult cases (e.g., situations in which one might be tempted to reach for a mock or stub)? Things like that make a big difference.