I just dislike tests that don't test what's actually important.<p>For instance, if you're writing an application, you probably don't need to write unit tests for internal APIs, since they might as well be considered private entities. Just write tests for application behavior. This stance seems rare, in my experience. Every team I've worked with insists on testing effectively private logic as well as testing the application at a high level, which takes a lot of time.<p>A lot of debugging time can be saved by guarding against unusual circumstances and providing useful error messages. Unfortunately, most people treat tests as if they <i>are</i> the documentation for how parts of the application are supposed to behave, which I think is generally the wrong way to look at it, since tests can be difficult to decipher when you dive back into them.<p>The greatest sin of testing, in my opinion, is the idea that if you write enough tests, you can avoid errors. The only way this works in any capacity is when you write your tests while you code (aka TDD). But the reality is that you simply can't avoid bugs, no matter how meticulous you are in writing your tests. Every application I've worked on has loads of tests, and yet there are regressions every week. That's not the fault of anyone in particular; it's the nature of the beast. So you've got to decide whether it's worth testing all the minutiae of your application, or spending more time on the stuff you really care about.
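The earlier point about guarding against unusual circumstances with useful error messages might look something like this. This is a minimal sketch, and `load_port` is a hypothetical helper, not anything from a real codebase:

```python
# Sketch: fail loudly with a descriptive message instead of relying on
# tests (or the debugger) to document what the code expects.
# `load_port` is a hypothetical example helper.

def load_port(config: dict) -> int:
    """Return the configured port, guarding against bad input."""
    if "port" not in config:
        raise KeyError(
            "config is missing required key 'port'; "
            "expected something like {'port': 8080}"
        )
    port = config["port"]
    if not isinstance(port, int) or not (0 < port < 65536):
        raise ValueError(f"'port' must be an integer in 1-65535, got {port!r}")
    return port

print(load_port({"port": 8080}))  # 8080
```

When this blows up in production, the message tells you exactly what went wrong, which is often worth more than another test case.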
Every test you add contributes to wasted developer time, especially once it's no longer practical for programmers to run the entire test suite locally.<p>EDIT: To expand on this: while I think integration tests are <i>better</i> than unit tests, generally speaking, I don't think they're a good substitute for high-level <i>application</i> tests when you're writing an application (not an API, framework, or library).<p>Your best bet for testing the truly desired behavior, understanding how performant your application is, and measuring your app's complexity is application tests. If an application test requires a ton of ridiculous special cases to be set up, or faking test data becomes difficult, that's a sign that your app is too complicated and that you should stop and address that before anything else. If your application tests are becoming slow, the application will become slower for your <i>users</i>. Integration and unit tests are unlikely to capture how your app is <i>actually</i> going to behave or perform. If you TDD your application while writing application tests, you will quickly know whether your work is improving the app or making it worse.<p>Application tests, by design, are much slower than low-level tests. This is a <i>good</i> thing, because you'd better make good choices or your tests are going to take forever to run. Unit tests sweep performance problems under the rug.<p>The problem with good application tests is that adding them to an existing app that performs poorly takes a tremendous amount of effort. If your app is well established, has tons of lower-level tests, but runs like crap and has a bunch of over-complicated inner workings, then getting new application tests in there will be painful and might not even be worth the effort to the business.
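To make the distinction concrete, here's a toy sketch of an application test: it drives the app only through its public entry point and asserts on observable behavior, never on internals. `handle_request` and the routes are hypothetical stand-ins, not from any real framework:

```python
# Sketch of application-level testing: exercise the app through its
# public entry point only. `handle_request` is a hypothetical toy app.

def handle_request(path: str) -> tuple:
    """Toy application entry point: route a path to (status, body)."""
    routes = {"/": "home", "/health": "ok"}
    if path in routes:
        return 200, routes[path]
    return 404, "not found"

def test_known_routes_return_200():
    # We assert on what a user would observe, not on routing internals.
    for path in ("/", "/health"):
        status, _ = handle_request(path)
        assert status == 200

def test_unknown_route_returns_404():
    status, body = handle_request("/missing")
    assert status == 404 and body == "not found"

test_known_routes_return_200()
test_unknown_route_returns_404()
print("application tests passed")
```

The point is that these tests survive any refactor of the app's internals, because they only pin down the behavior you actually care about.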
Sadly, many of the apps I've worked on suffered a great deal of rot, largely the result of a flawed testing philosophy, which made them very difficult to improve.