As someone who worked somewhere 100% code coverage was practically required: while it does point out flaws in your testing, places where you may have "missed a spot," it still doesn't prove you're testing everything. You never know whether a library function beneath your code has another path it can follow.

> I believe the only level of test coverage that's worth it is 100% of the lines you want to be covered.

I vehemently disagree with this statement. It seems like the kind of thing someone would hit you with as an excuse for why testing isn't useful at all, because it's not perfect. In my experience, simple end-to-end tests give the best bang for the buck, and even a test suite that covers 80% of the code is pretty good.

To paraphrase the Google code review guide: no code is perfect; just try to make it better than it was.

100% test coverage doesn't tell you whether you have race conditions, security problems, or networking issues (there's a sketch of the race-condition case at the end of this comment). It doesn't matter much in distributed systems, where you can test each component to 100% individually and still have the system fail as a whole. And it doesn't mean you're verifying that the results you get back are what they should be.

> The problem with having a goal that is not 100% is that it leaves room for interpretation and negotiation.

By handing out edicts like this, you're taking away tools like negotiation and common sense and replacing them with blanket rules. Time is a finite resource, and coverage is only one small part of testing. Going from 95% to 100% might cost more time and provide less value than other kinds of testing and other concerns entirely (like UX or market fit, where you test that what you're building is usable and useful).

Just because you answer every question on the test correctly doesn't mean you have perfect knowledge of the situation. Thinking so is hubris, and it leads to defects.

By excluding lines and saying "only make sure you cover what you want covered," you're intentionally creating blind spots (the second sketch below shows how easily that happens). Just let the coverage number be what it is. Read your coverage reports. Check the coverage of key components.

> I think it is a perfectly fine decision to exclude some code from coverage if the tests are too expensive to write (because of the required setup for instance).

No, no, no, no, no, a million times no. This is the fallacy that if you unit test each component, the assembled whole is perfect because each part is perfect. You still have to write integration tests and complicated-setup tests. If a test is expensive, work to make it cheaper; don't avoid writing it. Anything you avoid testing is where bugs will nest and code will churn.

To sum up: most of the important, complicated, hairy bugs are not found by unit testing. Unit tests find the simple bugs. Those are important to get out of the way, because death by a thousand cuts is still death, but in no way does 100% coverage with passing tests mean anything more than 100% coverage with passing tests.
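To make the race-condition point concrete, here's a minimal sketch in plain Python with the standard threading module (the `Counter` class and function names are mine, invented for illustration). The test exercises every line of `Counter`, so a coverage tool reports 100% and the test passes, yet concurrent callers can still lose updates:

```python
import threading

class Counter:
    def __init__(self):
        self.value = 0

    def increment(self):
        # Unsynchronized read-modify-write: a data race under concurrency.
        self.value += 1

def test_increment():
    # Runs every line of Counter, so coverage reports 100%, and it passes.
    c = Counter()
    c.increment()
    assert c.value == 1

def lose_updates():
    # The scenario coverage never measures: four threads incrementing
    # concurrently can interleave between the read and the write.
    c = Counter()
    threads = [
        threading.Thread(target=lambda: [c.increment() for _ in range(100_000)])
        for _ in range(4)
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return c.value  # can come back below 400_000 on CPython
```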
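And on excluding hard-to-test code: with coverage.py, for example, a single `# pragma: no cover` comment quietly drops a branch from the report. The function below is hypothetical (the `order`/`gateway` API is invented for illustration), but the pattern is exactly the one being defended: the "too expensive to set up" branch is the hairy path, and the tool now reports 100% without ever looking at it.

```python
def charge(order, gateway):
    """Charge an order through a payment gateway (hypothetical example)."""
    if order.total <= 0:
        raise ValueError("order total must be positive")

    try:
        return gateway.charge(order.total)
    except TimeoutError:  # pragma: no cover  <- "too expensive to set up"
        # Excluded because faking a gateway timeout takes real setup work.
        # Coverage now reports 100% even though this recovery logic is
        # exactly where bugs nest. (refund_and_retry is hypothetical.)
        return gateway.refund_and_retry(order)
```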