One thing that is rarely discussed (I think?) is how to test things that don't have a single correct answer. It's not just "refactor until you can test"; the output itself may be subjective. For example, suppose you write some code to do image processing, like a stereo matcher. How do you check your code works? Usually you have some ground truth to compare against, but you'll never hit 100% accuracy. The best you can do is declare a baseline, e.g. that your algorithm should be 90% accurate (if you implemented it properly, based on results from the literature), and raise an error if you fall short (sketch below). In that case a numerical metric works, but in other applications you might care about the result being aesthetically pleasing (e.g. a video ISP doing colour correction on the stream coming from a low-level camera).

Or hardware, where the advice is usually to mock the device under test. But if you don't own the hardware, the most you can do is try to emulate it and maybe check that your simulated state machine works. In my experience it's easier to run with hardware connected and just skip those tests otherwise (sketch below). There are also extremely subtle bugs that can crop up with hardware interfaces, like needing to insert delays into code (e.g. when sending serial data) that will otherwise fail in the real world.

OpenCV has some interesting approaches to this, for example testing video storage by writing a video in a certain format, inserting a frame with a known shape (like a circle), then reading the video back and checking that the shape can still be detected.
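As a rough sketch of the baseline idea, assuming a pytest setup: the matcher (`compute_disparity`), the fixture paths, and the 90% / 1-pixel numbers here are all hypothetical stand-ins, not from any particular library.

```python
# Baseline-accuracy test: compare a disparity map against ground truth
# and fail only if accuracy drops below a declared floor, not below 100%.
import numpy as np

from stereo import compute_disparity  # hypothetical: your matcher under test


def disparity_accuracy(pred: np.ndarray, truth: np.ndarray, tol: float = 1.0) -> float:
    """Fraction of pixels whose disparity is within `tol` of ground truth."""
    valid = truth > 0                       # ignore pixels with no ground truth
    close = np.abs(pred - truth) <= tol
    return float(np.count_nonzero(close & valid) / np.count_nonzero(valid))


def test_stereo_matcher_meets_baseline():
    truth = np.load("fixtures/gt_disparity.npy")      # hypothetical fixture
    pred = compute_disparity("fixtures/left.png",
                             "fixtures/right.png")
    # 0.90 is the declared baseline from literature results, not perfection.
    assert disparity_accuracy(pred, truth) >= 0.90
```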
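And a sketch of the "run with hardware, skip otherwise" pattern, assuming pytest and pyserial; the device path, baud rate, PING/PONG protocol, and delay value are made up for illustration.

```python
# Skip hardware-in-the-loop tests when the device isn't connected,
# rather than mocking a device you can't faithfully emulate.
import os
import time

import pytest
import serial  # pyserial

DEVICE = "/dev/ttyUSB0"  # hypothetical serial device path

requires_hardware = pytest.mark.skipif(
    not os.path.exists(DEVICE),
    reason="serial device not connected; skipping hardware test",
)


@requires_hardware
def test_roundtrip_over_serial():
    with serial.Serial(DEVICE, 115200, timeout=1) as port:
        port.write(b"PING\n")
        # The kind of subtle real-world detail mocks won't catch: the
        # device needs a settle delay before it responds.
        time.sleep(0.05)
        assert port.readline().strip() == b"PONG"
```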
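Something like this round trip (my own sketch of the technique, not OpenCV's actual test code; codec, resolution, and Hough parameters are illustrative):

```python
# Round-trip test: encode frames containing a known shape, decode them,
# and check the shape is still detectable after compression.
import numpy as np
import cv2


def test_codec_preserves_detectable_circle(tmp_path):
    path = str(tmp_path / "roundtrip.avi")
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"MJPG"),
                             30.0, (320, 240))
    frame = np.zeros((240, 320, 3), dtype=np.uint8)
    cv2.circle(frame, (160, 120), 50, (255, 255, 255), -1)  # known shape
    for _ in range(10):
        writer.write(frame)
    writer.release()

    cap = cv2.VideoCapture(path)
    ok, decoded = cap.read()
    cap.release()
    assert ok

    gray = cv2.cvtColor(decoded, cv2.COLOR_BGR2GRAY)
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
                               param1=100, param2=30,
                               minRadius=30, maxRadius=80)
    assert circles is not None              # the circle survived encoding
```

The nice property is that it tests the whole pipeline (encode, write, read, decode) against a semantic criterion rather than demanding bit-exact output, which lossy codecs can never give you.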