<i>> We had a period where one of the projects accidentally got the static analysis option turned off for a few months, and when I noticed and re-enabled it, there were piles of new errors that had been introduced in the interim. Similarly, programmers working just on the PC or PS3 would check in faulty code and not realize it until they got a “broken 360 build” email report. These were demonstrations that the normal development operations were continuously producing these classes of errors, and /analyze was effectively shielding us from a lot of them.</i><p>Something that corroborates this: when penetration testers break into systems, they're often using new 0-day exploits. Think about that. Most of today's software development practice produces such a steady stream of low-level bugs that penetration testers can <i>assume</i> they're there!<p><i>> Trying to retrofit a substantial codebase to be clean at maximum levels in PC-Lint is probably futile. I did some “green field” programming where I slavishly made every picky lint comment go away, but it is more of an adjustment than most experienced C/C++ programmers are going to want to make. I still need to spend some time trying to determine the right set of warnings to enable to let us get the most benefit from PC-Lint.</i><p>This could be encouraged using game dynamics. Have a mechanism where a programmer can mark parts of the codebase "green-field." A programmer's "green-field score" is the number of lines of green-field code (or statements, whichever lousy metric you prefer) that he's successfully compiled with no warnings whatsoever. Combine this with random-sample code walkthroughs, which have many benefits and will also catch boilerplate, auto-generated, or copy-paste programming by a "Wally" who's trying to "write himself a new minivan."
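<p>The scoring rule could be sketched roughly like this. Everything here is a hypothetical stand-in: the <i>GREENFIELD</i> marker comment, the file map, and the set of files that produced warnings would in practice come from your build system's warning log, not from hand-built dictionaries.

```python
# Sketch of a "green-field score": lines of opted-in code that
# compiled with zero warnings. All names and formats are invented
# for illustration, not taken from any real tool.

GREENFIELD_MARKER = "// GREENFIELD"  # hypothetical opt-in marker

def is_greenfield(source_text: str) -> bool:
    """A file opts in by carrying the marker on its first line."""
    first_line = source_text.splitlines()[0] if source_text else ""
    return first_line.strip() == GREENFIELD_MARKER

def greenfield_score(files: dict[str, str], warning_files: set[str]) -> int:
    """Count lines of green-field code that built with no warnings.

    files:         maps file name -> source text
    warning_files: names of files that produced at least one warning
    """
    score = 0
    for name, text in files.items():
        if is_greenfield(text) and name not in warning_files:
            score += len(text.splitlines())
    return score

files = {
    "player.c": "// GREENFIELD\nint lives = 3;\nint score = 0;\n",
    "legacy.c": "int g_state;\n",  # not opted in, never counted
}
print(greenfield_score(files, set()))          # clean build: 3
print(greenfield_score(files, {"player.c"}))   # a warning zeroes it: 0
```

A file with even one warning contributes nothing, which is the point of the game: the score rewards keeping marked code completely clean, not merely mostly clean.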