A couple things:<p>- Competitors to Coverity are CodeSonar[1] and Klocwork[2]. I've not seen Klocwork output, but CodeSonar and Coverity are in the same area of quality, with differing strengths. I cannot recommend static analysis highly enough if you have a C/C++/Java/C# codebase. It's very expensive (well into five figures according to Carmack), but how expensive is a bug? What if you had your entire codebase checked daily for bugs? Consider the effect on your quality culture. :-)<p>- The fact that you are paying "well into five figures" for a tool that essentially covers up design deficiencies in your language should set alarm bells ringing in your head. The proposition more or less goes, "To have reliable C++ code in certain areas, you need a static analyzer; to gain that same advantage in Haskell costs you nothing more than GHC". Of course Haskell doesn't have certain C/C++ capabilities; but it's worth meditating on for your next application, particularly if bugs matter more than performance. N.b. I don't know the ML family well enough to say one way or the other in this regard. :-)<p>[1] <a href="http://www.grammatech.com" rel="nofollow">http://www.grammatech.com</a><p>[2] <a href="http://www.klocwork.com" rel="nofollow">http://www.klocwork.com</a>
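To make the "how expensive is a bug?" point concrete, here is a toy C example (mine, not Carmack's) of the defect class these analyzers report: a comparison that is always true because of unsigned arithmetic.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical helper: meant to answer "is there an element before
 * idx?".  Because idx is unsigned, idx - 1 wraps to SIZE_MAX when
 * idx == 0, so the comparison below is always true.  An analyzer
 * reports "comparison always true"; a compiler at default warning
 * levels may stay silent. */
static int has_previous(size_t idx) {
    return idx - 1 >= 0;   /* BUG: intended `idx >= 1` */
}
```

Here `has_previous(0)` returns 1 even though the intended answer is 0; the fix is to write `idx >= 1`. Multiply this by every file checked daily and the five figures look cheaper.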
So I've dealt with dozens of Fortune-100 companies implementing and using static code analysis tools. They can and will help, but in general I feel that these tools are not much more than the code equivalent of the syntax and grammar checker in your word processing software.<p>I've been doing manual code reviews for a living (mostly security related) for roughly 3 years now, and while I get assisted from time to time by code analysis tools, I still find heaps of bugs not caught by any of the tools mentioned by Carmack. The biggest issue for a development shop is to properly integrate these tools and to not overwhelm developers with too many false positives.<p>I've had cases where a developer got a 1500-page PDF spit out by one of these static analysis tools. After spending two weeks going through everything, the developer ended up with 50 pages of actual bugs; the rest described false positives. Then I got on-site and I still logged dozens and dozens of security-related bugs that the static analysis tools failed to find.<p>Edit: also consider that one needs a SAT solver to even do proper C-style preprocessor dependency checking. A lot of these code analysis tools are run on debug builds only and then not run when the release build is made, meaning they fail to catch a lot of issues. It's insanely hard to write proper code analysis tools, and I wouldn't trust at all a static source code analysis tool which does not integrate with the compilation process.<p>Nowadays with clang there are very nice possibilities to write your own simple checks and integrate them into the build process. But even clang doesn't expose everything about the preprocessor that you might want from a static code analysis perspective.
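To illustrate the debug-versus-release point (my own sketch, not tied to any particular tool): code guarded by assert vanishes when NDEBUG is defined, so running the analyzer only against the debug configuration means it literally never sees the program the release build ships.

```c
#define NDEBUG            /* simulate a release build */
#include <assert.h>

static int calls = 0;

static int validated(void) {
    calls++;              /* side effect hidden inside assert below */
    return 1;
}

/* Returns how many times validation actually ran.  Under NDEBUG the
 * assert expands to nothing, so validated() is never called -- the
 * debug and release builds are effectively two different programs,
 * and analyzing only one configuration misses the other. */
static int run_checked_path(void) {
    assert(validated());  /* compiled away entirely under NDEBUG */
    return calls;
}
```

`run_checked_path()` returns 0 here because the side-effecting validation call was compiled away; drop the `#define NDEBUG` and it returns 1. A tool pointed only at the debug build is blind to this divergence.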
<i>> We had a period where one of the projects accidentally got the static analysis option turned off for a few months, and when I noticed and re-enabled it, there were piles of new errors that had been introduced in the interim. Similarly, programmers working just on the PC or PS3 would check in faulty code and not realize it until they got a “broken 360 build” email report. These were demonstrations that the normal development operations were continuously producing these classes of errors, and /analyze was effectively shielding us from a lot of them.</i><p>Something which corroborates this: When penetration testers break into systems, they're often using new 0-day exploits. Think about that. Most of today's software development practice produces such a steady stream of low-level bugs, that penetration testers can <i>assume</i> that they're there!<p><i>> Trying to retrofit a substantial codebase to be clean at maximum levels in PC-Lint is probably futile. I did some “green field” programming where I slavishly made every picky lint comment go away, but it is more of an adjustment than most experienced C/C++ programmers are going to want to make. I still need to spend some time trying to determine the right set of warnings to enable to let us get the most benefit from PC-Lint.</i><p>This could be encouraged using game dynamics. Have a mechanism where a programmer can mark parts of the codebase "green-field." A programmer's "green-field score" consists of the number of lines of green-field code (or statements, whichever lousy metric you want) that he's successfully compiled with no warnings whatsoever. Combine this with random sampling code walkthroughs, which has many benefits but will also catch boilerplate, auto-generated, or copy-paste programming by a "Wally" who's trying to "write himself a new minivan."
Having used PC-Lint almost all the way back to its origins, I can testify to just how scary it is to run this on your code. Code you wrote as well as code written by teammates. In self-defense, you HAVE to spend time tuning the system in terms of warnings and errors; otherwise you drown in a sea of depressing information. I liked John's comment about attempting 'green field' coding. It is a tremendously valuable process given the time. Great article, definite thumbs up.
I've been wanting something like this for Ruby for some time now. Since it's dynamically typed and ridiculously easy to monkey-patch, Ruby is a much harder challenge than C++. The two best efforts I have found are Diamondback Ruby (<a href="http://www.cs.umd.edu/projects/PL/druby/" rel="nofollow">http://www.cs.umd.edu/projects/PL/druby/</a>) and Laser (<a href="http://carboni.ca/projects/p/laser" rel="nofollow">http://carboni.ca/projects/p/laser</a>)...but they mostly try to add static type-checking to Ruby code. After looking at these I implemented a contracts library for Ruby (<a href="https://github.com/egonSchiele/contracts.ruby" rel="nofollow">https://github.com/egonSchiele/contracts.ruby</a>) to get myself some better dynamic checking. The next step is to use the annotations for the contracts library to do better static code analysis. One thing I'm working on is generating tests automatically based on the contract annotations. But I've got a long way to go :( If anyone knows about other projects that are working on static analysis for Ruby, I'd be very interested in hearing about them!
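The contract idea translates to the thread's C/C++ world too. A minimal sketch of runtime pre/postcondition checking (REQUIRE and ENSURE are hypothetical macros of mine, not part of contracts.ruby or any standard library):

```c
#include <assert.h>

/* Hypothetical contract macros: preconditions checked on entry,
 * postconditions on exit, analogous to what a contracts library
 * does at runtime in Ruby. */
#define REQUIRE(cond) assert((cond) && "precondition violated")
#define ENSURE(cond)  assert((cond) && "postcondition violated")

/* Integer square root with a contract: input must be non-negative,
 * and the result r must satisfy r*r <= n < (r+1)*(r+1). */
static int isqrt(int n) {
    REQUIRE(n >= 0);
    int r = 0;
    while ((r + 1) * (r + 1) <= n)
        r++;
    ENSURE(r * r <= n && n < (r + 1) * (r + 1));
    return r;
}
```

The annotations double as machine-readable specifications, which is exactly what makes generating tests from them plausible: any input satisfying REQUIRE is a valid test case, and ENSURE is its oracle.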
The article mirrors my recent experience 100%. We've got a Coverity license and I've started using it recently. Luckily, our code base is relatively small: straight C and embedded (no mallocs, no OS). Even in this extremely simple environment, it's shocking how many errors Coverity can ferret out.<p>False positives are a problem, and the general advice for getting started is to initially ignore all existing bugs and focus on avoiding new ones. Then, once you get the hang of writing code that passes the checks, you go back and look for the worst of the older bugs, etc.
It's a great article by an insightful individual.<p>If you haven't read it, do so.<p>You can read further discussion on this 270-day-old article at <a href="http://news.ycombinator.com/item?id=3388290" rel="nofollow">http://news.ycombinator.com/item?id=3388290</a>
FindBugs [1] is a great code analysis tool for Java. It's free, open source, and supports plugins for writing your own checks. The FindBugs site reports an interesting story from a Google test day:<p>"Google held a global "fixit" day using UMD's FindBugs static analysis tool for finding coding mistakes in Java software. More than 700 engineers ran FindBugs from dozens of offices.<p>Engineers have already submitted changes that made more than 1,100 of the 3,800 issues go away. Engineers filed more than 1,700 bug reports, of which 600 have already been marked as fixed. Work continues on addressing the issues raised by the fixit, and on supporting the integration of FindBugs into the software development process at Google."<p>[1] <a href="http://findbugs.sourceforge.net/" rel="nofollow">http://findbugs.sourceforge.net/</a>
I'm a web developer interested in diving into graphics programming sometime in the next year, but this made me stop and wonder:<p>> If you aren’t deeply frightened about all the additional issues raised by concurrency, you aren’t thinking about it hard enough.<p>Why exactly is that?
This article is one of the reasons I created my tool: <a href="http://www.qamine.com" rel="nofollow">http://www.qamine.com</a><p>Qamine integrates directly with GitHub and is designed for small and medium companies that cannot afford those expensive tools.
Dear Lazyweb,<p>the `Controller' part of my main codebase consists of interwoven PHP and MySQL. Is there a static analysis tool that understands both, one in relation to the other?
For C/C++, also try just compiling with clang. It has great diagnostics, and its static analyzer's C++ support just improved greatly in trunk.
Funny timing, I just got jslint turned back on in our build today! (well, jsHint now due to the 'for(var i=0...' failing even with -vars enabled, but I digress...).<p>Another dev and I spent literally the entire day fixing issues - and we had jslint running on every checkin until a few months ago!<p>But, it was worth it. It feels great to know that those bugs won't happen again without a failing build :)
<i>There have been plenty of hugely successful and highly regarded titles that were filled with bugs and crashed a lot</i><p>I think that's false, and a huge mistake. There are rare cases, but not plenty. Video games (I mean video games for core gamers) are products that demand <i>first</i> and <i>above all</i> quality to be successful. Rather, there are plenty of ordinary games of huge quality which became hits (Kingdom Rush, for example, or Starcraft, which was actually fairly conventional for its time). One of the rules in the development process at Blizzard is that they only ship a game once it has fewer than 100 known bugs. Also, I would add that id Software seems not to have made a truly successful game since Doom. Quake wasn't a commercial success; neither were Quake 2, Quake 3, Doom 3 or Rage (that's why id Software was bought for <i>only</i> $100M). After all, id Software lost one of its core co-founders (John Romero) a long time ago, the one who was responsible for the gameplay of id's games...<p>Quality in video games is everything, that's really my opinion. It's also a real edge for any indie developer who wants to start a company in this sector; cases are countless.