I agree with almost all of it except for one thing: gotos have pretty much one legitimate use, as C's equivalent to finally {} as part of a try {} block, i.e., that specific form of cleanup after error handling. NASA implies they have no legitimate use.<p>Banning longjmp is also slightly questionable (although I can see why, because it is very easy to get wrong). I use it inside my code as part of an STM implementation (begin_tx() setjmps[1], abort_tx() longjmps; it's faster than manually unwinding with if (tx_error) { return; } spam in deep call stacks).<p>Using longjmp for this makes the code much easier to write (no need to error-check every single tx function call), so there is less chance for bugs to slip in.<p>1: The only ugly part is that begin_tx() is a function macro, which I prefer never to use in code that is executed; I tolerate it in "fancy template-like generator" setups, though.
A few summers ago I was an intern at JPL working on a static analysis suite for this exact standard.<p>Writing code checkers for these sorts of rules is a really interesting exercise and it helped me grow a lot as a programmer! I went from having no exposure to formal languages, parsing, and grammars to actively playing around with these concepts to try and help build more reliable software. It was a humbling, challenging, and incredibly rewarding experience.<p>Sometimes, a rule is extremely simple to implement. For example, checking a rule that requires that an assert is raised after every so many lines within a given scope is just a matter of picking the right sed expression. Other times, you really need an AST to be able to do anything at all.<p>A rule like "In compound expressions with multiple sub-expressions the intended
order of evaluation shall be made explicit with parentheses" is particularly challenging. I spent a few weeks on this rule! I was banging my head, trying to learn the fundamentals of parsing languages, spending my hours diving into wikipedia articles and learning lex and yacc. The grad students at LaRS were always extremely helpful and willing to tutor me and teach me what I needed to learn (hi mihai and cheng if you're reading!). After consulting them and scratching our heads for a while, we figured we might be able to do it with a shift-reduce parser, by detecting when a shift/reduce ambiguity is introduced during the course of parsing a source file. This proved beyond the scope of what I'd be able to do within an internship, but it helped me appreciate the nuance and complexity hidden within even seemingly simple statements about language properties.<p>Automated analysis of these rules gives you a really good appreciation of the Chomsky language hierarchy, because the goal is always to create the simplest possible checker you can reliably show is able to accurately cover all the possible cases. Sometimes that is as simple as a regular language, but the next rule might require you to have a full parser for the language.<p>For what it's worth, this is only one of the ways the guys at LaRS (<a href="http://lars-lab.jpl.nasa.gov/" rel="nofollow">http://lars-lab.jpl.nasa.gov/</a>) help try to improve software reliability on-lab. Most of the members are world-class experts in formal verification and try to integrate their knowledge with missions as effectively as possible. 
Sometimes, this means carrying the dual responsibility of functioning as both a researcher and an embedded flight software engineer, working alongside the rest of the team.<p>If anyone's interested in trying out static analysis of C on your own, I highly recommend checking out Eli Bendersky's awesome C parser for Python (<a href="http://code.google.com/p/pycparser/" rel="nofollow">http://code.google.com/p/pycparser/</a>). I found it leaps and bounds better than the existing closed-source toolsets we had licenses for, like Coverity Extend. At the time, it had the extremely painful limitation of only parsing C89, but Eli has since improved the parser to have C99 compliance. Analyzing C in Python is a dream.
Oh if only the project that I had been working on followed any of these rules. Most of the code was generated from Matlab, but some had to be translated by hand. I'm not sure any of us knew this even existed...
Wait.... no malloc or sbrk? That means all space has to be stack allocated? That's a pretty serious limitation and would probably make it hard to do anything really interesting.
"A recommended use of assertions is to follow the following pattern:"
if (!c_assert(p >= 0) == true) {
    return ERROR;
}<p>Why not:
if (!c_assert(p >= 0)) {
    return ERROR;
}
This document reinforces my opinion that most coding standard documents suck. I’ve seen countless coding standards from different companies (including some I worked for) and they all sucked, without exception. Coding standards do contain some common-sense advice and guidelines that are generally helpful for producing good-quality code, but the arbitrary, irrational rules and beliefs their authors put into them and try to enforce end up hurting the quality of the code produced by developers trying to follow those rules.<p>Case in point, with examples from the NASA JPL coding standard for C:<p>* no direct or indirect recursion
What is it, FORTRAN-77? Some algorithms are much easier to implement recursively, whereas the iterative version can be far less straightforward and buggier. Think sorting: it’s easy to prove that the recursion is finite and that the implementation of the algorithm is correct. Do they use sorting at NASA, or is it prohibited by this rule?<p>* no dynamic memory after initialization
FORTRAN-77 again! While dynamic memory management can be challenging in real-time systems and a generic malloc/free implementation is not acceptable, that doesn’t mean statically pre-allocated fixed-size memory is better. It inevitably leads to brittle code rife with excessive memory use, bugs like static buffer overruns, and sometimes even the inability to use dynamic data structures like linked lists. To work around this restriction, a developer can construct a linked list in statically allocated memory, but doing so is essentially equivalent to writing your own dynamic memory manager, which is more likely to be poorly implemented than a good general-purpose one. Instead of banning dynamic memory, they should develop memory managers with acceptable performance characteristics.<p>* The return value of non-void functions shall be checked or used by each calling function, or explicitly cast to (void) if irrelevant.
Given that many C library functions return an error code that is rarely useful, this rule leads to code littered with (void) casts: “(void) printf(…)”, “(void) close(…)”, etc. Besides the litter, the rule doesn’t make the code any more robust, because it encourages using (void) casts to silence error codes, so errors are more likely to be ignored than handled correctly.<p>* All functions of more than 10 lines should have at least one assertion.
This leads to littering code with assertions in functions that don’t necessarily have anything to assert and that just happen to be longer than 10 lines (for example, due to mandatory parameter validation checks; I hope parameter validation checks don’t count as assertions, do they?).<p>* All #else, #elif and #endif preprocessor directives shall reside in the same file as the #if or #ifdef directive to which they are related.
This is just a bizarre rule. What developer puts #ifdef in one file and #endif in another? Unless of course he’s drunk or high but I hope that’s not how NASA develops its software.<p>* Conversions shall not be performed between a pointer to a function and any type other than an integral type.
Wait, pointers to functions may be converted to which integral type? There are a number of integral types: char, short, unsigned long long. Which one do I choose? And why not void* or intptr_t?<p>* Functions should be no longer than 60 lines of text and define no more than 6 parameters.
Finally, a good rule. But what does the explanation say? “A function should not be longer than what can be printed on a single sheet of paper in a standard reference format with one line per statement and one line per declaration.” Printed on a sheet of paper? Is that still how code is reviewed at NASA?<p>And before you say “these coding standards are for a special kind of software that runs on space flight control systems”: embedded devices these days are more powerful than desktop computers were ten years ago. Embedded software grew beyond draconian restrictions a long time ago, and it’s now much closer to non-embedded software.<p>Let’s not forget that NASA did use Lisp in their systems and was able to solve pretty difficult problems remotely with the help of a Lisp REPL (<a href="http://www.flownet.com/gat/jpl-lisp.html" rel="nofollow">http://www.flownet.com/gat/jpl-lisp.html</a>). Lisp code certainly can’t be subject to any of the restrictions in these coding standards, which is another indication of how irrelevant they are for producing robust software.