I will need to learn about writing safety-critical C/C++ code at my current job. Many resources[1-2] tell you what not to do, but few tell you what to do[3].<p>What are some excellent examples of open source code bases from which to learn?<p>1: https://www.misra.org.uk/
2: https://yurichev.com/mirrors/C/JPL_Coding_Standard_C.pdf
3: https://nasa.github.io/fprime/UsersGuide/dev/code-style.html
First, stop saying C/C++. If you are talking about C, you are not talking about C++. If you are talking about C++, you are not talking about C.<p>Second, give up on C. It simply does not have the facilities to help you with safety. It is a wholly lost cause.<p>In C++, you can package semantics in libraries in ways that are hard to misuse accidentally. In effect, your library provides the safety that Rust reserves to its compiler. C++ offers more power to the library writer than Rust does. Use it!
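For a concrete (if minimal) sketch of the idea: a pointer wrapper whose constructor enforces non-null once, at the boundary, so every function that accepts it can drop its null checks. The class below is a toy of my own; gsl::not_null from the Core Guidelines support library is a production-quality version of the same thing:<p>"
#include <cstddef>   // std::nullptr_t
#include <cstdlib>   // std::abort

template <typename T>
class NotNull {
public:
    explicit NotNull(T* p) : p_(p) {
        if (p_ == nullptr) std::abort();   // the invariant is checked once, at construction
    }
    NotNull(std::nullptr_t) = delete;      // passing a literal nullptr is rejected at compile time
    T& operator*() const { return *p_; }
    T* operator->() const { return p_; }
private:
    T* p_;   // guaranteed non-null for the wrapper's lifetime
};

// The callee never needs a null check; callers must construct a valid NotNull first.
int read_register(NotNull<const int> reg) {
    return *reg;
}
"<p>The invariant lives in one place, inside the library type, instead of being re-checked (or forgotten) at every call site.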
CERT C is a really good standard and book. But you don't really need to read a whole book to get started; it mostly comes down to these steps.<p>Step 1. NO CONSTANT NUMBERS! Every constant should be a #define macro or a named constant. That way you change a value in one place instead of hunting down the number in 20 places, and you never have to guess which number to use for a loop bound or risk an overflow from a stale buffer size.<p>Step 2. SESE (RAII in C++, but most use SESE even in C++): SINGLE ENTRY, SINGLE EXIT. Your code should look like<p>"
int *ptr = foo();
if (ptr == NULL) {
    /* assuming DEBUG_PRINT is a printf-style logging macro */
    DEBUG_PRINT("FAILED ALLOCATING PTR IN %s @ %d", __FILE__, __LINE__);
    goto exit;
}
/* ... normal path ... */
exit:
if (ptr)
    free(ptr);
...
"<p>So any allocation you make gets cleaned up in exit, and you won't miss a free() on some weird control-flow path. This pattern is recommended by the CERT C guidelines.
Step 3: If you can, use an analyzer that checks your code against annotations and points out whole classes of bugs. SAL is arguably the best in the industry, and with it you can catch a huge share of bugs before the code ever runs.
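For instance (the function is invented, but the annotations are standard SAL2 and are checked by MSVC's cl /analyze):<p>"
#include <sal.h>      // MSVC's Source-code Annotation Language macros
#include <cstddef>

// The contract is machine-checkable: src must supply len readable ints,
// dst must have room for len writable ints, and the return value may be null.
_Ret_maybenull_
int* copy_samples(_In_reads_(len) const int* src,
                  _Out_writes_(len) int* dst,
                  std::size_t len);
"<p>The analyzer can then flag callers that pass undersized buffers or use the result without a null check.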
Step 4: Even without an analyzer, you should be looking at all warnings and either suppressing each one with a narrowly scoped compiler pragma (once it has been reviewed) or fixing what's causing it.
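If you do suppress, keep the suppression as tight as possible and record the reason next to it. A sketch using the GCC/Clang diagnostic pragmas (MSVC's equivalent is #pragma warning(push)/disable/pop); the sentinel-comparison scenario is invented for illustration:<p>"
// Build with warnings treated as errors, e.g.: g++ -Wall -Wextra -Wfloat-equal -Werror ...

constexpr double kUnset = -1.0;   // sentinel meaning "no value configured"

#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wfloat-equal"   // exact compare against the sentinel is intentional
bool is_unset(double v) {
    return v == kUnset;
}
#pragma GCC diagnostic pop
"<p>The push/pop pair keeps the warning enabled everywhere else, and the comment records why this one occurrence was accepted.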
ITT: People who don't understand safety-critical systems telling people how to write safety-critical systems.<p>The most popular answer in this thread is "you can only write safe C++" which is bullshit. The language that you use will likely be dictated by the toolchain you're forced to use to meet whatever standard your org has adopted. For example, if you're in the automotive realm and following something like ISO-26262, you'll only be able to use a qualified toolchain that's compatible with your safety MCU – so you'll likely be limited to C or C++, and then FURTHER limited by MISRA standards to a subset of those languages. There is no version of Rust that may be used for safety-critical systems, currently – despite the fact that it's arguably a better language, the rigorous verification/documentation work hasn't been done yet. If you're looking for an alternative to C or C++ for use in safety-critical domains, look at Ada.<p>You will likely not find any example of an open source codebase for safety critical systems. Rigorously-developed safety-critical systems cost millions of dollars to produce, document, run through V&V, etc. They don't tend to get released as OSS.<p>For the rest of the folks in this thread: type safety, memory safety, etc. are awesome features – but having a language with these features doesn't allow you to build a safety-critical system. It doesn't even begin to. If you're curious, you can start to look at the roadmap for the Ferrocene project – the company behind it is working with the folks from AdaCore (AFAICR?) to make a version of Rust for safety-critical systems a reality (one that I'm very much looking forward to!)
This book, `Embracing Modern C++ Safely`, just showed up in my book feed; you may find it useful. [1] is a review of the book.<p>[1] <a href="https://www.cppstories.com/2022/embracing-modern-cpp-book/" rel="nofollow">https://www.cppstories.com/2022/embracing-modern-cpp-book/</a>
Find the industry standards you're supposed to follow. If your job requires safety-compliant code, the company should have documents that give good style guides. As mentioned by other commenters, aviation has its own standards, and you linked to some of the NASA work.<p>In automotive, where I've done ISO 26262 (functional safety) work, there are MISRA and CERT C static checkers, plus guidelines on how to keep them from screaming too much, not to mention the fact that you'll be following the style of the code you modify. Beyond that, you can find the industry guidelines for whatever standards you're responsible to follow. It gets worse as you get more strict -- brake controller code in the safety-critical path has to meet the strictest formal-methods checking as well as a bunch of in-use, on-controller testing. Generally, no one gets thrown into that without training, on the grounds of safety and liability alone.
From Stroustrup himself (he consulted on the coding guidelines for the F-35): <a href="https://www.stroustrup.com/JSF-AV-rules.pdf" rel="nofollow">https://www.stroustrup.com/JSF-AV-rules.pdf</a><p>Maybe stricter than you're looking for, but no memory is allocated or deallocated after the plane takes off and before it lands!
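In practice that rule pushes you toward fixed-capacity storage sized for the worst case at design time. A minimal sketch of the style (the types and numbers are invented):<p>"
#include <array>
#include <cstddef>

struct TrackedTarget { double range; double bearing; };

class TargetTable {
public:
    static constexpr std::size_t kMaxTargets = 64;        // worst case fixed at design time
    bool add(const TrackedTarget& t) {
        if (count_ == kMaxTargets) return false;          // overload is reported, not malloc'd around
        targets_[count_++] = t;
        return true;
    }
private:
    std::array<TrackedTarget, kMaxTargets> targets_{};    // fixed storage; never touches the heap
    std::size_t count_ = 0;
};
"<p>Everything is reserved before the critical phase starts, so there is no allocation failure or heap fragmentation to reason about mid-flight.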
Here is another one I came across: <a href="https://www.autosar.org/fileadmin/user_upload/standards/adaptive/17-03/AUTOSAR_RS_CPP14Guidelines.pdf" rel="nofollow">https://www.autosar.org/fileadmin/user_upload/standards/adap...</a>
Useful resources: colleagues, professional training, case studies of errors.<p>If your job is safety-critical software, I'd guess they would pay for relevant training. If not, looking at course outlines at least tells you what trainers think the important topics are, for example:<p><a href="https://www.feabhas.com/content/robust-software-embedded-systems-1" rel="nofollow">https://www.feabhas.com/content/robust-software-embedded-sys...</a><p>One training course I took covered how to design a system with integrity while integrating open source code of unknown integrity. Since safety-critical software quality depends so much on process, open source code by default isn't built to any integrity level. If a system needs two independent implementations of a calculation, an open source code base would never show that (there's a small sketch of the idea after the link below).<p>If you have an experienced safety engineer, ask them how the system and software are typically designed to make the safety case easier; they'll have ideas about what commonly needs to be done. The strategy and process to follow depend on the integrity level.<p>It's not just the code style; there's a broader mindset that you need to develop.<p>There are also good presentations and lectures that come up from time to time, here or on YouTube, where failures of safety-critical software are studied. These can be excellent case studies, such as:
<a href="https://news.ycombinator.com/item?id=31236303" rel="nofollow">https://news.ycombinator.com/item?id=31236303</a>
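On the "two independent implementations" point above, a toy sketch of what that looks like at the unit level (everything here is invented for illustration):<p>"
#include <cstdint>
#include <limits>
#include <optional>

// Implementation A: widen, add, clamp.
std::int16_t sat_add_a(std::int16_t x, std::int16_t y) {
    const std::int32_t s = static_cast<std::int32_t>(x) + y;
    if (s > std::numeric_limits<std::int16_t>::max()) return std::numeric_limits<std::int16_t>::max();
    if (s < std::numeric_limits<std::int16_t>::min()) return std::numeric_limits<std::int16_t>::min();
    return static_cast<std::int16_t>(s);
}

// Implementation B: written independently, checks the bounds before adding.
std::int16_t sat_add_b(std::int16_t x, std::int16_t y) {
    if (y > 0 && x > std::numeric_limits<std::int16_t>::max() - y) return std::numeric_limits<std::int16_t>::max();
    if (y < 0 && x < std::numeric_limits<std::int16_t>::min() - y) return std::numeric_limits<std::int16_t>::min();
    return static_cast<std::int16_t>(x + y);
}

// The result is only used if the two implementations agree.
std::optional<std::int16_t> sat_add_checked(std::int16_t x, std::int16_t y) {
    const std::int16_t a = sat_add_a(x, y);
    const std::int16_t b = sat_add_b(x, y);
    if (a != b) return std::nullopt;   // divergence is treated as a fault
    return a;
}
"<p>In a real system the two versions are written by different people, sometimes running on different hardware, and a disagreement routes to a safe state rather than an empty optional. That kind of redundancy is exactly what you won't see in a typical open source code base.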
As others have mentioned, start with identifying the relevant functional safety standards for your industry. IEC 61508-3 and its annexes, whilst very verbose, are basically the textbook for safety development.<p>Pro tip: standards can be hard to find and expensive, but you can rent or buy them cheaply from the Latvian Standards website (<a href="https://www.lvs.lv/" rel="nofollow">https://www.lvs.lv/</a>); most are harmonised and exactly the same as the IEC or ISO parent standards, just with an LVS cover sheet.<p>The book Embedded Software Development for Safety-Critical Systems by Chris Hobbs gives a great overview of safety software development in general and the key standards; I found it easy to read.<p><a href="https://www.routledge.com/Embedded-Software-Development-for-Safety-Critical-Systems-Second-Edition/Hobbs/p/book/9780367338855" rel="nofollow">https://www.routledge.com/Embedded-Software-Development-for-...</a><p>On a practical note, if using C or C++, get familiar with commonly used language subsets such as MISRA (<a href="https://www.misra.org.uk" rel="nofollow">https://www.misra.org.uk</a>) or CERT C; again, which is more relevant will depend on the industry.<p>Gimpel's PC-Lint is a commonly used static analyser for MISRA compliance, and you can try it on their website (<a href="https://gimpel.com/demo.html" rel="nofollow">https://gimpel.com/demo.html</a>). I haven't come across a free, complete checker, but you can do a lot with Clang and GCC.<p>Some mention Rust here, but I think that would be a hard language to get through a certification process due to the limited options for qualified tools. That said, there is work being done: <a href="https://ferrous-systems.com/ferrocene" rel="nofollow">https://ferrous-systems.com/ferrocene</a>
The SEI CERT C Coding Standard is still maintained and has good advice: <a href="https://wiki.sei.cmu.edu/confluence/plugins/servlet/mobile?contentId=87152044#content/view/87152044" rel="nofollow">https://wiki.sei.cmu.edu/confluence/plugins/servlet/mobile?c...</a>
Architect your system for handling failures. No software will be bug-free, because the hardware you run it on is not perfect and can introduce things like bit flips. It's okay to fail, but you need to be able to recover.
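One common defensive pattern along those lines (an illustrative sketch, not taken from any particular standard): keep a redundant copy of safety-relevant state so corruption is detected at the point of use and the system degrades to a known-safe value instead of acting on garbage:<p>"
#include <cstdint>

class ProtectedSpeedLimit {
public:
    void set(std::uint32_t v) {
        value_ = v;
        check_ = ~v;                    // redundant copy stored as the bitwise complement
    }
    // Returns the stored value, or the safe default if corruption is detected.
    std::uint32_t get_or_default(std::uint32_t safe_default) const {
        if (value_ != ~check_) {        // copies disagree => a bit flipped somewhere
            return safe_default;        // recover by degrading to the known-safe value
        }
        return value_;
    }
private:
    std::uint32_t value_ = 0;
    std::uint32_t check_ = 0xFFFFFFFFu; // complement of value_, kept in sync by set()
};
"<p>The same idea scales up to CRCs over configuration blocks, watchdogs that restart a stuck task, and so on; the architecture has to assume individual bits and computations can go wrong and define what recovery means in each case.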
I like this one: <a href="http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines" rel="nofollow">http://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines</a>
dwheeler.com and adacore.com are good places to look. Even though the latter is an Ada site, you can learn things from it. Why are you stuck using C and/or C++ anyway? And what is your application? That affects the answer.<p>I agree with the posters who emphasize that C and C++ are not similar languages and shouldn't be lumped together, fwiw.