I've found that software written in C is often the best written and most stable software I've used. I believe it's due to the amount of care required in writing in the language and the fact you can't stream code out as fast as you can in other, 'safer' languages. Overall though, I think the most important thing is to have skilled developers who understand the tools, libraries and languages that they're using.
> Overall though, I think the most important thing is to have skilled developers who understand the tools, libraries and languages that they're using.
I work with C every day as an embedded developer. I've done a lot of safety critical work. By far the worst aspect of safety critical development is the complete inadequacy of many programmers who work on it.
The level of complexity that shows up in these systems scares me. Even when introducing languages like Ada, people find a way to abuse them. Budgets get tight, schedules slip, and verification gets lax. These programmers are then the only people capable of dealing with the massively complex system they've built, and the cycle repeats itself.
Ada's a great language, but it's not a panacea. I'm working on a language for embedded systems as well, but it's not going to ever fix the 'bad programmer' problem. The best I can hope to do is find ways to reduce the complexity of these systems through language features.
> The level of complexity that shows up in these systems scares me. Even when introducing languages like Ada, people find a way to abuse them. Budgets get tight, schedules slip, and verification gets lax.
That is the absolute major issue with doing safety-critical development right: it has a cost, that cost is very, very high, and few want to pay it.
One of the few groups I'm aware of that does pay it is the (now defunct, I guess) Shuttle software group. Their work was expensive and process-heavy (the 1997 story on them quoted 2,500 pages of spec for a GPS integration that ended up totaling a 6.3 kloc change to the source, 1.5% of it), but they delivered exactly what they were set up to deliver: critically safe code. (In fact, if I remember correctly, the Shuttle software group is the only area of NASA that Feynman praised in his Challenger report.)
> That is the absolute major issue with doing safety-critical development right: it has a cost, that cost is very, very high, and few want to pay it.
This is absolutely true. The costs to manage complexity now seem high when software is being built. The costs of managing that complexity in the future are far higher and may have to be paid in lives.
I wonder whether it would be possible to make a lint-like verifier which rejected code whose local complexity exceeded some objective thresholds. I imagine it would be pretty easy to come up with an objective measure of pure control flow complexity, but the complexity of interactions between control and data would be harder to quantify.
At work I set up the automated build on one of our projects to fail if the cyclomatic complexity of a method exceeds 20. It's a C#/.NET project, so we use NCover as part of the automated build. NCover has a reporting tool that you can set to return a non-zero exit code (which will fail most build systems) if its metrics exceed some value; in this case, class-level cyclomatic complexity exceeding 20.
I've thought about ratcheting it down to 15 based on stuff I've read, but unfortunately that particular failure metric can only be set as low as the class-level in the latest version. The next version (4.0) will have the ability to set the threshold at the method level.
In any case, I'd rather have a handful of places in the code where someone has to do something goofy to work around that metric rather than accidentally allow the whole project's complexity to creep up as time goes on.
Interesting idea. Instead of aiming for something hard and true, you can probably find a good proxy of complexity fairly easily. Once you have it, just don't confuse the map with the territory; but use it as a hint to see what code to investigate.
Global complexity might not be much harder to measure than local complexity, perhaps even easier. Look for interdependencies.
Running a quick check gives me all functions that have a cyclomatic complexity over 10; I often add it as a target in a Makefile, so I can easily check this. Similarly for functions over a certain length.
But in some cases, you simply can't use something like C. A friend of mine is currently trying to write a microkernel and mathematically prove that it's 100% bug-free (similar work has been done on the L4 microkernel: [1] and [2], though they might not be the best articles on the matter, as I just found these links with a quick Google search). He uses Haskell and ML for this project. (He'd use Ada, but he already knows some Haskell and ML and doesn't want to re-learn everything.)
The target code for the seL4 project is actually written in C. They do however use Haskell as a stepping stone (they have an abstract spec of their system, which they then prove is implemented by a Haskell implementation, which is in turn implemented by the C implementation).
Can your friend prove the Haskell and ML compilers he is using are 100% bug free as well? And the hardware it is running on? (both the compiler and the execution environment if they differ)
If you're interested, the German Verisoft project [1] aims at such pervasive verification. AFAIK they haven't yet verified anything matching the complexity of seL4, but they did verify a simple real-time OS. Of course you do need to stop somewhere, as in trusting something "blindly" (say, the verification tools themselves and the hardware they run on).
I think it'd be a hard sell to use ML in safety-critical systems due to the non-deterministic runtime of the GC and potential for memory exhaustion.
As a counter-example, C doesn't prevent formal methods: FWIW, VxWorks puts out ARINC 653 and MILS products that are verified* using formal methods.
* This carries the following caveat: I believe the core OS is formally verified, but the particular BSP/configuration needs to be tested/certified by the customer or at the customer's expense.
However, there is a lot of work on generating C code from formal models in Haskell or ML. Verify the high level code; verify the compiler; target a subset of C whose semantics you understand. See e.g. L4; Lustre; Copilot.
Being non-deterministic is a non-starter for safety or mission critical systems.
One of the foundational concepts of safety-critical design is strict scheduling of resources. The schedule must be strictly deterministic. The process must get its work done in the time allocated, because the scheduler will move on regardless. It's why you don't do recursion in safety-critical systems. Strictly speaking, you're not even supposed to use while loops in safety-critical systems. In the five years I worked in avionics, the only time I saw a while loop was in code review, where the person who wrote it (always a new person) was directed to take it out.
All that being said, safety-critical software production is more about the processes than coding. Even crappy programmers can be taught how to write good safety-critical code when the company adheres to best practices.
As far as the original post, Ada and C. Ada has tremendous support for concurrency and is stable as the day is long. Not much in the way of libraries, but honestly not that important on the kinds of projects you would use Ada to build.
> In the five years I worked in avionics, the only time I saw a while loop was in code review where the person who wrote it (always a new person) was directed to take it out.
I can understand the concern about loop termination, but what is the alternative? Are for loops without a hardcoded upper limit allowed? Or must all loops be unrolled?
IIRC type systems based on linear types are quite promising for making a version of ML without a garbage collector --- there could be some interesting progress here soon!
The article points out great benefits of using C in safety critical systems. However, as an embedded dev for 5 years... I'd say the advantage is the same as the disadvantage. Nitty gritty bit level memory manipulation and management can be excellent. However, C may also take your measure and find you wanting. In this case the resulting code will be a disaster. I've written both disastrous code and then grew and wrote some great code in C.