Memory Issues: Episode III - Revenge Of The Buffer Overflow (2016)
In all seriousness, repeated occurrences such as these should make people consider other languages for new projects. Despite being quite a C fanboy, I have to admit that even I think manual memory management should be a thing of the past now.
I'd almost go so far as to say you need to do a security review before you write one line of code. You need to evaluate if C is a requirement or a preference, and if the security of the project can withstand being written in C (e.g. edge code that may be attacked).
The argument that C (and C++) can be written well or flawlessly is tiresome. "Perfect code" is a fallacy that has been disproven over and over again, dangerously so. You should assume the code quality is less than optimal as your starting point.
I am not promoting any specific alternative (Rust, Java, Go, etc.); I am just saying that C/C++ are insecure by design and should be avoided unless they're a project requirement.
To be fair, most of the vulnerabilities found in C codebases are preventable with static analysis, bounds checking, etc. If you are ok with it getting as slow as Go, you can get similar safety guarantees as well.
They and the Cambridge CHERI team have hardware modifications, too, if you want to run C apps on an FPGA CPU with inherent safety. Software-only approaches, though, carry significant performance penalties.
Specifically, Rust does runtime bounds checking when you index into an array. LLVM is theoretically capable of optimizing these checks out if they're unnecessary, but it's far from guaranteed. It would take a dependent type system to perform compile-time bounds checking, and Rust isn't that sophisticated.
However, indexing is far, far less prevalent in Rust than it is in C, due to the existence of iterators. I've asked the Servo team on multiple occasions whether the cost of bounds checking shows up in performance profiles, and it never has, likely because they just aren't doing very much indexing to begin with.
Since Rust does not provide sec-libs in its standard distribution, nor are they available in the larger ecosystem, Rust's benefits seem theoretical to me. As with other things, in theory C/C++ code could also avoid memory/security issues.
I don't see any other way to read this other than "there's no point in rewriting security code in Rust because security code in Rust hasn't been written yet". That, obviously, doesn't make sense to me.
You do not see any other way to read it because you are looking to argue over an innocuous statement which is not even false. Security code in Rust could be great, and I will be happy if and when it is available for general use.
I agree with pcwalton, that's how I read it too. The context of this thread is security vulnerabilities in OpenSSL, so it's quite reasonable to read your parent comment exactly as pcwalton did.
They do exist in the larger ecosystem, they are just very young. Also, we devs have had "don't build your own crypto" beaten into our heads so often lately that I think people are generally wary of starting new, or using new, projects that haven't already been battle hardened.
I've looked at the OpenSSL code, and I have to say that it's not well documented in many places, and it's easy to make mistakes in its usage. The rules on when locks are needed are not always clear, and then the ownership of memory is as confusing as it's always been in C.
This is where something like Rust would shine just by cleaning up these interfaces. The Rust OpenSSL bindings, http://sfackler.github.io/rust-openssl/doc/v0.7.10/openssl/ , for example, are very sane, and it's difficult to screw up their usage.
Do you mean the rust-openssl bindings will not require patching OpenSSL for Rust apps in case of a vulnerability? In that case Rust surely helps applications written in it.
What are "sec-libs"? From the other reply, it seems "sec-libs" are the hip-mounted pouches of enchanted crypto dust you can sprinkle around to get "security".
> To be fair, most of the vulnerabilities found in C codebases are preventable with static analysis, bounds checking, etc.
You can work in an ad-hoc dialect of C based on particular tools that will be a bit safer than standard C, sure (though probably still not as safe as an actually memory-safe language). But at that point most of the advantages of C no longer apply: you don't have a huge library ecosystem, you don't have a supply of experienced developers, you don't have a bunch of standard automated tools that work with your dialect.
> If you are ok with it getting as slow as Go, you can get similar safety guarantees as well.
There is no native/standard/supported-by-tools way to do tagged unions, so whatever you do C will always be less safe than languages that have native tagged unions.
I don't think "most of the advantages of C no longer apply [if you use static analysis or bounds checking]" is fair.
Even if your claims about losing libraries, experienced developers, or automated tools were true (which I think is false), you still get portability, close-to-the-metal code, full control of your processes, direct access to the native ABI, full access to the platform facilities, fast compilation, small executables, standalone libraries, memory layout control, ...
C code is less portable than most of the alternatives. In the rare cases where the things you list are hard requirements there are still safer options e.g. Ada.
Plenty of alternatives are self-hosting, but that's neither here nor there. Porting e.g. the Python interpreter to a new platform is a major effort (particularly the first new platform; once you have a C codebase that enforces portability across many platforms it's much easier to add a new one). But that major effort has by and large been done for even relatively obscure languages and relatively obscure platforms. Whereas if you write in C you get to do that major effort yourself for each program.
More portable means "can be ported to new platforms more easily", not "has been ported to new platforms by someone else".
A C program (in general) is more portable than anything written on top of C because to port, say, Python to a new platform, one has to have C there first.
It has nothing to do with whether the work has been done by you or someone else.
Of course, one can write code in either language which is not portable, but that is also beside the point.
> A C program (in general) is more portable than anything written on top of C because to port, say, Python to a new platform, one has to have C there first.
Port C compiler -> port your C program vs port C compiler -> port Python interpreter -> port your Python program. The latter can still end up a lot easier and cheaper, because the Python interpreter is already multi-platform and Python programs tend not to have much if any platform-specific code by the nature of the language.
> It has nothing to do with whether the work has been done by you or someone else.
It has everything to do with that. Ultimately via Turing equivalence it's possible to port anything to anywhere, so when we're talking about "portability" we must be talking about how much it costs to port a program to a new or existing architecture.
> Of course, one can write code in either language which is not portable, but that is also beside the point.
Again, it's very much the point, because it can't possibly be a yes-or-no thing. How costly is typical/idiomatic C code to port? How costly is typical/idiomatic Python code? Those are the questions that matter when we talk about language portability.
By definition, the C language is more portable than the Python language, because in order to run Python on a system, C must run there first.
Unless you are talking about a Python not implemented in C, or a C compiler that was created to only compile the Python interpreter (and not all of the C language), then there is no other way around this simple fact.
A standard-conforming program should be part of the topic here, not "typical/idiomatic" non-objective software. Just because most people may not write portable C code doesn't make the language itself more or less portable. It just makes those particular programs less portable.
Plenty of C code is highly portable. Plenty is not. Same can be said of Python. It is moot. The language is under discussion here, not any set of specific programs written in the language.
Also, besides that simple logical conclusion, it is 2-3 orders of magnitude easier for me to port a C program I wrote to a new platform than it is for me to port the Python interpreter just to support my Python program on a new platform. Never mind all of the features I would have to port (or neuter) inside Python which my program may not even use.
> By definition, the C language is more portable than the Python language, because in order to run Python on a system, C must run there first.
What definition are you using, and what practical use is your definition? You seem to be defining "portable" in some absurd, irrelevant way so that your preferred language "wins", regardless of what that actually means.
> A standard-conforming program should be part of the topic here, not "typical/idiomatic" non-objective software. Just because most people may not write portable C code doesn't make the language itself more or less portable. It just makes those particular programs less portable.
> Plenty of C code is highly portable. Plenty is not. Same can be said of Python. It is moot. The language is under discussion here, not any set of specific programs written in the language.
The standard is just some words on a page. The programs and tools are what give it meaning. If 90%+ of the things we call "C programs" don't conform to the standard, it's not reasonable to treat the standard as the definition of a "C program". And for any practical, real-world decision like "should I write my program in C or Python", the typical/idiomatic is the question that matters.
> it is 2-3 orders of magnitude easier for me to port a C program I wrote to a new platform than it is for me to port the Python interpreter just to support my Python program on a new platform. Never mind all of the features I would have to port (or neuter) inside Python which my program may not even use.
How often do you port things to a platform on which the Python interpreter doesn't already build? And how large are these programs for the 2-3 orders of magnitude? (I guess small in any case if you're talking about writing a program yourself rather than in a team). The Python interpreter is actually pretty small.
There are C compilers for N platforms. There are Python interpreters for M platforms. Since the Python interpreter is written in C, N >= M.
The programs I have ported are more than a million lines. Not the largest, sure, but not trivial. And they run on platforms where Python does not.
The size of the C code or the Python interpreter code does not matter. What does matter is what subsystems are required. Python is general purpose and has a wide set of requirements (networking, file systems, process manipulation, dynamic loading, the list is quite long). Oh, and the Python list includes compiling C programs (for extensions). A second reason backing my absurd definition.
The C programming language requirements placed on the hosting environment are much smaller. So the language is more portable for that reason as well. It is beneficial to only port what you use, instead of having to port a monolithic interpreter, including the parts you don't need (which may not even run on the target platform easily or at all).
> There are C compilers for N platforms. There are Python interpreters for M platforms. Since the Python interpreter is written in C, N >= M.
Sure, but existence of a compiler is only a small piece of portability. As a FreeBSD user I'm very used to downloading a random program and finding it won't run on my platform. Happens a lot more often when the program's written in C than when it's written in Python.
I think portability is best understood as a measure of how difficult/expensive it is to port a typical program in that language to a new platform, because that's the question which is likely to be relevant in practice.
> The portability of a language is best understood as how difficult/expensive it is to port a conforming program written in the language in question.
What proportion of the things that are referred to in ordinary, everyday language as "C programs" would you estimate are conforming? Maybe 0.01%? If I'd meant "conforming C code" I would've said "conforming C code".
Writing conforming C is not a realistic choice for most use cases. E.g. there are very few experienced conforming C developers available.
I don't think any of your assertions are true at all.
There are lots of conforming C programs (just look at the huge list of software which compiles on a huge list of platforms).
There are tons of experienced C developers. Who do you think writes all of that conforming software?
And none of this is related to the topic at all.
If you really can't get your head around the simple idea that the portability of the language is different than the portability of some arbitrary program written in the language, then just look at it this way: If you want to write a program in a language, and you pick C, your program will run on more platforms than if you pick Python.
This is true, regardless of any other argument you might raise, simply because Python won't run on a platform until C runs on that platform (since Python is written in C).
We've been over this. Ad nauseam. Please stop trolling.
> There are lots of conforming C programs (just look at the huge list of software which compiles on a huge list of platforms).
And look at how much of that list breaks when a new version of GCC introduces a minor improvement in optimization.
> just look at it this way: If you want to write a program in a language, and you pick C, your program will run on more platforms than if you pick Python.
That is the question to ask. And your answer simply isn't true. It will (in the overwhelmingly likely case) run on more platforms if you pick Python. Certainly if you hold costs constant, which is surely the only way to compare. If you're willing to spend unlimited time and effort on portability then your C program will run everywhere, but so will your Python program (since you can just port the Python interpreter).
> This is true, regardless of any other argument you might raise, simply because Python won't run on a platform until C runs on that platform (since Python is written in C).
Not actually true (Jython exists and could run on platforms that don't run C), but it doesn't matter. It is overwhelmingly likely that the Python interpreter will run on more platforms than your C program will.
> We've been over this. Ad nauseam. Please stop trolling.
Exploitation of some of those issues can be prevented using the new RAP GCC plugin by grsecurity. Unfortunately, it is only available to paying customers.
Does anyone know how much it costs to be a "paying customer" at the individual level? I'm not OVH with a sea of rack mounts, I am just interested in covering a handful of machines.
I'd only go so far as to agree that vulnerabilities which are bug-related may be _detected_ by static analysis.
Preventing vulnerabilities is an entirely larger problem, not addressed by static analysis alone. Architectural security flaws are outside the scope of static analysis. I'm not trying to nitpick semantics, but in this case I think it's important to understand that, in this context, prevention and fixes, as well as bugs and flaws, need to be differentiated.
A lot of C++ projects aren't written in modern C++, and even in the few that are, programmers will mix in older, less secure C++ because they lack experience with modern C++. So old habits can actually hurt security.
Ultimately, when you have a language which mixes secure and insecure practices and lets the programmer decide, as an outside observer you have to assume the worst unless shown otherwise. C++ can be written very well, but C++ can also be written no better than C, and it's project by project which is which.
Other languages don't have this issue. If you see a Rust, Go, Java, C#, etc block of code you can make certain assumptions about what classes of security issues it won't have.
> Most of the problems with C you are implying here are non issues in modern C++.
No, they aren't. Use after free is just as exploitable and is a severe concern in C++. In fact, it's worse in C++ than in C, due to the ubiquity of vtables.
Everywhere. We have been using smart pointers for a long time.
Modern C++ doesn't add any memory safety protection beyond that which was already available with custom smart pointers in earlier versions of C++. In fact, I think modern C++ is less safe than earlier versions, due to new classes of potential bugs like use-after-move and the ease with which closed-over variable references in lambdas can become dangling.
Most of the horrible security bugs in Java show up in the sandbox, where attackers can supply arbitrary code for you to run.
In contrast, Java as a server language has an excellent security record IME. The last public patch panic I can remember was in 2011, with the denial-of-service bug in floating-point parsing. There have been other security bugs, regarding cryptography etc., I'm sure, but in general you can feel very secure running Java on your servers.
It is a shame that security bugs for both are bundled together, making every sandbox compromise a "Remote exploitable" vulnerability. The "applet" use case should probably just die, there is no indication that Java sandboxing will ever be secure, the design is unsound.
Java as a server language has a record of nasty serialization-related RCE vulnerabilities. Of course, they're in popular Java libraries used on the server rather than in the language itself, just like this bug was in a popular C library rather than the language itself; but Java makes it very easy to accidentally write that kind of vulnerability. In fact, merely loading two unrelated libraries that are individually safe can sometimes create an exploitable RCE condition in Java. That's worse than even C.
No disputing that bugs can be written in any language. But by avoiding C/C++ you're excluding a specific class of bugs which have historically proved harmful.
You can write exploitable code in Java. But you'd actually have to try if you wanted Java to be able to write arbitrary memory or execute arbitrary code.
Essentially any bug that can be written in Java/Go/Rust/etc can be written in C/C++. But some C/C++ bugs are extremely uncommon in other languages, or you have to actually TRY to introduce them.
> But you'd actually have to try if you wanted Java to be able to write arbitrary memory or execute arbitrary code.
Depends on your definition of arbitrary. Higher-level languages have higher-level exploits. While injecting x86 shellcode into a Java process is probably hard, many Java applications have been vulnerable to serialization bugs which result in the execution of arbitrary bytecode.
It also needs to be said that this is generally not a reasonable reason to pick C over Rust. Memory-safe languages are effective defenses against these flaws.
>Bugs can be found in code written in all languages.
But not all languages frequently produce security vulnerabilities as a result of common types of bugs, bugs caused by error-prone humans having to do things that should be done for us automatically in the year of our Lord 2016.
Java applets have security issues today. That's a situation where you are allowing random websites to execute arbitrary code on your computer. Flash has the same issues. So don't do that.
Don't confuse Java applets (and the lack of security thereof) with the JVM as a development platform. I'd bet on the security of a Java application over that of a C/C++ application any day.
To be clear, are you referring to security bugs in the Java standard library (written almost completely in Java), or those in the JVM itself or the browser plugins (written almost entirely in C++), or in Java code bases?
The vast majority of the high profile Java security bugs have been in the second, which would be more of a ding against C++ than Java the language, wouldn't it?
I think it would count against Java, in the sense that Java does not support writing high-performance code like the Java runtime / security code itself. Now, it may not have as many errors as OpenSSL, but that argument would be about implementation quality, not against C/C++.
To be clear, I am not a security researcher, and I haven't verified the severity of these issues. But in 2016 alone there are 16 CVEs, which is 4 per month.
I'd say C is more of a symptom and exacerbating factor of the cultural failing. C makes it easy to achieve performance and time-consuming to achieve correctness.
When the people funding the work (and this includes people donating their own time) can see performance issues but not correctness issues, guess which ones get fixed?
There is absolutely no doubt in that. But people also join the Marines, so....
I'll speak this heresy: I think "people donating their time" has a bunch of problems, not least of which is losing the data/information that would be gained from pricing that labor.
Yeah, it's possible to write flawless code. The problem is that it doesn't happen in practice. You know, where people are actually relying on this code that "could be flawless but isn't" so they can run businesses, maintain privacy, etc.
Which is designed to validate TLS certificates. It does the ASN.1 parsing and signature verification in a zero-copy, memory-safe way, built on top of ring for the core crypto primitives.
Now, this isn't a full implementation of what you would need for replacing OpenSSL for certificate handling (and also isn't yet complete); in particular, this is extremely limited in scope as only being for client-side certificate verification. This particular OpenSSL issue was when parsing and re-encoding certificates, which is out of scope for webpki. But it is a good starting point for demonstrating memory-safe and efficient handling of complex tasks like parsing and verifying of TLS certificates.
The sad part is that already in the mid-'60s and '70s, on computers much less powerful than the PDP-11, bounds checking wasn't a problem as such.
In the few cases where it was a real problem for the application being written, it could be selectively disabled. Which, according to C. A. R. Hoare in his 1981 speech, most people didn't want to do anyway.
It only became a problem in the industry thanks to C.
No, it became a problem when 1) microprocessors evolved into being able to do real work, 2) vast hordes of the great unwashed (I'd say even including me) were vacuumed into the resulting void, and 3) tools vendors for languages with better safety furniture were found wanting. Delphi existed but was found wanting. Ada ads were in the back of oh so many magazines. But...
And I'd throw in 4) - the CS industrial complex failed to address this except to rail against it. The first CACM I ever got was the one with "Reflections on trusting trust." If a Haskell or Rust is the answer, it needs to be more interesting. And if you say "but Java", name your second; I will meet you on the field of honor at dawn :)
I personally found "nobody is going to save you; you are on your own" very liberating but perhaps that is unusual. "In order to live outside the law, one must be honest." - Bob Dylan.
Having written C for a living for about two years, I have to admit that a large part of me really likes C. But also, I would have committed unspeakable acts just to have a compiler switch to enable bounds checking, no matter how badly that would have affected performance.
Even when one knows what one is doing - or thinks so, anyway - these errors are so easy to sneak in and such a pain to track down. (Especially since we were using OpenWatcom, whose debugger did not show useful data when the program crashed, which is usually when one needs it the most...)
> I would have commited unspeakable acts just to have a compiler switch to enable bounds checking
-fsanitize=bounds in GCC6 (probably recent Clang also). Runtime overhead isn't even that high.
Lately I've been trying to use Ada for things I would use C for—it's quite a nice language that gives you all the power of C but safe by default (i.e. you generally have to try to shoot yourself in the foot). Also concurrency in Ada doesn't consist of “sacrifice kittens to Cthulhu and pray that it works”, which is a nice feature.
I had used valgrind before on a toy project to find memory leaks, but I had not used any of the other tools it offers. I remember that back then it did not work terribly well for bounds checking, but that was years ago. 2009-ish.
Electric Fence I have heard of before, but I never tried it. I will definitely come back to these links when I am working in C the next time.
Just curious - what do you do for a living that means you have to write C?
I'm interested in really learning C or C++, in more depth. I feel like I am very proficient at high-level programming (other than design patterns).
Basically, if or when the web dev industry falls off, I want to have a good backup in knowing low-level languages.
Anything in embedded systems. I write code for a high-performance satellite terminal (ground side) and also point-to-point radio systems (ultra-low-latency, millimetre-wave stuff). 90% of the code I write is in C, about 60% of the time on microcontrollers with <256KB of RAM, and the rest in a cut-down Linux running on an ARM Cortex-A8.
I'd actually prefer to use C++ for the Linux stuff (std::string, smart pointers and STL containers would save me so much time), but we don't have the C++ standard library in our custom distro.
I imagine that, as with the Modula languages, you could probably turn off safety features where needed for performance or real-time constraints. I haven't confirmed it, though.
I used to work at a small company whose main business was building waterjet cutting machines. My job was maintaining the software used to mark bad spots on leather hides and placing cutting patterns on them. (Unfortunately, I spent most of my time trying to come up with a better way of placing cutting patterns on hides which involved a lot of computational geometry and basically went way over my head. Well either that, or it was just a really tough problem.)
I got the job kind of by chance, I put out an ad in a local newspaper that basically said "programmer / sysadmin looking for work".
If you want to learn C/C++, trying to look at and understand some of the open source code floating around might be a good starting point, since there is a lot of it. Pick up some project you find interesting and try to make a few modifications. Alternatively, try building a simple project of your own and take it from there.
C has its fair share of problems, and one needs to be aware of them, but it can also be a very fun language. I cannot say much about C++ either way, I find it a bit intimidating, but that's just me.
(Of course, if you are interested in lower-level languages, Rust is getting a lot of attention these days... it claims to offer much of the performance of C/C++ while avoiding most of their problems. I have not looked at it myself, though.)
Web dev will probably die down, and with robots and automation and IoT becoming more prevalent, embedded programming will probably become popular.
But hopefully by then embedded devices won't force us to use languages like C or C++. If the manufacture of embedded platforms scales up enough (more than it already has), then maybe we'll get devices with gigabytes of RAM and insanely fast processors, so we can just write code in whatever language we like and have it be performant.
A pipe dream maybe, but it wasn't that long ago that people were exclusively writing in assembler when performance was critical.
There are Basic, Pascal, Oberon and Java compilers for embedded systems, but it is a niche market.
As for C and C++ being fast: they are now, but once upon a time their compilers generated very bad code for home computers, the ones that are nowadays used as embedded systems.
Yet mainframes less powerful than the PDP-11 already had better languages available.
So attack that from the EE side of industry. As long as it lasts, embedded has been pretty good hunting for a long time, but the culture shift you might be exposed to looks to me to be profound.
Maybe the machine is at fault? Maybe the problem is not with C, but with a hardware architecture that does not offer finer granularity than page allocations? What if everything returned from malloc() would be bound checked by hardware? What if, instead of overwriting a malloc_chunk or the saved value of the instruction pointer, a hardware interrupt would be raised? Maybe the return stack and the parameter stack can be separate? Maybe someone solved these problems in hardware before the majority of us was born?
The Intel 432 could do that (1982). Unfortunately, the Intel 432 was also slow and very complex (it's the only architecture I'm aware of that was object oriented at the microcode level).
The Intel 80286 could also do that (some of the features of the Intel 432 found their way into the 80286) but no one who lived through those days wants to return to those days (small, compact, medium, large and huge memory models anyone?).
The Intel 80386 (and really, any Intel CPU that can still execute 16-bit code) can do that. But then you are programming a glorified 80286 and well ... see above.
But okay, you want more details? You have segment registers, CS, DS, ES, SS (and the 80386 gives you a few more) that define the base address and limit. To get byte granularity, you are limited to 65,536 bytes, and you are dealing with a very odd-ball 32-bit address (16-bit segment, 16-bit offset) or a 48-bit address (16-bit segment, 32-bit offset limited to 65,535). If you want more memory per segment, you lose the byte granularity (promoting sloppy coding, etc).
How the fuck would that work? Today, the page table is cached in a TLB and modern Intel processors only have 1024 entries!
If you have a new entry in the page table for every malloc, dereferencing virtual memory will require a page walk for pretty much every memory access. You can't cache millions of entries.
To those downvoting lmm, look up C11 Annex K. There are standards for adding bounds checking and the like to C, but compiler makers don't implement them because they're "too slow." We need to demand more of our tooling makers.
Those "features" would be built into the platform. Instead of having the return stack and the parameter stack share memory space, the stacks would be kept separate by design. Instead of having malloc_chunks, the MMU would keep allocation metadata separate from the data presented to the user space. Any process overwriting an allocation would result in a segmentation fault, not on a per-page basis but on a per-allocation basis. It would be enforced by the architecture. It would not be optional. It would be by design.
And I think the difficulty of manual memory management is vastly overrated. A bug is a bug is a bug. If OpenSSL is ... safety critical, then it should be treated as safety critical going forward. Tools don't make bugs, people make bugs.
Of course, someone's free to write a replacement for OpenSSL in Rust and see how far that goes.
FWIW, there used to be a rich suite of very respectable ASN.1 verification tools, at least the subset of ASN.1 used in SNMP.
> I think the difficulty of manual memory management is vastly overrated.
And your line of thinking is why we're going on 50 years of empirical evidence that people are terrible at manual memory management, and as with politics there's a pretty small common subset between those who believe themselves competent and those who should be trusted with it.
> A bug is a bug is a bug.
An error message is not a misprint is not a denial of service is not a DB penetration is not a crypto break is not a network ownage. "A bug is a bug is a bug" as long as your code is not used by anybody or trusted with anything.
Manual memory management can be done safely and economically. This really is as simple as "the thing in the loop which has agency is the human, so the buck stops there."
And a bug is still a bug is still a bug. This is not even a single point of failure; it's an ecosystem failure.
> Manual memory management can be done safely and economically.
There is very little evidence of that, and extensive evidence to the contrary.
> This really is as simple as "the thing in the loop which has agency is the human, so the buck stops there."
Not only is that exactly the opposite of one of the few groups which did manage to get somewhat good at this (the on-board shuttle group), it's also the incorrect and inane thinking which led e.g. surgeons to resist checklists. Again, your line of thinking has only led us to half a century of failure.
Agency is irrelevant, people are good at creative elements but terrible at systematic ones, yet you're pushing more systematic work onto the one piece of the chain least suited for it, then blaming it for its failure.
> This is not even a single point of failure; it's an ecosystem failure.
> Manual memory management can be done safely and economically.
Citation? You will need evidence to back this claim up, and the evidence shows that C and C++ apps are far more vulnerable to these bug classes than apps written in memory safe languages.
> This really is as simple as "the thing in the loop which has agency is the human, so the buck stops there."
This is like saying "we don't need instruments in our planes' cockpits because the thing in the loop that has agency is the pilot, so the buck stops there".
Memory safe languages are tools that help programmers not write these kinds of vulnerabilities. We use tools because we as humans are imperfect.
While C (and its love-child C++) bizarrely appears high in most synthetic programming popularity rankings, I personally doubt more than 5% of developers (if that) ply their days in it, or have more than a passing competency in it.
Everyone is programming in Java, C#, JavaScript, and so on. Aside from myself, I haven't a single professional peer who develops in C (anecdotal, of course, but this is a pretty big net crossing multiple cities and industries) in any real way.
It just happens to be that much of the most important software is written in it. Maybe there's something in that.
I wouldn't minimize the amount of C(++), or the amount of mis-counting for jobs where C(++) is a desired skill but not primary in the job role. There are a lot of jobs out there working on embedded systems where C/C++ are king. Though I do hope that Rust gains more traction in that space to avoid certain classes of bugs.
I would guess that most development involves JS, as I would say that most development is directed at web applications of some kind, though there are backend languages as well. For the types of development jobs I'm used to looking for, I see a lot more C# and Java, with some uptick in Node and Python. Excluding PHP (because shiver).
I have several friends who work in the embedded space, and that is not small by any means.
I developed most of my hobby projects in C until recently. It's not as rare as you think. C is a good language to think in.
Lately I've been switching to Ada, which I actually quite recommend if you like C. Shame it never really caught on, but using C libraries from Ada is trivial so there's not much of a library issue in spite of lack of popularity.
~5% of developers, while small compared to the whole, still encompasses about a million of the estimated 20 million developers worldwide.
The other poster rightly mentioned the embedded space, and that is absolutely true (and indeed it is where I gained my affinity for C and C++), however there are easily 20 middleware / web / mobile developers for every embedded developer.