I'm obviously not talking about programs written in memory-safe languages - my point is that they don't have buffer overruns.
As for programs written in unsafe languages, I think it's clear I don't literally mean that every single one has a buffer overrun; you can ensure that simple programs don't, and I am also assuming that well-designed language runtimes and compilers do not. Otherwise, you'd gain nothing by going to a memory-safe language. In the real world, something like a JIT compiler or a JVM is far less likely to be unsafe than an average app, although it does have bugs, and there have been a couple of patches for buffer overruns.
Also, programs which essentially transform strings to strings, like compilers, are easier to make formally correct - or at least correct enough that you have confidence in them - than many other classes of programs. If you have to deal with timings, polling, or complicated interactions with the environment, it all becomes much harder. Compilers can also have tight specifications of what they do - I don't think it's possible to specify the behavior of a word processor to the same extent. For many C/C++ apps, it's just hard to ever say they don't have a buffer overrun.
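Just to illustrate what I mean - a made-up fragment, not from any real codebase, but the kind of thing that makes a blanket "no overruns anywhere" claim so hard to stand behind:

    #include <string.h>

    /* Illustrative only: tmp has valid indices 0..7, but the
       "make sure it's terminated" line writes tmp[8]. */
    void remember_name(const char *src) {
        char tmp[8];
        strncpy(tmp, src, sizeof tmp);   /* may leave tmp unterminated */
        tmp[sizeof tmp] = '\0';          /* off by one: writes past the end */
        /* ... use tmp ... */
    }

Nothing about that line looks alarming in a quick code review, and now multiply it by a few hundred thousand lines.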
So you can imagine how we felt when we learned about the ROP attacks. They're going to write a compiler for their malicious code, targeting a virtual machine made up of tiny pieces of our app. What kind of a world is this? And they only need one buffer overrun. A scary thought, although, as someone already pointed out, the idea of running a little piece of existing code is not new, and the attack is not the end of the world.
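For anyone who hasn't seen it spelled out, here's a rough sketch (function names and sizes are made up, nothing from our code) of why a single unchecked copy is all their "compiler" needs:

    #include <string.h>

    /* One unchecked copy: if input is longer than buf, it overwrites
       the saved return address on the stack. */
    void handle_request(const char *input) {
        char buf[64];
        strcpy(buf, input);   /* no bounds check */
    }

    /* Conceptually, the attacker's input then looks like:
       [64+ bytes of padding][addr of gadget 1][addr of gadget 2]...
       where each "gadget" is a few instructions already in the binary
       ending in ret, so control hops from gadget to gadget - a program
       assembled entirely out of tiny pieces of our own code. */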
> I am also assuming that well-designed language runtimes and compilers do not.
I guess the big point I was trying to make in other parts of the thread is that this is sort of begging the question. If you can assume that runtimes and compilers are flawless, then of course you should use the biggest, hairiest ones you can get your hands on, and furthermore you should argue for every possible piece of functionality to be put into the runtime, so that it will work correctly and not have any bugs.
In practice, though, that is not the best possible factoring of the problem. If it were, you wouldn't be using C++! So I don't think it provides a very good guideline to use for questions like "would moving from C++ to Java make our code more reliable?" I think that answering that question properly involves a lot more reasoning about the particular runtimes and compilers involved, as well as the structure of your application — or lack thereof, if you really have an unbounded number of places you might have written a buffer overflow!
What you say about compilers' formal correctness might sound plausible in theory, but in practice all compilers contain a large number of bugs. I don't think it's even true in theory, though. Optimizing a piece of code is AI-complete, and verifying the equivalence of the source and target programs is known to be uncomputable, since it subsumes the halting problem: in theory, it's impossible even to determine whether the compiler has inserted an infinite loop into your program, let alone whether the rest of its semantics are preserved.
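To spell out the reduction (a sketch only; mystery() is a hypothetical stand-in for an arbitrary side-effect-free program): a general equivalence checker would double as a halting checker.

    /* Hypothetical stand-in: imagine any side-effect-free program
       substituted for this body. */
    void mystery(void) {
        /* ... arbitrary computation that may or may not terminate ... */
    }

    int f(void) {
        mystery();   /* runs forever, or eventually returns */
        return 0;
    }

    int g(void) {
        return 0;    /* trivially returns 0 */
    }

    /* f and g are equivalent exactly when mystery() halts, so a
       general source-vs-target equivalence checker would also decide
       halting - which we know can't exist. */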
The string → string nature of compilers does mean that they can be highly portable and not spend much time interfacing with the rest of the universe, though.
> I don't think it's possible to specify the behavior of a word processor to the same extent.
Maybe not, but you can probably specify the behavior of its string class to have no buffer overruns. I mean, you can write a word processor in Python, right?
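Something like this - a made-up sketch, not any particular library's API - is all the "specification" I have in mind:

    #include <stddef.h>
    #include <string.h>

    /* Invariant: len < cap and data[len] == '\0'. */
    typedef struct {
        char  *data;
        size_t len;
        size_t cap;
    } str_t;

    /* Every write is checked against cap; if the suffix would not
       fit, we refuse instead of spilling past the buffer. */
    int str_append(str_t *s, const char *suffix) {
        size_t n = strlen(suffix);
        if (n > s->cap - s->len - 1)
            return -1;                        /* would not fit */
        memcpy(s->data + s->len, suffix, n);  /* stays within cap */
        s->len += n;
        s->data[s->len] = '\0';
        return 0;
    }

Verify (or test to death) that one small class, and every caller that sticks to it is off the hook for this particular bug.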