> No bounds checking exists to make sure that the slots_ array is large enough to accommodate the specified index, because the UnsafeSetReservedSlot function assumes, as the name implies, that the caller will pass only suitable objects.
So this is actually exploiting C++ code that you would be unable to write in JavaScript or the safe subset of Rust, because both require bounds checks.
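As a contrast with the unchecked C++ slot write, here's a quick sketch (my own illustration, plain JavaScript) of the bounds checking a safe language imposes: even on a typed array, an out-of-range write is simply dropped rather than corrupting neighboring memory.

```javascript
// Typed arrays in JS are bounds-checked: out-of-range writes are silently
// dropped and out-of-range reads yield undefined, so no adjacent memory can
// be corrupted the way an unchecked C++ slot write can.
const slots = new Uint8Array(4);
slots[100] = 0xff;            // ignored: index is out of bounds
console.log(slots[100]);      // undefined: the write never landed
console.log(slots.length);    // 4: the array was not grown or damaged
```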
> This post looked at a great vulnerability demonstrating that even if you replace existing code with JavaScript, you could still be prone to memory corruption.
I mean, no one thought replacing some code with memory-safe languages would eliminate memory safety issues in the remaining code written in a memory-unsafe language. That makes no sense.
This is an interesting blog post; the initial vulnerability (playing with JavaScript prototypes to gain access to APIs you shouldn't have) even lives up to the "non-memory-safety security bug" description. But the framing around memory safety is pretty ridiculous.
Yeah, I would argue that SpiderMonkey's self-hosted JS is neither JavaScript nor memory safe. It's a different language that reuses much of the JavaScript implementation. It doesn't support either of the statements `foo()` or `obj.prop = 3`. It does support a whole bunch of memory-unsafe operations that JavaScript does not have, through intrinsics. So if you take a language and remove some stuff and add some other stuff, you end up with neither a subset nor a superset—it's just a different (but related) language.
(I work on SpiderMonkey. I've even suggested that we more explicitly recognize that self-hosted JS is not JS, and compile it differently such that you could add back some of the missing features with different, safer semantics. But it's not really my area and there's a large design space to choose the right language from, so I'm not claiming that that's the right thing to do.)
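A minimal sketch (ordinary JavaScript, not actual self-hosted code) of why a construct like `obj.prop = 3` is dangerous for engine-internal code: plain assignment walks the prototype chain, so an inherited setter can hand control to arbitrary user code in the middle of an internal operation.

```javascript
// Ordinary property assignment consults the prototype chain; an inherited
// accessor means "obj.prop = 3" runs code instead of storing a value.
let setterRan = false;
const proto = {};
Object.defineProperty(proto, "prop", {
  set(v) { setterRan = true; }  // stands in for attacker-controlled code
});
const obj = Object.create(proto);
obj.prop = 3;                   // triggers the setter; nothing is stored
console.log(setterRan);         // true: user code ran mid-"assignment"
console.log(obj.prop);          // undefined: no getter, no own property
```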
I would even go a step further and say that there was no memory safety bug in the code. The attacker-provided exploit contained the memory safety issue by calling potentially unsafe APIs "incorrectly".
I mean, if the conclusion is "you shouldn't assume that JavaScript code is free of memory safety problems", that's absolutely true. JavaScript is a memory safe language, but the VM that executes it, and the native code that it binds to, can have bugs that punch a hole in the theoretical model.
But your takeaway shouldn't be "memory safety isn't valuable because memory-safe languages can have bugs". Empirically, programs written in memory-safe languages have far fewer memory safety problems than programs written in non-memory-safe ones do.
> Although his exploit used some memory corruptions, the vulnerable code was written in a memory-safe programming language: JavaScript!
That's not really anything new. There was what amounts to an "ACL" issue at the JS level, which exposed the vulnerable C++ code. Sort of like having a vulnerable web app behind an improperly configured firewall: if you owned the service behind it, you wouldn't say you exploited the firewall, though you could say you exploited both.
VMs implemented in unsafe languages have always had this problem: QEMU, V8, the JVM, Flash, etc.
> This post looked at a great vulnerability demonstrating that even if you replace existing code with JavaScript, you could still be prone to memory corruption.
Well, no one replaced anything with JavaScript. It's a JavaScript VM. If it were a C VM, the issue wouldn't be any better or worse; "attacker is executing code in the VM" is the assumed situation.
So yeah I think it's totally fine and good to say that VMs make for interesting exploit chains across languages and runtimes - that's for sure true and one of the biggest issues with implementing safe VMs (sharing memory across runtimes, needing to keep memory as RWX, all sorts of nutty stuff). But what we see here is exactly the same standard thing we always see - an unsafe C++ program did unsafe things with untrusted input.
They did. They replaced the process for implementing VM internal operations with a process where VM internals are partially written in JS itself. If the sequence that the spec calls "GatherAsyncParentCompletions" had been implemented in C++, this leak wouldn't have occurred, because they would have been using an idiomatic list type (whether language-idiomatic, or a specialized data type standardized across the codebase), instead of a JS array.
There's a distinction here between using safe languages to implement a program and allowing an attacker to write code in your safe language and attempting to avoid sandbox escape.
For example if I wrote a web server in JS or WASM I just made things much harder for someone who connects to port 80 and tries to attack the server through a buffer overflow etc. A lot of the vulnerabilities that you would look for in a C/C++ web server just won't be present.
However a modern browser is trying something much more difficult: Letting the attacker write code in the "safe" language and trying to avoid giving them full access to the machine. This is what Flash, Java, JS, and WASM attempt and it's the boss level of defensive security work.
The article mixes those up in the intro, but it's an interesting article, especially the use of embedded floating-point constants to construct shellcode.
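A small sketch of that trick (my own illustration, not the article's exploit code): a double is 8 bytes, so chosen instruction bytes can be round-tripped through a floating-point constant that a JIT might embed verbatim in executable memory.

```javascript
// A double's 8 bytes can encode arbitrary machine-code bytes; here x86 NOPs
// (0x90) are packed into one floating-point value and recovered intact.
const code = new Uint8Array([0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90, 0x90]);
const asDouble = new Float64Array(code.buffer)[0];
// Round-trip: the same bytes come back out of the double's representation.
const out = new Uint8Array(new Float64Array([asDouble]).buffer);
console.log([...out]);  // [144, 144, 144, 144, 144, 144, 144, 144]
```

(Byte patterns that decode to NaN can be canonicalized by the engine, so real exploits pick constants whose bit patterns survive the round-trip; the one above does.)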
This vulnerability exists because JS allows changing object prototypes at runtime (which is a pretty unnecessary feature; in most cases we just need Java-like immutable classes). Because of this, leaking an empty array into userland code becomes a vulnerability that is very difficult to spot.
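A quick illustration of that runtime mutability, which a Java-style fixed class would rule out: swapping an object's prototype after construction changes which methods it resolves.

```javascript
// Prototypes are mutable at runtime: replacing one rebinds method lookup
// on an already-constructed object.
class Expected { describe() { return "expected"; } }
class Planted  { describe() { return "planted"; } }

const obj = new Expected();
console.log(obj.describe());              // "expected"
Object.setPrototypeOf(obj, Planted.prototype);
console.log(obj.describe());              // "planted": same object, new behavior
```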