> I would add that there is absolutely no technical reason why an application expressed in one language cannot be as fast as an application expressed in another.
There absolutely is.
If I write a program in a language fuzzy enough that a machine following the language's specification cannot definitively tell at compile time what I was asking it to do, there are compile-time optimizations it can never rely on.
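A minimal sketch of that fuzziness, using Python as the stand-in dynamic language: the same `a + b` expression can mean integer addition, string concatenation, or arbitrary user-defined behavior, so no single machine instruction can be chosen ahead of time.

```python
# In a dynamically typed language, the meaning of "a + b" is not fixed
# until runtime. A static compiler cannot pick one machine instruction
# for it ahead of time.

def add(a, b):
    return a + b  # dispatch decided at runtime, on every call

print(add(1, 2))      # integer addition -> 3
print(add("1", "2"))  # string concatenation -> "12"

class Weird:
    # user-defined __add__ can do anything at all
    def __add__(self, other):
        return "anything"

print(add(Weird(), 0))  # -> "anything"
```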
You make a good chunk of that back with JIT / predictive runtime compilation, but it's never going to be the same. At minimum, you're burning CPU and cache on your miss rate, to say nothing of the overhead of running the JIT itself while the program executes.
As I understand it, this is the entire purpose of things like Vulkan / WebAssembly. (I realize they're low-level targets rather than high-level languages, but the same principle applies at any level of the stack.)
Happy to have someone tell me I'm wrong, but the original quote flies in the face of everything I learned in language / compiler design.
This is a good comment. Ideally, a JIT would reach some asymptotic improvement and then stop running; it would only need to start again if the code changes.
I think, however, there is a fundamental communication issue in talking about "compilers vs. languages". Optimizing compilers, by definition, rewrite your code into a "better" form. Even if you are writing C/C++ you probably don't really know what's happening at the register level. Heck, with today's CPUs I wonder if even assembly programmers know what's actually happening in the registers!
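Even CPython's comparatively modest compiler rewrites your code: constant expressions are folded at compile time, so a multiplication chain never actually executes as written. Here's a small, verifiable example using the standard `dis` module; an optimizing C/C++ compiler goes far further than this.

```python
# CPython folds constant expressions at compile time, so
# "60 * 60 * 24" compiles to a load of the precomputed value 86400,
# not three runtime multiplications.
import dis

def seconds_per_day():
    return 60 * 60 * 24

dis.dis(seconds_per_day)  # bytecode shows the folded constant
print(seconds_per_day())  # -> 86400
```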
And in x86 at least, even the assembly gets sliced and diced behind the scenes.
There have been comments to the same effect on HN before, but I look at language design as an API between humans and computers. Computers need rules as strict as possible about how to execute a thing; humans need something comprehensible. The ideal language finds ways to increase the former without decreasing the latter (in ways humans notice).