Hacker News

> the fundamentally flawed assumption (that you don't need dynamic scheduling, AKA OoO processing)

Could this have worked better with JIT-compiled applications, e.g. Java given a sufficiently clever JVM, where assumptions can be dynamically adjusted at runtime?

(Edit: As opposed to an AOT compiler.)



This was the grand hope but it never panned out. It’s possible now that a sufficiently brilliant compiler could make a difference, since there was nothing like LLVM at the time and GCC was far less sophisticated. One of the many acts of self-sabotage Intel committed was insisting on hefty license fees for icc, which meant that almost all real-world comparisons were made using code compiled with GCC or MSVC, which were not as effective at optimizing for Itanium. There’s no way they made enough in icc revenue to balance out all of those lost sales.

The other point in favor of this approach now is that far more code uses high-level libraries. Back then there was still the assumption that distributing packages was hard, and open source was distrusted in many organizations, so you had many codebases with hand-rolled or obsolete snapshots of things we’d get from a package manager now. It’s hard to imagine that wouldn’t make a difference today if Intel were smart enough to contribute optimizations upstream.


Yes. Open source, high-level libraries, SaaS/Cloud, good dynamic translation (e.g. Rosetta), etc. make the sort of backward compatibility that Intel/HP failed so miserably to provide much less of a big deal today. One of the driving forces behind Itanium was that not only was developing custom microprocessors and OSs for a single company expensive, but even once you’d made that investment, ISVs were reluctant to support your low volumes for any amount of love and money.


It’s definitely interesting looking at ARM now. It has been helped by consistently better price/performance, but also by the fact that things like phones meant a ton of the primitives people would need to switch server applications were already taken care of. Intel really would have been better off cutting their marketing department and hiring 50 more developers to work on open source like GCC, OpenSSL, Linux, etc.


Intel actually has a lot more software development than they're generally credited with. It's mostly "just" hardware enablement but when the Linux Foundation was still providing external numbers on Linux kernel contributions by company, Intel was one of the very top contributors.

With respect to ARM, Intel pretty much blew it, especially on mobile. They were so determined to exploit their x86 beachhead. I remember at an IDF, they were even trying to make a case for how it was important to run x86 everywhere so that Flash would run consistently.


I’ve often felt that Intel’s embrace of open source reflects a desire not to repeat the Itanium losses. They seem to have a much better relationship with key open source projects now.


> […] and GCC was far less sophisticated

With all due respect, this is simply not true. Especially for the time, GCC was the most sophisticated compiler out there, and the only one that could easily be retargeted to a new platform thanks to its intermediate representation, the register transfer language (RTL). A new code-generation backend could be bootstrapped within days using RTL. Cross-compilation for any supported target whilst running on the same host was also only possible with GCC. There were no other known precedents at the time (I am not counting pcc, the portable C compiler, as it is not comparable to GCC).

In terms of code generation, GCC was quite up there as well, although the performance and quality of the generated code varied across platforms, sometimes wildly. E.g. the native Sun C compiler consistently generated faster code for SPARC (although the C++ compiler Sun had acquired from a third party was buggy as hell).

For Itanium, the GCC backend was not efficient, and that was a well-known problem. On 32-bit x86, however, GCC generated faster and better code than most commercially available C/C++ compilers. The few exceptions were the Intel and MetaWare High C compilers (neither widely available, and both exorbitantly expensive for an average developer), and it was comparable to or faster than the Watcom C compiler (Watcom had an edge in floating-point code and its default C struct alignment rules; GCC had an edge by allowing control over how many CPU registers were used to pass input parameters into a function). Id Software ditched the Watcom C compiler for GCC (DJGPP) with the release of Quake 1 because GCC generated better code (I think John Carmack wrote about it at some point). GCC did not support Windows well, though.

GCC and LLVM have both been very sophisticated compilers, albeit pursuing different objectives. LLVM appeared partly because of disagreements over GCC's architecture and licensing: RMS resisted making GCC modular and extensible precisely to keep third parties from producing closed-source plugins. LLVM was therefore conceived as a modular, extensible design, more conducive to research and experimentation, at the expense of supporting fewer platforms. The two have since largely reached feature parity with each other (with GCC still supporting a larger number of platforms and remaining the go-to choice for embedded development).


My point in that sentence was that GCC in the late 90s was less sophisticated than it is now. As you noted, it was also not the best for Itanium (also POWER and, if memory serves, Alpha), which meant that the large amount of software compiled with it for compatibility and ease of support looked disproportionately worse. For Itanium that was a huge problem since the architecture relied so heavily on the compiler.


Wasn't icc rather popular at some point? AMD later suffered in benchmarks when it came out that icc-generated code ignored CPU feature flags when the CPU vendor ID did not match Intel's.


That depends on how you define popular. It was never used for a majority of compiler runs since e.g. no open source project could use it but performance-sensitive users definitely licensed it.

One concern I remember was correctness: one company I worked at didn’t find the benefit worth dealing with a second compiler’s quirks, and IIRC some scientists I supported evaluated it but never adopted it because some of their model output varied (classic floating-point drift).


Wasn’t that always the issue with Itanium - that it could have been fast with a sufficiently clever compiler? The problem seemed to be that no one was clever enough to write that compiler.


That's what I've heard as well. But a JIT compiler (like the JVM) might not have to be as clever as an AOT compiler, as it can change its optimization decisions later, so perhaps that might have been more feasible?



