Another interesting episode "after the 486" was the switch from 32 bit to 64 bit, where Intel wanted to bury the ghost of the 8086 once and for all and switched to a completely new architecture (https://en.wikipedia.org/wiki/IA-64), while AMD opted to extend the x86 architecture (https://en.wikipedia.org/wiki/X86-64). This was probably the first time that customers voted with their feet against Intel in a major way. The Itanium CPUs with the new architecture were quickly rechristened "Itanic" and Intel grudgingly had to switch to AMD's instruction set - that's the reason why the current instruction set still used by all "x86" CPUs is often referred to as AMD64.
What I find interesting is that Intel engineers actually designed their own 64-bit extension, somewhere along the same lines as AMD64.
Intel's marketing department threw a fit, they didn't want the Pentium 4 competing with their flagship Itanium. Bob Colwell was directly ordered to remove the 64-bit functionality.
Which he kind of did, kind of didn't. The functionality was still there, but fused off when Netburst shipped.
If it wasn't for AMD beating them to market with AMD64, Intel would have probably eventually allowed their engineers to enable the 64-bit extension. And when it did come time to add AMD64 support to the Pentium 4 (later Prescott and Cedar Mill models) the existing 64-bit support probably made for a good starting point.
Around the time the K8 was released, I remember reading official Intel roadmaps aimed at the general public, and they essentially planned that for at least a few more years, if not longer, they would segment the lineup into increasingly consumer-only 32-bit parts and IA-64 on the higher end.
Part of the effort to ditch x86 was to destroy the competition that existed due to second sourcing agreements. After already trying and losing in court the case to prevent AMD and others from making compatible chips, Intel hoped to push IA-64 for the lucrative high-performance markets it dominated in PCs, and prevent the rise of compatible designs from other vendors.
They were trying to compete with Sun and IBM in the server space (SPARC and Power) and thought that they needed a totally pro architecture (which Itanium was). The baggage of 32-bit x86 would have just slowed it all down. However having an x86-64 would have confused customers in the middle.
Think back then it was all about massive databases - that was where the big money was, and x86 wasn't really set up for the top-end load patterns of databases (or OLAP data lakes).
In the end, Intel did cannibalize themselves. It wasn’t too long after the Itanium launch that Intel was publicly presenting a roadmap that had Xeons as the appealing mass-market server product.
Yeah they actually survived quite well. Who knows how much they put into Itanium but in the end they did pull the plug and Xeons dominated the market for years.
They even had a chance with mobile chips using Atom, but ARM was too compelling, and I think Apple was sick of the Intel dependency, so when there was an opportunity in the mobile space to not be so deeply tied to Intel, they took it.
I think the difference was that replacing Itaniums with Xeons on the roadmap didn't seriously hurt margins (probably helped!)
The problem with mobile was that it fundamentally required low-margin products, and Intel never (or way too late) realized that was a kind of business they should want to be in.
> and thought that they needed a totally pro architecture (which Itanium was).
Was it though? They made a new CPU from scratch, promising to replace Alpha, PA-RISC and MIPS, but the first release was a flop.
The only "win" of Itanium that I see, is that it eliminated some competitors in low and medium end server market: MIPS and PA-RISC, with SPARC being on life support.
Compaq's deep and close relationship with Intel meant that it also killed off Alpha, which, unlike MIPS and PA-RISC, wasn't going out by itself. (Itanium was explicitly meant to be the PA-RISC replacement, and in fact started as one, while SGI had issues with MIPS. SPARC was reeling from the radioactive cache scandal at the time but wasn't in as bad condition as MIPS, AFAIK.)
I never used them but my understanding is that the performance was solid - but in a market with incumbents you don't just need to be as good as them you need to be significantly better or significantly cheaper. My sense was that it met expectations but that it wasn't enough for people to switch over.
Merced (first generation Itanium) had hilariously bad performance, and its built in "x86 support" was even slower.
The later HP-designed cores were much faster and omitted x86 hardware support, replacing it with software emulation if needed, but ultimately IA-64 rarely ever ran with good performance as far as I know.
Pretty sure it was Itanium that finally turned "Sufficiently Smart Compiler" into the curse phrase it is understood as today, and definitely popularized it.
> It’s as if they actually bought into the RISC FUD from the 1990’s that x86 was unscalable, exactly when it was taking its biggest leaps.
That's exactly what was happening.
Though it helps to realise that this argument was taking place inside Intel around 1997. The Pentium II was only just hitting the market; it wasn't exactly obvious that x86 was right in the middle of making its biggest leaps.
RISC was absolutely dominating the server/workstation space, this was slightly before the rise of the cheap x86 server. Intel management was desperate to break into the server/workstation space, and they knew they needed a high end RISC cpu. It was kind of general knowledge in the computer space at the time that RISC was the future.
Exactly! But this was not just obvious in retrospect, it was what Intel was saying to the market (& OEMs) at the time!
The only way I can rationalize it is that Intel just "missed" that servers hooked up to networks running integer-heavy, branchy workloads were going to become a big deal. OK, few predicted the explosive growth of the WWW, but look around at the growth of workgroup computing in the early 1990's and this should have been obvious?
I'm not sure that's a fair description of server workloads. I'm also not sure it's fair to say Itanium was bad at integer-heavy, branchy workloads (at least not compared to Netburst).
The issue is more that server workloads are very memory bound, and it turns out the large OoO windows do an exceptional job of hiding memory latency. I'm sure the teams actually building OoO processors knew this, but maybe it wasn't obvious outside them.
Besides, Itanium was also designed to hide memory latency with its very flexible memory prefetch systems.
The main difference between the two approaches is static scheduling vs dynamic scheduling.
Itanium was the ultimate expression of the static scheduling approach. It required that mythical "smart enough compiler" to statically insert the correct prefetch instructions at the most optimal places. The compiler had to strike a balance between wasting resources issuing unneeded prefetches and failing to issue enough prefetches because the loads were hidden behind branches.
The OoO x86 cores had extra runtime scheduling overhead, but could dynamically issue the loads when they were needed. An OoO core can see loads hidden behind multiple speculative branches (dozens of speculative branches on modern cores). And a lot of people miss the fact that an OoO core can actually absorb branch mispredict penalties (multiple times) for branches that are blocked behind a slow memory instruction going all the way to main memory. Sometimes the branch mispredict cycles are entirely hidden.
In the 90s, static scheduling vs dynamic scheduling was very much an open question. It was not obvious just how much it would fall flat on its face (at least for high end CPUs).
Well, TBH it wasn't all FUD - hanging on to x86 eventually (much later) came back to bite them when x86 CPUs weren't competitive for tablets and smartphones, leading to Apple developing their own ARM-based RISC CPUs (which run circles around the previous x86 CPUs) and dumping Intel altogether.
It is interesting how so much of the speculation in those days was about how x86 was a dead end because it couldn’t scale up, but the real issue ended up being that it didn’t scale down.
Well, it turns out that it could scale up, it just needed more power than other architectures. As long as it was only servers and desktop PCs, you only noticed it in more elaborate cooling and maybe on your power bill, and even with laptops, x86 compatibility was more important than the higher power usage for a long time. It's just when high-performance CPUs started to be put in devices with really limited power budgets that x86 started looking really bad...
If this is true or not I don't know, but I worked on a project with an HP employee and we were talking about the Itanium. At some point the HP guy goes "You know we more or less designed that thing, right?"
I would tend to believe that the Itanium is an HP product, given that they've always seems more invested in the platform than Intel.
Yes, it was originally designed as a successor to HP PA-RISC and then brought to Intel. I don't know how much it changed from the original design during development at Intel.
Why not both? If "-ium" makes nerds think of an element name, and others of a premium product, all the better. I'd bet both of these interpretations were listed in the original internal marketing presentation of the name...
A dungeon with glass doors and emergency exit signs? In that case, I can imagine at least two alternative scenarios:
- "↑TIX∃" is not a mirror image of "EXIT", but some dwarven runes that mean something else entirely.
- The sign might be a ruse meant to lure you into a trap.
If you look at the detailed answers, some of the models have similar answers (e.g. Nemotron Nano 12B: "Suspicious of dungeon riddles, viewing the inscription as a potential trap or red herring."), but I'm not sure whether it's because they identified the word EXIT and thought it might be misleading, or because they didn't understand it...
One more similarity is how hard it seems to be to break up with an abusive partner: when I saw the "Windows announces Recall - Linux increases its Steam market share by 25%" meme, I checked under https://www.gamingonlinux.com/steam-tracker/, and yes, in May 2024 it went up - from 1.9% to 2.32%. But in February 2025, it was back down to 1.45%. It has rebounded since, to 3.38% in January this year, but dropped to 2.23% in February. Not sure where these big fluctuations come from - maybe Linux gamers don't really play that often, so they only log on to Steam sporadically?!
Ok, I guess that explains the floppy shown in the 1995 "episode". Because floppies were already on their way out by 1995 - you still used them to copy data from one PC to another, but most software came on CD-ROM.
Except that telemetry can give you more complete (and foolproof) information than what users report. But yeah, that could also be solved by having debug info that users can attach to their report, the app doesn't have to "call home" for that...
I agree, but it's a cost/benefit thing. Most OSS projects aren't big enough to do anything with the telemetry, so you're just paying in goodwill for no reason.
Never mind an outsourced receptionist, some of those calls could be handled simply by the mailbox. Of course, some people will hang up once the mailbox message starts - but then again, some will also hang up once they realize they're talking to an AI chatbot, so...
This is the critical data: how many people hang up on the AI chatbot vs how many people hang up on the voice message prompt.
If it is even close, well, the AI needs to be improved.
If the AI is way ahead, but still loses/drops more than a live receptionist (outsourced or in-house), the AI either needs improvement, or to be dumped for a live receptionist, and that's kind of a spreadsheet problem (how many jobs lost in each case, vs costs).
I think the question of lost opportunities versus costs is the best thing to look at here. You could pay a receptionist like 50-60k a year, but they have to bring in the work. Maybe the AI loses a percentage of callers compared to a real receptionist, but it still brings in more than the mailbox would. But there's a cost to the AI too.
But the real question you should also ask is what else can that human do for you that the AI can't because they have eyes and ears and hands?
The question is more why employ a full-time receptionist when fractional services are available and it's an old, well-established industry. A couple hundred dollars a month could employ a human only when the phone rings, and to schedule the customer's visit plus answer any FAQs. I'm sure Ruby.com already has plenty of auto shop customers.