There are two main valid reasons larger companies won't touch AMD for servers:
1) You don't know if a given linux kernel/other software will work unless you test it ... for each future version
2) The firmware updates for Intel and AMD are different.
Additionally, the excellent Intel C compiler focuses on their own processors.
The above doesn't mean you can't choose AMD, but don't assume they're interchangeable CPUs.
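The firmware point is concrete: on Linux, Intel and AMD microcode ship in different packages (intel-microcode vs amd64-microcode on Debian/Ubuntu), and which one applies can be read off the vendor_id line in /proc/cpuinfo. A minimal sketch, assuming those package names; the helper is mine, not from any tool:

```python
def microcode_package(cpuinfo_text: str) -> str:
    """Map the vendor_id line from /proc/cpuinfo to the Debian/Ubuntu
    microcode package covering that vendor."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("vendor_id"):
            vendor = line.split(":", 1)[1].strip()
            return {
                "GenuineIntel": "intel-microcode",
                "AuthenticAMD": "amd64-microcode",
            }.get(vendor, "unknown vendor: " + vendor)
    return "no vendor_id line found"

# On a real machine you would feed it the live file:
#   microcode_package(open("/proc/cpuinfo").read())
print(microcode_package("vendor_id\t: AuthenticAMD\nmodel name\t: EPYC"))
# → amd64-microcode
```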
Disclosure: I worked for Transmeta, whose entire DC was based on AMD servers. The reason was that Intel was a larger competitor for their code-morphing CPUs than AMD was.
Coincidentally, Linus Torvalds entered the USA on a work visa from Transmeta after DEC bailed on his job offer.
I bought CS22 at Transmeta's wind-down auction, which I will donate to the Computer Museum. Several large CPU designs during that era were verified on it because it was a 4 CPU Opteron with 64 GB RAM, and 32 GB RAM wasn't enough.
Aside from Apple's A-series, that was the end of Silicon Valley being about silicon. (Many of the chip engineers on my last project ended up at Apple on the A-series.)
>Additionally, the excellent Intel C compiler focuses on their own processors
This is a new and creative use of the word "excellent" to mean Intel are so dishonest they have been caught out using their compiler as malware delivery to make /your/ compiled binary test for an Intel cpu when being run by /your/ customer and if it finds your executable binary being run on a competitor, eg amd, makes the code run every slow path despite the optimised code running fast on that cpu.
Wildly dishonest. Malware delivery mechanism are somewhat more traditional uses of the English language to describe the Intel compiler.
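The dispatch described above keys off the CPUID vendor string rather than the CPU's actual feature flags: CPUID leaf 0 returns a 12-byte vendor string split across EBX, EDX, and ECX. A sketch of how that string is assembled (`cpuid_vendor` is my name for it, not anything in ICC):

```python
import struct

def cpuid_vendor(ebx: int, edx: int, ecx: int) -> str:
    """CPUID leaf 0 returns the 12-byte vendor string split across
    EBX, EDX, ECX (in that order), little-endian within each register."""
    return struct.pack("<III", ebx, edx, ecx).decode("ascii")

# Register values an Intel CPU returns for CPUID leaf 0:
assert cpuid_vendor(0x756E6547, 0x49656E69, 0x6C65746E) == "GenuineIntel"
# ...and the values an AMD CPU returns:
assert cpuid_vendor(0x68747541, 0x69746E65, 0x444D4163) == "AuthenticAMD"
```

A dispatcher that compared against "GenuineIntel" instead of testing the SSE/AVX feature bits directly is exactly what the parent is complaining about: an AMD chip that supports the fast path still gets the slow one.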
You cannot trust Intel. They've earned that reputation all by themselves.
It's sneaky, it behaves badly and counter to the user's interests, and because it's a compiler, it propagates that bad behavior (though not in a self-reproducing viral fashion). It's fairly mild on the scale of malware—I'd rank it slightly less bad than adware, but roughly on par with AV software that deletes any tools it deems to be for piracy purposes.
Actually, if you want to run Wayland and a more powerful GPU than Intel's integrated stuff, AMD has much better support, to the point that running Wayland isn't even an option on anything Nvidia (and Intel CPU + Radeon dGPU is relatively rare). Though I'm a bit confused about whether Wayland (as an experimental option) in the upcoming Ubuntu 20.04 LTS is even supported. It should be, because X.org's universal trackpad driver sucks compared to what was available in Ubuntu 16.04, and overall gnome-shell feels clunky and a regression compared to Unity. Having just set up a ThinkPad E495 (Ryzen) over the weekend, I'm impressed with the easy out-of-the-box installation, but so seriously disappointed with gnome-shell and the state of Wayland that I'm considering alternatives.
> Though I'm a bit confused about whether Wayland (as an experimental option) in upcoming Ubuntu 20.04 LTS is even supported.
I've been using Wayland out of the box on 19.04 and 19.10 to get fractional scaling and independent DPIs on multiple monitors (Thinkpads of various ages with Intel GPUs). If it's experimental, they've certainly hidden that well. It was just a login option on the display manager, with no warnings about it during install or later.
Appreciate the experience report. But non-LTS releases are by some definitions all "experimental" - I had the impression Canonical pushed for Wayland in 18.04, but then walked it back a bit.
Hm, Wayland by default in 17.10, then back to optional in 18.04 - and so it might stay:
Not being experimental and being the default option are still two different things though. Even in 19.10, while it is installed as part of the default install without experimental warnings, it still isn't the default session option.
It is still a very slightly rougher experience than xorg - mainly due to some third-party apps not fully handling it yet. But the scaling options more than make up for it for me. One of those features (either fractional scaling or independent DPIs) was still regarded as experimental enough to require a CLI command to enable it, though.
Most kernel devs have Intel processors, and anecdotally, it does seem like you see more AMD-specific patches coming in the changelogs as people with the chips get new kernel versions and find new breakages.
Another side effect of Intel's market penetration is that the Intel implementation of any given featureset is targeted first. Things like nested virtualization may work mostly-OK on Intel by now but are still in their infancy on AMD; for example, it appears that MS still blacklists AMD from nested virtualization. [0]
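On Linux/KVM you can at least see whether nested virtualization is switched on for either vendor: the kvm_intel and kvm_amd modules each expose a `nested` parameter under sysfs. The sysfs path is real; the helper names are mine, and the value format has varied across kernels ('0'/'1' historically, 'N'/'Y' on newer ones):

```python
from pathlib import Path

def nested_enabled(value: str) -> bool:
    """Interpret the kvm_intel/kvm_amd 'nested' module parameter
    ('0'/'1' on older kernels, 'N'/'Y' on newer ones)."""
    return value.strip() in ("1", "Y")

def check_host() -> str:
    # The parameter lives under whichever KVM vendor module is loaded.
    for mod in ("kvm_intel", "kvm_amd"):
        p = Path("/sys/module/") / mod / "parameters/nested"
        if p.exists():
            state = "enabled" if nested_enabled(p.read_text()) else "disabled"
            return f"{mod}: nested virtualization {state}"
    return "no KVM vendor module loaded"
```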
> and anecdotally, it does seem like you see more AMD-specific patches coming in the changelogs as people with the chips get new kernel versions and find new breakages.
You have to factor in how stagnant Intel's chips have been for many years. There's simply not much new showing up on Intel platforms, and half of the new features are fundamentally incompatible with Linux anyway, so they will never lead to upstreamable patches. AMD catching up to Intel on feature support also necessarily means AMD is adding features at a faster rate, which requires more feature-enablement patches over the same time span.