
There are two main valid reasons larger companies won't touch AMD for servers:

1) You don't know if a given linux kernel/other software will work unless you test it ... for each future version

2) The firmware updates for Intel and AMD are different.

Additionally, the excellent Intel C compiler focuses on their own processors.

The above doesn't mean you can't choose AMD, but don't assume they're interchangeable CPUs.

Disclosure: I worked for Transmeta, whose entire DC was based on AMD servers. The reason was that Intel was a larger competitor to their code-morphing CPUs than AMD was.

Coincidentally, Linus Torvalds entered the USA on a work visa from Transmeta after DEC bailed on his job offer.

I bought CS22 at Transmeta's wind-down auction, which I will donate to the Computer Museum. Several large CPU designs of that era were verified on it, because it was a four-CPU Opteron with 64 GB RAM at a time when 32 GB wasn't enough.

Aside from Apple's A-series, that was the end of Silicon Valley being about silicon. (Many of the chip engineers on my last project ended up at Apple on the A-series.)



>Additionally, the excellent Intel C compiler focuses on their own processors

This is a new and creative use of the word "excellent". Intel are so dishonest that they have been caught using their compiler as a malware delivery mechanism: it makes /your/ compiled binary test for an Intel CPU when being run by /your/ customer, and if it finds itself running on a competitor's CPU, e.g. AMD, it takes every slow path, despite the optimised code running fast on that CPU.

"Wildly dishonest" and "malware delivery mechanism" are somewhat more traditional uses of the English language to describe the Intel compiler.

You cannot trust Intel. They've earned that reputation all by themselves.


Malware? Are we just redefining words when we don’t like something?

> malware (n)

> software that is specifically designed to disrupt, damage, or gain unauthorized access to a computer system.

How is a dispatch system (which GCC supports) malware? Yes, Intel “cripples” AMD by requiring an Intel processor, but it’s not malware.


It's sneaky, it behaves badly and counter to the user's interests, and because it's a compiler, it propagates that bad behavior (though not in a self-reproducing viral fashion). It's fairly mild on the scale of malware—I'd rank it slightly less bad than adware, but roughly on par with AV software that deletes any tools it deems to be for piracy purposes.


I call stealing your customers' CPU cycles without permission, for marketing purposes, malware. If you don't, that's OK. We can disagree.


I literally posted the definition of malware. Where is it gaining unauthorized access?


It says 'or'.

If it disrupts, that fits the definition you gave.

Or do you think a trojan that deletes your boot sector isn't malware?


Your definition isn't the only reasonable way to define the term, and you seem to be parsing it incorrectly anyways.


Seems pretty disruptive to my layman's eyes to force code to run slower on a competitor's hardware.


Oh for sure. I’m quibbling over the use of the word “malware”


> 1) You don't know if a given linux kernel/other software will work unless you test it ... for each future version

Huh? Sure, some software may break, but there's more than enough AMD out there to make sure that Linux and other common software won't break.

> Additionally, the excellent Intel C compiler focuses on their own processors.

IME it's actually not that commonly used outside of benchmarking (among other reasons, it's fairly buggy - perhaps somewhat of a chicken/egg issue).


Actually, if you want to run Wayland and a more powerful GPU than Intel's integrated stuff, AMD has much better support, to the point that running Wayland isn't even an option on anything Nvidia (and Intel CPU + Radeon dGPU is relatively rare).

Though I'm a bit confused about whether Wayland (as an experimental option) is even supported in the upcoming Ubuntu 20.04 LTS. It should be, because X.org's universal trackpad driver sucks compared to what was available in Ubuntu 16.04, and overall gnome-shell feels clunky and like a regression compared to Unity.

Having just set up a ThinkPad E495 (Ryzen) over the weekend, I'm impressed with the easy out-of-the-box installation, but also so disappointed with gnome-shell and the state of Wayland that I'm considering alternatives to it.


> Though I'm a bit confused about whether Wayland (as an experimental option) in upcoming Ubuntu 20.04 LTS is even supported.

I've been using Wayland out of the box on 19.04 and 19.10 to get fractional scaling and independent DPIs on multiple monitors (Thinkpads of various ages with Intel GPUs). If it's experimental, they've certainly hidden that well. It was just a login option on the display manager, with no warnings about it during install or later.


Appreciate the experience report. But non-LTS releases are by some definitions all "experimental". I had the impression Canonical pushed for Wayland in 18.04, but then walked it back a bit.

Hm, Wayland by default in 17.10, then back to optional in 18.04 - and so it might stay:

https://www.omgubuntu.co.uk/2018/01/xorg-will-default-displa...

https://www.phoronix.com/scan.php?page=news_item&px=No-Wayla...

I'm a little surprised: not defaulting in 18.04 made a lot of sense, but I'm not sure why 20.04 won't see a switch.


Not being experimental and being the default option are still two different things though. Even in 19.10, while it is installed as part of the default install without experimental warnings, it still isn't the default session option.

It is still a very slightly rougher experience than Xorg, mainly due to some third-party apps not fully handling it yet. But the scaling options more than make up for it for me. One of those features (either fractional scaling or independent DPIs) was still regarded as experimental enough to require a CLI command to enable it, though.

So, not perfect, but good enough for me.


That's encouraging to hear - I'll give it a try.


Does Intel work without testing?


Most kernel devs have Intel processors, and anecdotally, it does seem like you see more AMD-specific patches coming in the changelogs as people with the chips get new kernel versions and find new breakages.

Another side effect of Intel's market penetration is that the Intel implementation of any given featureset is targeted first. Things like nested virtualization may work mostly-OK on Intel by now but are still in their infancy on AMD; for example, it appears that MS still blacklists AMD from nested virtualization. [0]

[0] https://github.com/MicrosoftDocs/Virtualization-Documentatio...


> and anecdotally, it does seem like you see more AMD-specific patches coming in the changelogs as people with the chips get new kernel versions and find new breakages.

You have to factor in how stagnant Intel's chips have been for many years. There's simply not much new stuff showing up on Intel platforms, and half of the new features are fundamentally incompatible with Linux anyway and thus will never lead to upstreamable patches. AMD catching up to Intel on feature support also necessarily means AMD is adding features at a faster rate, which requires more feature-enablement patches over the same time span.


That will change though.



