Intel's Immiseration (thechipletter.substack.com)
185 points by rbanffy on Aug 8, 2024 | 218 comments


The biggest problem I've seen with Intel is that they are "getting high on their own supply" and the whole tech press enables them with the sole exception of Charlie Demerjian.

They should pick a few random people out of the phone book and pay them handsomely for their advice; that way they might start to "see themselves as others see them". Most of all they need to recognize the phenomenon of "brand destruction", which has manifested in clueless campaigns such as "Ultrabooks" and in brands that should have been killed off years ago ("Celeron"; sorry, low-performance brands have a shelf life, and there's a reason Honda is still making the Civic but GM is not making the Chevette). They should also write into their bylaws that they are getting out of the GPU business permanently: they've made so many awful products it will take 15 years for people to get the idea that an Intel GPU adds value, instead of it being this awful thing you have to turn off so that it won't screw up graphics in your web browser when you're using a discrete GPU. People might get their heads around the idea that an Intel CPU is a premium brand if it had a GPU chiplet from a reputable manufacturer like AMD or NVIDIA.

(Oddly, when Intel has had a truly premium brand, as in SSDs, they've ignored it. Hardly anybody noticed the great 95th-percentile latency of Intel SSDs, because hardly anybody realizes that 95th-percentile latency is what you feel when your computer feels slow. Intel SSDs were one Intel product that I would seek out by name, at least until they sold their SSD division. Most people who've run a lot of Intel SSDs swear by them.)


A lot of these are symptoms of the root cause: a bad product. Ultrabooks aren't a terrible concept; they're the Wintel version of Macbook Airs. Making a non-Apple Apple device is a fine business strategy, and Samsung has made plenty of money off of it. The problem was that Intel chips continually crippled them, making them hot and slow with no battery life. People don't like Celerons and iGPUs because they all run at the speed of molasses.

Any of these branding decisions would have worked fine, if the products did. But no amount of marketing will fix bad engineering.


I don't think I agree at all with your assertions about ultrabooks and blaming intel CPUs.

To be honest, all x86 processors are reasonably efficient if you pick the right power points to run them at. My experience with my laptops is that windows will arbitrarily use 2-3x idle power draw while on battery for no apparent reason. There is no reason for a CPU to be running higher than idle the vast majority of the time in a laptop, for tasks that you would be using a laptop for on battery.

I think most laptops are just poorly designed and Windows isn't reliable about using minimal power. Manufacturers make big laptops with 60 Wh batteries (instead of 100 Wh), they put bloatware on them that consumes excess power at idle, MS Windows doesn't reliably ensure low power consumption on battery (seriously, I have to hibernate/shut down my laptop when I stop using it, otherwise it will somehow consume its entire battery in 8 hours of sleep mode), and so on. Most problems with Windows laptops aren't Intel's fault, IMO.

I agree that Intel really should clarify, at least, the difference between their "Celeron" class (e-core only) and "Core" class processors (with P cores), since there's such a massive difference in performance capability between them.


>Most problems with Windows laptops aren't Intel's fault, IMO.

Intel works very closely with Microsoft, and it is absolutely on Intel for bringing forth the most inexcusable failure mode: laptops "sleeping" with the CPU still on and proceeding to cook themselves in your backpack because they thought they were still on mains power.

Why is this on Intel? Because they removed a C-state that actually slept properly in favor of a C-state that theoretically saved more power but in practice almost never works with how people actually use laptops.

AMD might also share the blame here. I don't know, because I don't care to know further with how stupid everything is.


Agree on the sleep state stuff. idk whose fault it really is, but at least on the software front I mostly blame Microsoft, because it's really their duty to deliver consistent, reliable power consumption on battery power.


Everyone involved does a pretty poor job imo.

The processors/chipsets seem to do what they're supposed to. However, it's far too easy for other parties to create a scenario that prevents the system from reaching deeper C-states.

Motherboard manufacturers produce boards that can't go below C3, despite showing up to C10 in the BIOS. The actual level of support won't ever be mentioned.

PCI devices, e.g. wifi cards, can prevent the system from reaching deeper C states entirely.

Putting devices into a PCI slot connected to the CPU lanes rather than the PCH can also prevent the system reaching the desired states. The CPU slot will frequently be the only choice.

Operating system defaults often prevent the system from reaching deeper c-states. Linux has been worse for this than Microsoft in my experience.
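If you want to see where things actually stand on Linux, the core-level idle states are at least visible in sysfs. A minimal sketch (assumes Linux with the cpuidle interface; package C-states like PC8/PC10 are a separate thing and need something like turbostat, this only shows per-core states):

  # Minimal sketch, assuming Linux with the cpuidle sysfs interface exposed.
  # Prints how often core 0 actually entered each idle state.
  from pathlib import Path

  CPUIDLE = Path("/sys/devices/system/cpu/cpu0/cpuidle")

  for state in sorted(CPUIDLE.glob("state*"), key=lambda p: int(p.name[5:])):
      name = (state / "name").read_text().strip()   # e.g. POLL, C1, C6, C10
      usage = int((state / "usage").read_text())     # number of times entered
      time_us = int((state / "time").read_text())    # total time spent, in microseconds
      print(f"{name:>6}: entered {usage} times, {time_us / 1e6:.2f} s total")

If the deeper states show zero usage while the machine sits idle, something (a device, a driver, an OS default) is holding it up.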


Anecdata here, but I bought a ThinkPad T14 in 2022. My colleague bought the same at the same time. We both have identical machines except for me opting for an Intel processor and him picking an AMD. My laptop has heating issues and terrible battery life. His lasts an entire day with no heating issues. Forum research shows others in the same timeframe are facing the same issue as me, so it is entirely Intel related.


I'd still be more inclined to blame a software issue. Do you have the exact same software installed on both?


Yes. I have bare minimum s/w on the windows 11 that came with it.


> My experience with my laptops is that windows will arbitrarily use 2-3x idle power draw while on battery for no apparent reason.

Preinstalled bloatware is a massive problem for the average consumer. My partner is constantly surprised how well windows runs on the computers we have in our house - she’s so used to windows computers running all sorts of crap all the time. Our media box is just bare windows, but with all the obnoxious stuff disabled. (Like start menu news items, game bar, telemetry and all that).

Most work computers running windows come with antivirus and rootkits like crowdstrike. And personal computers are usually even worse. I bought a brand new HP machine a few years ago which idled at about 15% cpu usage, even on battery. It got hot and ran itself flat in no time, just sitting there. Turns out it was running some bloatware hp audio noise cancellation plugin or something all the time, even when the microphone wasn’t in use.

Modern computers are crazy fast. But there's not much power left over if you install a clown car of crap on them.


The day I lost all faith in Windows was the day when one of their top guys said that dual-core CPUs will be great because they will be able to offload antivirus software to a core while users use the remaining core for using their computers.

No words on increasing security and reducing the attack surface of Windows. Just tuck that antivirus there, keeping the status quo while depriving users of the benefits of increased processing power.


> There is no reason for a CPU to be running higher than idle the vast majority of the time in a laptop, for tasks that you would be using a laptop for on battery.

True. I once got suspicious and opened the process list window. I had installed a server so I could stream to my Roku device, and I wasn't streaming, yet the server was sucking up quite a bit of compute time, all the time.

I uninstalled it.


I've bought i7 machines whenever I bought a new machine with my own money (so have my employers) but I've had various i3 and i5 desktops come into my possession and always been disappointed by limitations baked in. For instance I was hoping to install Steam & Proton on an i3 machine for entry level gaming but found it wouldn't talk to a 1030 card I had laying around.

On paper it looks like Intel machines from 2010-2020 have sufficient I/O bandwidth but when you look at the artificial restrictions Intel puts on i3 and i5 machines (unless you get some special motherboard with PCIe switches) it's best to assume that you won't be able to "get there from here" and that many seemingly reasonable configurations of on-board peripherals and add-on peripherals "just won't work."

It reminds me of the hassle it was to get expansion cards to work in the day of the "PC compatibles" that used the ISA and other early buses: back then you had to mess with jumpers and which card went in what slot to resolve IRQ conflicts. It was well documented, PC builders all knew it, and I was always able to get it to work.

In the PCIe age though I have often been running into undocumented or poorly documented limits, PC builders who are totally ignorant of this (they always buy i7 and/or high end motherboards I guess.), etc.

Related to that in my mind is the equally obscure situation with USB 3, which is that the laptops I've had have all had undocumented limits on how many USB devices you can plug in. In the USB 1 era it was documented that you could plug 127 devices into a USB tree; the USB 3 doc doesn't guarantee anything! You would think you could plug your laptop into one or two USB 3 hubs with a lot of ports, but my experience is that I'd start having trouble when I had 4-5 devices plugged in; then I plug in a mouse and either the sound card or a mass storage device gets unmounted.


> For instance I was hoping to install Steam & Proton on an i3 machine for entry level gaming but found it wouldn't talk to a 1030 card I had laying around.

this is absolutely a skill issue, there are no limitations on plugging anything into an i3, and in fact i3s are popular for industrial and SOHO servers since they supported ECC during the Skylake era.

the rest of your comment all springs from the apparent issues you had installing a pcie card and doesn’t really say anything once that point is dismissed.


> My experience with my laptops is that windows will arbitrarily use 2-3x idle power draw while on battery for no apparent reason

Same with my experience. My computer had a hardware issue recently, so I'm on a Steam Deck. When I'm on Windows, the whole system stays at 50%+ load even when I only have a browser and Discord open. Playing games on it is so laggy that it can't even hit 50 fps stably. But the same setup somehow runs at a completely stable 60 fps on SteamOS, even though it is literally the same hardware.

Windows is really badly optimized for low-power hardware.


And battery life with most laptops (even the Framework) is pretty horrible on Linux compared to Windows without excessive tweaking (and even then).


Windows sleep mode has been power hungry since the introduction of Modern Standby, which allows for network connections even during sleep.


> will arbitrarily use 2-3x idle power draw while on battery for no apparent reason

You might be thinking about Linux? IIRC OS X wasn't particularly impressive about that back in the x86 days and high-end Macs had horrible battery life.


No, this is Windows. It absolutely can use minimal power, but my experience closely monitoring my laptop's consumption is that it will regularly spike from 5-6 W to 15-20 W for extended periods, which obviously obliterates battery life. I've been unable to figure out why.
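(For what it's worth, on the Linux side the same kind of monitoring is easy to script. A minimal sketch, assuming a battery exposed at /sys/class/power_supply/BAT0; some firmware only reports current_now/voltage_now rather than power_now, which the fallback covers:)

  # Minimal sketch, assuming Linux with a battery at /sys/class/power_supply/BAT0.
  # Samples discharge power once a second so you can catch the spikes in the act.
  import time
  from pathlib import Path

  BAT = Path("/sys/class/power_supply/BAT0")

  def watts() -> float:
      power = BAT / "power_now"
      if power.exists():
          return int(power.read_text()) / 1e6       # microwatts -> watts
      ua = int((BAT / "current_now").read_text())   # microamps
      uv = int((BAT / "voltage_now").read_text())   # microvolts
      return ua * uv / 1e12

  while True:
      print(f"{watts():5.1f} W")
      time.sleep(1)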

My particular specimen came with a 97 Wh battery (XPS), so it would be able to linger on for ~10+ hours if it could maintain its idle power consumption with brief spikes to do work, but in reality it's basically always less than that in my experience (and worse, it's inconsistent as hell about it).

The other issue is sleep states, which as another commenter mentions leads to laptops using all of their battery in your backpack so it's dead by the time you actually try to use it :'(

It's made worse by comparison to modern phones, which deliver reliable, consistent power consumption throughout the day - so a laptop that regularly lasts a fraction of the time and doesn't know how to sleep just looks awful by comparison.


I don’t think I have ever had a Windows laptop last 10 hours without a consistent effort to keep it in low power modes, keeping screen brightness as low as I can, and actively keeping the number of running processes and background programs as low as I can.

Sure 10-15 years ago, we were lucky to get 3-4 hours of battery life so getting 8-9 is great but they advertise these things as having upwards of 20 hours.

Linux not coming close to that is “acceptable” because they aren’t advertising it constantly as the reason you need to buy the latest laptop or get onto Windows 11. Apple is more honest in this regard and it really is the vertical integration that allows them to control the entire system.


While the Intel laptop CPUs may not have been good enough, that is not where Intel has lost money.

The current financial results have shown decent profits for the consumer CPUs, which could have been greater if Intel had succeeded in producing more Meteor Lake CPUs. Intel could not produce as many Meteor Lake CPUs as demand required because their new Intel 4 manufacturing process has low production yields.

While Intel lost money in their foundry, that was unavoidable due to the high expenses needed for catching up with TSMC.

Where Intel had great losses was in server CPUs. The reason for these losses must have been that Intel had to grant very large discounts to the big buyers of server CPUs, in order to convince them to not buy superior AMD server CPUs.

These losses in server CPUs were easily predictable, because even if Intel succeeds in reaching all the publicly announced milestones of their roadmap on time, the roadmap itself does not hope to reach parity with the AMD server CPUs before H2 2025 (and that assumes that AMD will not introduce Zen 6 during 2025, which would move the target).

It looks like Intel has made a PR mistake. Even though their public roadmap has shown all along that they will continue to have big losses for at least one more year, that was obvious only to technical people, who could compare the future Intel products with the AMD products, because Intel has not said a word in their roadmap about what the competition will have at the same time. For non-technical people, the Intel press releases sounded more optimistic than they should have, leading to disappointment caused by financial results that should not have been surprising.


> The current financial results have shown decent profits for the consumer CPUs,

Only after serious financial alchemy. The client group has a decent op margin, but the foundry group has an absolutely terrible one, and the client group sells products that they buy (internally) from the foundry group at arbitrarily set prices. What would the client margins be if the internal price they pay for their products was set so that the foundry breaks even?


iirc the reason for that split in margins is precisely because Intel started pricing their foundry internally as-if they were external sales (i.e. made at competitive prices).


Are client margins different for Lunar Lake, where the compute/platform tiles are made by TSMC?


A bad product is a symptom of an even deeper cause: lack of competition. The decline of Intel started long ago; the financial results here are just a very, very lagging indicator of that. The lack of competition, combined with very aggressive anti-competitive practices, allowed them to survive on a pretty bad product line. In a healthy market, competition would have forced Intel to improve well before the point we are at now.


They are only moving because AMD is finally providing that competition, but it's not fair to say that "if only AMD had gotten their act together sooner, intel would be a better company."

Ultimately, intel chose to let intel rot while AMD was out of the picture. (ignoring that intel's backroom deals with Dell and co. are a big part of what pushed AMD out)


AMD and ARM-based solutions too. I can't picture wanting to run Windows on ARM for a few product generations but for server use I'd have no problem w/ Linux on ARM.


ultimately it was a focus on marketing that led them to kill engineering

(if itanium and the many failures and lucky breaks that led up to it hadn't already convinced you that intel might never have had good engineering)


Intel tried to get away from x86 over and over again with radically overengineered architectures that didn't work (as well as mainstream architectures) and failed:

https://en.wikipedia.org/wiki/Intel_iAPX_432

https://en.wikipedia.org/wiki/Intel_i860

https://en.wikipedia.org/wiki/Itanium


> Ultrabooks aren't a terrible concept; they're the Wintel version of Macbook Airs.

I'd say both are a terrible concept. I mean, it sells, but it's still terrible. A laptop being thin is not really such a virtue, and people's lives would be better with a decent-travel keyboard, better heat dissipation and more ports than with 5-10mm less laptop height.


> people's life would be better with a decent-travel keyboard

Most people don't generally seem to agree with that, though (not explicitly, they just generally don't prioritize things like that)


Heat dissipation and lots of ports are why I have a desktop. I don't want that from my MacBook Air. A fan isn't necessary or desirable for a machine that is primarily used for web browsing, watching movies, and light gaming.


For some of us, a laptop is essentially an easily movable desktop. It's not portable as in "I want to travel with it", it's portable as in "I can easily take it anywhere within my house/office".

Those kinds of "movable desktops" benefit from performance, additional ports, etc, and instead have little use for thinness or less weight.

I simply cannot see myself ever owning a bulky, noisy desktop ever again.


The desktop replacement segment of the laptop market exists, but does not undermine the validity of more portable machines or make a fanless laptop a terrible idea. The viability of desktop replacement notebooks as desktop replacements is in large part due to the efficiency gains driven by the demand for the other kind of laptop—the ones that are thin and light. A 5lb+ slab of a notebook is still a pretty constrained form factor compared to the thermals and acoustics of a mini tower desktop.

I can understand you characterizing desktops as bulky, but calling them noisy in comparison to desktop replacement laptops is ridiculously wrong. Small fans with restricted airflow are what makes computers noisy, and they're much easier to avoid with a desktop form factor. Your laptop cannot sustain desktop performance without getting loud, and if it doesn't get loud under sustained load it's because your laptop is choosing to instead get slow.


My argument was actually focused on "I want to travel with it". But - I travel with it to places where I need to connect to things: a monitor in a hotel room; a USB device of a friend or host; a physical network in an institution or company or a home I'm visiting. That's what the ports are for.

As for the keyboard - I absolutely need a decent keyboard to type, traveling or no.


> it's portable as in "I can easily take it anywhere within my house/office"

As I grow older, that is becoming less and less true. Unless it's for quick tasks, I prefer my office chair, my big monitor and my comfortable keyboard. Laptops are still great for ad-hoc computing and traveling, and I'd prefer a less power-hungry device for these.


I've recently started using a fanless laptop as my primary day to day device, and 100% agreed I'm never going back.

I'll give up a LOT of performance for silent and long battery life, because I find waiting a second occasionally much less annoying than a fan blasting randomly, or my legs baking. SSDs have largely resolved the perceived UX latency issues that used to occur, CPUs and GPUs now are far more powerful than needed for everything but specialized stuff (compiling, encoding, stuff that intentionally maxes out the machine. Regular applications do not do this, on purpose, but they do regularly bog down on I/O).

Though the screen on this thing is awful.


You know what I want for my desktop, is a battery, so my computer doesn't reboot when I lose power for 5 seconds. Yes I know about UPSes, but they're $100+ and unwieldy. I just need 30 seconds grace period to save my work!

This is the main reason why I probably won't get a desktop again.


Amortized cost of a desktop is like 1/3rd that of a laptop of comparable performance. You're willing to pay thousands more because you don't want to pay for a UPS? This doesn't check out.

The space argument makes more sense and from that I assume you're in a space constrained apartment.

At the end of day I'm more curious what causes those intermittent power outages. Are you regularly tripping breakers?


It happens twice a year; it goes out neighborhood-wide during a storm or when an animal chews through something.

Anyway I don't know where you get that "amortized cost" number. They seem a lot closer in price than that to me.


Depends on the class of performance you want. I think at the low end laptops are surprisingly cost-competitive with desktops, but if you want high-end laptops either don't exist or are hilariously expensive.

Also depends on the type of laptop. A desktop-replacement laptop with extremely basic frame and screen (ie. cheap and shit) can have excellent specs at a competitive price, whereas an XPS or mac with a great screen, aluminium frame etc will be very expensive relative to the specification.


This. Additionally desktop parts last longer and can be upgraded individually. They also have good resale value, offsetting the cost of upgrades (especially true for GPUs).


True even for the mini form factor (the 1L desktop that uses laptop components). As you do not have to stay close to it, you get a great experience out of the box. I could use a Chromebook for what I still use the laptop for (browsing and ssh from the sofa).


Silly and dangerous idea: open up a power supply, study it thoroughly, then connect a 300 volt battery bank in parallel with the DC link capacitors.


The UPS is a battery...


I would have agreed with you a couple months ago, but I picked up an Air for travel so I could leave the 15" MBP at home, and I'm really enjoying it and appreciating how thin and light it is. Wouldn't want it for home or office use, but for occasional on-the-go use? Perfect. And I imagine for many it would suit their needs pretty well.


Thin and light are two independent variables; you can make a lightweight PC which still has a good keyboard, good thermals, repairability and ports by making it a few mm thicker. Its utility on the go comes primarily from weight and not thinness.


I travel frequently. Thinness is absolutely a significant mobility factor.

I actually hate thinness for the sake of being thin, I want practical amounts of USB-A ports and a god damn headphone jack. But a thinner laptop is easier to stow in my luggage and handle in space-limited environments like a hotel room desk or a seat table in an airliner or lounge.

Likewise, I don't want a chungus portable phone from 1993, but I also want my smartphone to be thick enough to reasonably grasp and have a headphone jack.

This is also putting aside the fact that most people in general clearly prefer thinner and lighter computing over thicker and/or heavier.

There are times and places for a bigly desktop replacement tabletop heavy enough to crush your kneecaps and hot enough to burn your skin, but most people don't need that.


Sure, but thin and compact are also good for travel, as it can slide easily into a bag pocket or even a large purse and not take up much room.


The new MacBook Air design is really fantastic.


Sorry, but both of these are, IMHO, irrelevant. You can't connect them to (almost) anything, and they don't have a decent keyboard to type on. I might as well use an etch-a-sketch or something.

Speaking of being light, though - a Lenovo X220 is pretty light. With today's tech and the amount of money an AirBook goes for you could reduce the weight much further.


I don't really care that much about brands nor laptops for that matter, but my M1 Air is really, really nice given the money I've paid for it. Why should I burden myself with carrying a big laptop thing when I can instead carry a Macbook Air that is much lighter and which does very good job?


There is nothing wrong with the concept of an ultrabook. Just terrible implementation.


The implementation is left up to the OEMs but they seem to have done pretty well. Thin and lights are in a pretty good spot overall.

Have people forgotten how bad laptops were in like 2010? Great big chunks of plastic that somehow still feel cheap and weightless, keyboard flex, mushy keys. They were really quite bad.


That's wrong and the market has proven it. The M3 MacBook Air is the best computer for most people - period, not just best ultrabook or best laptop. There are almost no compromises for 99% of people's usage.


Well, I mean, if you ignore the 8GB RAM base model.. and the pricetag at 2-3x a similarly-performing windows equivalent, and probably +50% on similar screen windows equivalent.

I can make a great computer that meets everyone's use cases for 2x the going price too :).

Unfortunately it is far too easy for a non-savvy person to walk into a store and buy a piece of shit windows laptop for the same price as a macbook, because they don't know what they want or need and the salesmen aren't that interested in making sure you get good value :'(


> and the pricetag at 2-3x a similarly-performing windows equivalent, and probably +50% on similar screen windows equivalent.

Can you actually list any of these "equivalents"? I'm actually genuinely curious

Just to get this straight you're saying that you can get a Windows laptop that's just as fast, has comparable battery life and build quality as an M3 Air for $400-650 (i.e. 1/3 - 1/2 of the cheapest 16 GB Air)? Really?

> Unfortunately it is far too easy for a non-savvy person

So could you help those of us who are not that savvy by being more specific?


>Can you actually list any of these "equivalents"? I'm actually genuinely curious

I paid ~780 Euros a year ago for a 14" Lenovo with an 8 core AMD Ryzen 7840HS, 32GB LPDDR5X and 1TB NVME, and 2560x1600 IPS display. I can't justify spending double the Euros for a Mac with less RAM and storage. I just can't. Now you can even get laptops with OLED screens for that price.

Oh and best of all, the screen can open almost 180 degrees, meaning I can prop it up higher on a portable flip stand for better neck ergonomics and still view the screen at a 90 degree angle, vs MacBooks which can only open ~135 degrees, reducing the positions at which you can use the machine: the moment I prop one up, the screen faces my chest instead of my face when opened all the way.

Macs may be technically superior on paper, but they have all these design quirks and limitations because they only want you to use the products in the very specific way that one design guru in Cupertino thought is the right way to use a laptop while looking cool, and I say screw that, I'm the customer, I should be able to use it however I think is best for me.

If I were doing stuff like photo/video editing for a living or developing iOS or macOS apps, a MacBook would totally be worth it due to it being better for those tasks (or the only option), but for my own use cases of web + Windows + Linux + light PC gaming, Apple's products make no sense to me, especially given their prices in Europe compared to cost of living and wages.


From my recent purchase decision:

14 macbook pro, 32gb of ram, 1tb of storage, 120Hz high res screen: 2900€

HP Elitebook, same specs, same weight, etc.: 1200€

https://www.campuspoint.de/hp-elitebook-845-g11-sondermodell...

Surely, the macbook pro will be better in some ways but at that price difference "better" is just not good enough.


A Macbook Air M3 is significantly faster than this HP Elitebook in for example compilation tasks.

I don’t know about all of you but in my day to day work I compile code very often, and I used personally a work issued M2 Air and a work issued HP Elitebook Raptor Lake. The M2 Air was leaps faster, leaps.

An older and more traditional colleague who had an Alder Lake workstation laptop was surprised at how much faster the M2 was than his machine.


The M3? No, the opposite. It is pretty much a wash with the M3 Pro and for 3x the price this is not at all "significant".


I don't think you could find a windows laptop with comparable build quality (or display quality, which I consider important if you intend to use a laptop for its intended purpose) for 1/2 the price of the air. I do think you could find a laptop in the same performance ballpark (ie. 75%+ of the single core speed, which seems to be where the newest gen Intel mobile chips sit in comparison). This is a bigger difference than I expected, admittedly, though the gap closes if you care about multicore performance numbers.

Here in AU, a 16GB Air with 512GB SSD = 2400 AUD. The base model (8/256) = 1800 AUD.

I would consider an appropriate comparison model to the air to be something like this: https://www.harveynorman.com.au/acer-swift-go-evo-14-inch-co...

For 1/2 the price, you get a laptop with an OLED screen and one of the newest/fastest non-M3 processors you can buy. It also happened to be the cheapest laptop with a new Meteor Lake processor I could find, though I'd have to imagine there are laptops with worse specifications in other areas for cheaper.

You can obviously spend up from there to improve build quality, get more SSD space (probably by swapping out the stock NVMe drive), more RAM (if you're lucky maybe they have SODIMMs, though I'd guess most laptops, especially non-16", might have soldered memory).

In terms of non-savvy vs savvy, mostly it's that a non-savvy laptop buyer, generally speaking, doesn't understand:

- Screen quality/tech. Resolution, TN vs IPS vs OLED, matte vs gloss, maximum brightness

- How different processor generations compare. Especially walking into a store, it's common in my experience for laptops with 5500u (Zen 2 6 core) to be sitting next to a laptop with a new eg. 125H, both at similar prices. The newer processor is the pick (performance wise), but this is completely nonobvious. It's particularly bad at lower specs, where you can have an i3 N300 (8 e cores) next to perhaps a Ryzen 5700u or Intel 13600h...

In theory salespeople should lead them in the right direction, but likely as not they can spend twice as much on a fancy laptop that looks schmick (and probably is) when they could get 90% of the same experience with something like the Acer above. eg: https://www.harveynorman.com.au/hp-envy-x360-14-inch-ultra-7...

This is admittedly a 2in1, and it has a better processor, but it could well be worse in every other way.


The market? I don't know a single person personally or professionally with a MacBook Air. I haven't seen one in person in like 7 years.


Anecdotes don't really cut it though. I see MacBook Airs all the time, even had a MBA M1 that I handed down to my partner when I got a MBP M3, she uses it both personally and professionally. Two other friends have MBAs, a third just bought one yesterday.

At cafés in Stockholm I'd say 50% of the laptops I see people working with are a version of the MBA. At local universities it's an even higher percentage.


> The M3 MacBook Air is the best computer for most people.

Maybe, but that doesn't imply that it's the best because it's an ultrabook, or that a non-ultrabook version of the MacBook Air (not the MacBook Pro...) wouldn't have been better.


To be fair from the perspective of most consumers what downsides does the Air have that are only there because it's an "ultrabook"?


Who are you to say what would make people's lives better?


An actual computer user who's probably at least as qualified as whoever made these decisions at Intel.


I'm an actual computer user and I disagree. Now what?


Now, hopefully, we all come to a simple realization: opinions vary.


But don't they need to get into the GPU business?

Surely GPUs or similarly very parallel machines for things like ML training are very needed and will remain very needed. Seeing as firms such as Groq have done fine, surely Intel can do something of that sort without much difficulty?

Since their GPU business has been unsuccessful perhaps they can go for whatever makes sense, as there's nothing of this sort that they can release that will compete with their own products.


The AI boom is, in theory, a godsend for Intel and AMD. You can focus on creating good tensor computation hardware, without having to worry about getting gamers on board. No need for "Game ready drivers" or compatibility with complex legacy graphics APIs.

Of course, there's the elephant in the room for general purpose tensor machines, which is CUDA, famously closely guarded by Nvidia. But with the new wave of "above-CUDA" APIs like TensorFlow, PyTorch, and Keras, there's an opportunity to skip the CUDA layer altogether.
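At the framework level that's already mostly true: to a first approximation the same PyTorch code can target whichever backend is present. A minimal sketch (assuming a recent PyTorch build; the "xpu" backend for Intel GPUs only exists in newer releases, hence the guard):

  # Minimal sketch of device-agnostic PyTorch, assuming a recent build.
  import torch

  def pick_device() -> torch.device:
      if torch.cuda.is_available():                            # NVIDIA (or AMD via ROCm builds)
          return torch.device("cuda")
      if hasattr(torch, "xpu") and torch.xpu.is_available():   # Intel GPU backend, newer builds only
          return torch.device("xpu")
      return torch.device("cpu")

  device = pick_device()
  model = torch.nn.Linear(1024, 1024).to(device)
  x = torch.randn(8, 1024, device=device)
  y = model(x)   # same code path regardless of vendor
  print(y.shape, "on", device)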


The AI boom that led to Nvidia's market position is still, 3 years on, almost entirely speculative. The only products that are currently driving value (i.e., that are providing consumer surplus, for which businesses and individuals are actually opening their checkbooks) are paid chat-bots.

Nvidia ate the whole pie because they were in the right place at the right time with shovels to sell. Getting into it now, in house, would be a huge bet that the pot of gold will be there when they get to the end of the rainbow -- and they have a long way to go.


It's off topic, but congratulations on the triple mixed metaphor. Not sure whether you use the shovel to dig up the pot of gold to buy the pie, or sell the shovels to bash through the pie crust and get to the rainbow....


Intel actually has it even worse because they completely failed to bring ARC to market in time. The product came out (very) late, underdelivered, and permanently damaged the brand reputation. Yes, the drivers and thus the cards have gotten better, but who's talking about them now? No one.

They say the opposite of love is indifference, and basically everything Intel has sold outside of CPUs and NICs has suffered market indifference. Actually, maybe even the NICs, given everyone seems to prefer Realtek NICs over Intel NICs these days.

Nvidia and AMD have it way better because if the market doesn't love them, you bet the market will hate on them instead of be indifferent.


Realtek WiFi NICs are dogshit. The Intel AX210 shines where the others are slow and unreliable.


The whole situation with Intel and AI is also baffling to me. They have an excellent product in this space - Gaudi2. Faster than A100 and very attractively priced. I’ve tried it, it works fine. Gaudi3 is also about to come out, and it’s twice as fast as Gaudi2. Yet nobody is buying. I get why you wouldn’t want this for training - it requires some minor code modifications and your Triton kernels are worthless. But for generative inference this is just what the doctor ordered.


Gaudi was made by an Israeli company Intel acquired in 2019 (not an internal project). Gaudi 3 has PCIe 4.0 (vs. H100 PCIe 5.0, so 2x the bandwidth). It's strange for Intel, of all vendors, to lag behind in PCIe. And Nvidia has SXM for larger models (5x bandwidth).

"N5, PCIe 4.0, and HBM2e. This chip was probably delayed two years." -wmf


PCIe doesn't matter - these accelerators talk to one another, and Gaudi is outstanding in this regard. HBM2e also doesn't matter if you run decent sized batches (which you should, for throughput). In fact, HBM2e, being far less supply constrained, might even be an advantage.


Besides the fact that it does matter (e.g. loading/saving LLMs), it speaks to how Gaudi is not even using Intel's latest technology. They are clearly not in the main production pipeline. So Gaudi is not going to save Intel.


But you don’t “load” models during inference. They are already on-device.


> generative inference this is just what the doctor ordered.

Naive question, wouldn't you need a descent tool chain for inference as well ?


Assuming you mean “decent” toolchain, it’s actually pretty decent. Could it use some polish? Yes. But any decent ML engineer would be able to get a high performance server (or a batch job) running in a relatively short time. Or in almost no time at all if using a lot of the FOSS models. You just basically create a model in PyTorch and then hand it over to Gaudi stuff which patches it with optimized, Gaudi specific ops and converts things into an inference graph.

“Closeness to CUDA” is less important for inference because all the experimentation is already done by then, and if need be you could just implement the model using Gaudi ops to begin with, in a span of a few days including tuning and debugging.
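To give a feel for how small that handoff is, here's a minimal sketch of Gaudi inference, assuming a Gaudi box with the Habana/SynapseAI PyTorch bridge (habana_frameworks) installed; the module and call names are as I recall them from Habana's docs, so treat them as illustrative rather than gospel:

  # Minimal sketch of PyTorch inference on Gaudi, assuming the Habana PyTorch bridge is installed.
  import torch
  import habana_frameworks.torch.core as htcore  # registers the "hpu" device and Gaudi ops

  device = torch.device("hpu")

  model = torch.nn.Sequential(
      torch.nn.Linear(4096, 4096),
      torch.nn.ReLU(),
      torch.nn.Linear(4096, 4096),
  ).to(device).eval()

  with torch.no_grad():
      x = torch.randn(32, 4096, device=device)
      y = model(x)
      htcore.mark_step()   # flush the lazily accumulated graph so it actually runs on the device
  print(y.shape)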


AI doesn't yet exist with the right amount of collected data; we're going to hit a wall very soon where we just get bad AI data constantly, because we are throwing data at the wall and hoping it sticks. Businesses should not be investing in AI just yet, maybe in 2-3 more years.

The businesses that should are the startups and businesses who are helping define the model, like ChatGPT and Microsoft, but the adoption is still too early. If I was McDonald's or Wendy's and I wanted to use AI to help promote sales to customers I would need to be able to grab demographic data which may not be appropriate PII data.

All of the lawsuits happening right now over data collected without permission of the data provider are going to change the landscape.


I think there is still a good business to be made taking the AI stuff we have right now and making it highly performant.

Stable diffusion models are awesome and super useful. I don't necessarily mean the simplistic text->image stuff (though it's clearly useful), but denoising, inpainting, animation from stills, 3D from stills, style changes, etc.

LLMs are at least somewhat useful.

We can radically improve image and video compression with performant enough AI.

Plenty of other applications of AI right now are already useful or would be useful if inference were made more efficient or more local (thus private). No more data needed.

You don't need some grand vision of an AI future to get a total addressable market for high performance AI that is in the same ballpark as high performance GPUs for non-AI usage.


But for every business, the chase seems to fail to find worth. I can already tell you from where I am working that we are struggling, because eventually the data is too old to be good or too incomplete. We need at least 3-4 years of collecting and storing the information to make full use of AI.


In practice it's the other way around.

In practice AMD and Intel GPUs should be suitable for machine learning, but the software story isn't good. I know I can buy an NVIDIA card and know it's a good investment, because everything works with CUDA and I can be training models in less than a day. If I went with some other brand I'd expect to put 6 man*months into figuring out the software story, and that's a lot more than the price difference in the cards.

I've been wondering about the soundness of Intel's software strategy in that OneAPI ("One" is a bad smell in marketing speak, the only premium product in my mind that has "One" in the name is "Purina ONE" pet food, the XBOX ONE is an astonishing own goal of a name because you'd never get your mom to understand that an XBOX ONE is better than an XBOX or XBOX 360) is based on OpenCL and the one thing I know about OpenCL is that I don't know anyone who likes coding for it. The idea that you could code for FPGA and GPU out of the same API also seems delusional because FPGA is (mostly) about latency and GPU is about throughput. That is, FPGA can do certain small operations insanely fast and avoid the overhead of 16-bit math when 13-bit math will do, GPU is all about doing large operations in bulk.


> If I went with some other brand I'd expect to put 6 man*months into figuring out the software story, and that's a lot more than the price difference in the cards.

Several companies are spending >$10B annually on AI compute. There are a lot more than 6 man-months of savings in it for them...


One firm saving $2B saves $2B, open source software that opens up the market for everyone is priceless.

A company like OpenAI that is mostly concerned about pace might see going with the #2 GPU vendor as risky, even if it has the possibility of saving money.


OneAPI is an implementation of SYCL. Apart from being standardized by Khronos, SYCL and OpenCL don't have that much in common, except that they are often interoperable due to many SYCL implementations running on top of OpenCL. Intel specifically has two backends, one for OpenCL and one for the lower-level Level Zero API.

Saying OneAPI/SYCL is bad "because OpenCL" is the same as saying OpenCL is good "because CUDA" (since the NVIDIA implementation of OpenCL is called CUDA and uses the larger CUDA stack).

As for the FPGA/GPU code sharing, iirc FPGA vendors including Altera/Intel and Xilinx/AMD support OpenCL as well. The idea doesn't sound that crazy to me: GPU code also often uses small data types (fp16, tf32, int4), and both work with data parallelism. GPUs operate in parallel with SIMD vectors, but a compute accelerator FPGA achieves a similar thing by essentially pipelining the program. Effective pipelining requires parallelism too, since a serial FPGA program only has one stage active, similar to how a serial GPU program only has one ALU per compute unit active. As such both are mostly suitable for massively parallel programs.


>"One" is a bad smell in marketing speak.

Thank goodness other people recognize this too. Whenever I see “One” in branding I sense a corporation bereft of creativity and afraid of risk. Alternatively, it might also indicate that there were too many executives involved and no single person had the authority to make a decision, so they picked the most bland and uninspiring option they could all agree on.

It’s so lame and milquetoast. However, there is one exception I will make, the radio station 101.1 “The One”. Here it feels appropriate and even a little endearing.


No there isn't. Let's take pytorch as an example.

CUDA is an API. OpenCL is an API. Pytorch is not.

Pytorch is very firmly GLUED to CUDA. It will probably NEVER support anything else beyond token inference on mobile devices. The only reason Pytorch supports AMD at all is because of AMD's "I can't believe it's not CUDA" HIP translation layer.

OpenCL is a real cross-platform API and with 3.0 it's finally "good" and coincidentally, intel is....half-heartedly interested in it, except they're shooting themselves in the foot by trying to also cover useless CPUs for inference/training and spreading themselves too thin (OneAPI). Because all intel can think about are CPUs. Everything must drive sales of CPUs.

At this rate just about the only thing that might save us from CUDA is rusticl. If a real, full-fat, high-quality OpenCL 3.0 driver suddenly popped into existence on every GPU platform under the sun, maybe pytorch et al could finally be convinced to give a shit about an API other than CUDA.


Intel getting into the GPU business is good, for sure and I hope they don’t give up on it.

It is a tiny thing, but Intel really seems to charge a premium for PCIe lanes. Maybe if they sell a lot of dGPUs they’ll be less stingy with the sockets to plug them in to…


I have an Arc A770 and it seems to work fine so far... though I've only had it for a month. There are some tolerable teething problems: I can't set the fan curve from Linux so I had to duct tape some extra fans, and it's incompatible with BIOS boot (requires UEFI) which should be considered acceptable in 2024. For presumably the same reason, there's no graphics output in early Linux boot until the GPU driver is loaded.

The IGPU in my previous machine's i7-6700K (Skylake) was also just fine. Intel Graphics Media Accelerator, yeah that really sucked, but that was like 15 years ago?


The main issue with ARC seems to be driver support in specific games, DX11 being particularly problematic.

Intel basically have to catch up on the ~10-20 years of kludges that AMD, Nvidia, and game devs had already implemented when the games were released.

They've been making strong improvements to their drivers though.

Their iGPU is great for home media servers. Low power draw, QuickSync can handle multiple 4k transcodes, and one less part to buy.

iGPUs still aren't a great choice for gaming on desktops, even the latest AMD APUs perform poorly in comparison to the cheaper dGPUs.


My Skylake iGPU ran everything on my PC just fine for me. That ranges from Minecraft to (apparently the most graphically intensive game I own) The Talos Principle. Sure, not always at the ultra high settings, but that's not something I care about.

Once I upgraded recently to a Threadripper I threw in an old GTX 760 because Threadrippers don't have iGPUs. The 760 also did fine. Now I finally have a bleeding-edge-ish Sparkle Arc A770, but that's only because I wanted to run a shader coding event and didn't want to force other people to care about the server having a behind-the-curve GPU.


Intel has always been an echo chamber of fiscal earnings and cutting corners to appease investors. For the longest time they stagnated on Mac hardware improvements because it would cost money; they would not have lost the Mac as a business customer if they had continued to innovate.


Also they should have had a policy of "not one transistor for the national labs" and "not one transistor for hyperscalers", particularly because anything they do to appease hyperscalers just saves them money that they'll spend on their custom silicon transition.

There's something prestigious about HPC but the story of how commercial data processing went parallel in the oughts shows how out of touch the HPC community is.

In the meantime Intel has botched the deployment of SIMD, slow-playing it to the point where maybe 7 or 8 years from now some mainstream software might support AVX-512, maybe. Somebody should be tarred and feathered for introducing the idea that features should be fused off for all but the highest paying customers. If the customer is not paying for the fused-off functionality, the shareholders are. It sounds really ruthless and avaricious and might impress some business analysts as a form of vice signalling, but it's pure waste.
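(The fragmentation is visible from userspace, too. A minimal sketch, Linux only, that checks whether the CPU you actually ended up with exposes any AVX-512 flags; this is exactly the kind of runtime feature detection software is stuck doing when the same core design may ship with the feature fused off:)

  # Minimal sketch, Linux only: report which AVX-512 feature flags the running CPU exposes.
  from pathlib import Path

  flags = set()
  for line in Path("/proc/cpuinfo").read_text().splitlines():
      if line.startswith("flags"):
          flags = set(line.split(":", 1)[1].split())
          break

  avx512 = sorted(f for f in flags if f.startswith("avx512"))
  if avx512:
      print("AVX-512 flags:", ", ".join(avx512))
  else:
      print("No AVX-512 flags (not present, or fused off on this SKU)")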


Right, it is wasteful. The reason some businesses are so successful is that they look at the common denominators rather than trying to perpetuate worth. Fewer variants, and don't wall off features. AMD did this for the longest period while getting back on its feet, but I am seeing them fail to continue down this path. They need to stop making variants and focus on 3 or 4 sets of consumer processors.


Yep, “too many SKUs” killed off the consumer HDD a few years faster than it had to die. If these companies had focus groups that reached out to ordinary people they’d realize it sounds absolutely incredible to most people that you need one hard drive for a 3-5 bay NAS, another for a 6-9 bay NAS, another if you are recording from video cameras.

Even if there was a real benefit from optimizing firmware and saving a few cents with cheap washers for drives in a low vibration environment there is a much more certain cost in that you have to qualify the firmware, risk introducing errors, etc. I can imagine that Best Buy might have been able to stock one model of high capacity HDD but no way were they going to stock more colors of WD drive than there are on the rainbow flag. That lack of product sense locked manufacturers out of the retail market.

(Makes me think of the cringe monthly hard drive roundup in Anandtech where an enterprise drive produced in huge numbers in a single SKU would usually be $80-100 cheaper than prosumer drives that, according to the spec sheet, consumed about 0.5W less and were about 3db quieter… And I have plenty of those enterprise drives still spinning after all these years, the only time I think about the noise is when the machine boots and the drives emit a throaty chirp that makes me think of a Ferrari spooling up.)


> " Somebody should be tarred and feathered for introducing the idea that features should be fused off for all but the highest paying customers. If the customer is not paying for the fused off functionality, the shareholders are."

This is very well put.

While I've always understood the economic motivation for margin optimization and the technical reality that CPU wafers aren't very granular, fundamentally, it's a long-term losing idea to intentionally fab gates that don't add end-user value. While it may work in the shorter-term, when it becomes the plan of record (as opposed to fixing a one-gen design or product mix error) it signals a shift to prioritizing value extraction over value creation.


Even more simply, their goal stopped being making the best product at the best price. Features that were fused off could have made their products better for no extra manufacturing cost. Your business is broken when you view your products as being too featureful as a cost.


that’s silly. Everyone in this industry fuses off features, AMD included.

Ryzen Pro APUs have disabled ecc support for example. Consumer RX 7000 gpus have crippled DisplayPort feature support/bandwidth. Consumer Hawaii had gimped FP64 to segment the pro cards.

At a certain point these practices are just so broad and banal that it’s unremarkable.


The current investor strategy is immediate gains, no long-term investment, because that is deemed risky; they want to buy in and bail when they've doubled their investment.


Incidentally that is exactly the reason why ECC memory isn’t standard. Intel reserved it for only the highest paying customers.


"they would not have lost Mac as a business customer if they continued to innovate"

I'm not entirely sure this is true. I mean, I guess it depends on whether one expected Intel to be able to make not just a decent chip for PCs/laptops, but also one for phones & tablets. Once Apple started dabbling in its own chips for the iPhone and iPad, it seemed inevitable that they'd expand that to their macOS systems too. Apple has been a rough customer to please for chip designers/manufacturers, I'm not sure any company could've satisfied them with a general-purpose chip.


If they played their cards right as the top dog ~20 years ago, they could have perhaps ended up with the relationship with Apple that TSMC does today.


1. Intel didn't have big fabs 20 years ago, so they didn't have anything to offer Apple that would resemble TSMC's partnership.

2. There's no money in designing licensed CPU cores as a middleman for a company that drives margins so low they're associated with suicide nets and forced labor.

3. If TSMC is lost due to Chinese aggression (which is a non-zero chance), Apple is left fabless and has to choose between importing Samsung silicon at-cost or partnering with Intel.


> Intel didn't have big fabs 20 years ago,

They had leading edge ARM chips though. If Apple used XScale (i.e. the default choice had Intel not decided to get rid of it due to "reasons") for the iPhone that would have significantly reduced the likelihood of them developing competitive chips themselves.

Also 20 years? Samsung was still making Apple's chips back in 2014.

> 2. There's no money in designing licensed CPU cores as a middleman for a company that drives margins so low they're associated with suicide nets and forced labor.

Is Apple significantly or at all worse than their competitors in this regard? In any case, the only reason Apple is designing their own chips is because there weren't any decent options for their use cases available on the market (and Intel only has itself to blame for that...)


Might take them a few years to update their software to run on the Samsung stack.

What are they going to do without all the auxiliary processors they build in?

Or do they have a team which has their software working on other people's silicon in case their supply chain gets blown up?


Also, importantly, how hard will the US government be taxing imports and propping up Intel? I think that an anti-Apple and a pro-Apple administration would both want them to stop shipping jobs offshore. There's a tariff knob that can be adjusted until only Intel (or GlobalFoundries, lmao) are feasible choices, and if the US wants their Intel investment recouped then they may well push for that kind of deal.

Personally, I find the "samurai's honor" shit where one company avoids another to be childish and stupid. If you make Apple's board choose between ending their grudge-match or blowing up their margins, I don't think they'll care much either. Or maybe I'm wrong, and the 2027 iPhone is manufactured with 24nm wafers intended for the Toyota Prius.


Companies that are built on engineering eventually become subsumed by soul-sucking profit sucking bonus sucking middle management types that just plop themselves in their big huge hierarchy and destroy the company long-term with inertia and apathy.

See also: Boeing, Medtronic, GE, AT&T.


Having a third competitor in the GPU market is great, for both Intel and the consumer. If one starts blaming newcomers for their shortcomings then we are failing to see the bigger picture.

When you talk about web browsers failing to display content because of the iGPUs, I haven't even heard of such a case. Maybe the performance is bad, but that's it. Either way, that's more of a failure of the OS and not Intel. Windows is particularly biased toward not using your dGPU on a laptop, but I blame the kernel for making badly calculated assumptions based on nothing but "!= {game, 3D modelling tool}, then iGPU", and not the hardware.


The CEO has only been in charge since 2021, bringing an engineer back to the helm. How long does it take to bring a big ship like Intel around and undo decades of internal rot? It will be interesting to see who is let go in the coming months.


Though prior to Gelsinger, Youtuber and chip leaker / rumor monger "Moore's Law is Dead" mentioned that Jim Keller's stint at Intel was so short (April 2018 - June 2020) because of internal cultural toxicity. In particular, things like leaders of major groups in Intel trying to get employees of other groups fired, so as to make their own relative progress look better. Take with a pound of salt of course, but MLiD's sources have been pretty decent.

You only resolve that kind of badness by firing a bunch of SVPs and VPs and Gelsinger hasn't done that, for many possible reasons.


>Youtuber and chip leaker / rumor monger "Moore's Law is Dead"

He might be right but that youtuber has a poor track record. He's just constantly pushing out rumours for the sake of content farming. Some of the rumours are right, some are wrong as hell.


I knew Intel was a dying company when Jim Keller quit in disgust. The man is pure productivity, a relentless optimiser that just wants to build great things. Intel broke him and he ran away screaming.

PS: Microsoft demoting Jeffrey Snover (inventor of PowerShell) is in the same category of a business decaying from the top down. He quit too, and is now working for Google.


> How long does it take to bring a big ship like Intel around and undo decades of internal rot?

If it takes that long, then this is your actual problem, and not how many engineers are in the loop at the board level.


>If it takes that long, then this is your actual problem

Most IC design life cycles are 5+ years. Expecting a turnaround in less is part of the problem Intel is having currently.


I'm skeptical. Big company, design & manufacturing cycles on the order of years because chips are that sophisticated these days, I don't know what you'd expect to see within 3 years of a new CEO.


to be honest, they should have done a layoff 3 years ago, as soon as Pat Gelsinger came onboard, and fired 2 entire layers of management.

No engineers or support people, just pure middle managers - fire 2 or 3 layers. If they had done that 3 years ago, they would be in a better position now.


Gelsinger would be shooting blind, and in order to cut enough to force compliance he would ultimately be sabotaged as the entire workforce united against him, unless he had the ability to bring in a few thousand loyal soldiers to manage every unit. He's got the much harder job of ferreting out deep corruption and incompetence that perniciously hides in corners, protected by knowing who buried which bodies. Rough job that he cannot possibly accomplish. Instead he's probably trying to root out these toxic messes and quarantine them in safe environments where they can be productive enough to realize ROI but disempowered and segregated, while he works on fixing the broken systems and teams that got Intel to this point. Maybe this layoff was in his back pocket waiting for the right time to cut the shit loose with cover, or maybe he failed to realize that should have been his first order of business and this was a desperate move to stop the hemorrhaging, and Intel's books are going to be even worse two years out when the consequences of firing qualified staff begin to manifest.


Your second quote has always resonated very strongly with me, both as a concept and also from the original-ish verse:

  Oh would some power
    the gift He gives us
  To see ourselves
    as others see us
  It would from many
    a blunder free us
  And foolish notion
  What airs in dress
    and gait would leave us
  And even in CPU fabrication!
— adapted from “To a Louse”, R. Burns


Intel should never be directing its branding at the end user in the first place. Joe Public doesn't give a rat's ass what is powering his device. Ask anyone off the street if they know who AMD are and what they produce. Concentrate on making a good product and the device manufacturers will come to you.


>Ask anyone off the street if they know who AMD are and what they produce.

That's why Intel, AMD, and Nvidia are so obnoxious about plastering laptop keyboards with their stickers. They want consumers to know what's inside.


Intel has spent the last 25 years training consumers to look for the "Intel Inside" sticker. It's the only thing that's kept Intel afloat while they butcher their engineering talent.

People shouldn't care, but they do.


I'm not so sure nobody cares. Because of the battery life of Apple's M1-M3 machines, people already associate ARM chips with power efficiency.

People will always want their laptop/tablet/phone batteries to last longer, and if makers of those devices know that consumers associate ARM with power efficiency, they will want to take advantage of that.


They know that the latest Apple laptop is the fastest, but they don't know an ARM from a leg, and neither should they need to.


apple: "Our latest laptop has <magic maguffin>!"

consumer, at the store: "why doesn't this thing have <magic maguffin>!? bring me something that has <magic maguffin>!!!"


> instead it's this awful thing you have to turn off so that it won't screw up graphics in your web browser when you're using a Discrete GPU.

You can disable integrated graphics chips?


from https://genius.com/Top-cat-a-friend-in-need-panik-and-m-rode...

""" A Friend In Need Is A Friend Indeed

But a friend with weed is better

So if you want to get high

Bring your own supply

Or we will know you as a joker smoker """


Disclaimer: I used to work at Intel.

Specifically, I was the second person on the planet to game on a Celeron proc. How? My best friend and I ran the Intel Developer Relations Group Gaming Test Lab in the latter half of the '90s.

Our job was specifically to test all PC games in development against the Intel Celeron procs and the AMD alternatives... with a focus on the subjective gaming performance of the Celeron as an "optimized" feature leader, whereby developers were given handsome marketing monies to optimize their games against the latest SIMD instructions so the games would be more performant on the Celeron.

The goal: show the world that a $1,000, fully capable gaming PC was possible, within budget, and desirable.

---

The issue at the time was graphics bottlenecks -- all the pieces had yet to come to the fore: AGP, Unreal, OpenGL, ~~NERDS~~ NURBS!, games, graphics chips (Voodoo, 3dfx, blah blah).

Celeron should have died long ago - certainly once the first actual GPUs came along and did the heavy lifting.

I have a lot of thoughts about Intel's mess (Transmeta really fucked Intel up, in the way an abusive step-relative would, and caused them to lose focus)...

Then there's just the ridiculous amount of marketing over engineering....

(If anyone worked at Intel in the DRG circles in SC5 - and has access to emails from that era - search my name and the thread I started asking why we can't just physically stack CPUs on top of each other... (this was prior to the voxel timeline) and was laughed at. It wasn't until several years later, on a hike with a labs head, that I found out about the 64-core test CPUs that were only then coming out.)

---

I was just having shower thoughts about Intel's future as, effectively, computing grout -- a mechanism to get data from the real world INTO NVIDIA GPUs and then back out into the real world for display. And that's it. That may be the only thing Intel does in the future: grout for delivering data whose computation happens entirely elsewhere - as NVIDIA's CEO himself stated, "All compute will be done on NVIDIA chips" - with the data passing (minimally) through Intel's grout and then delivered again to an NVIDIA desktop GPU...

Intel is like the maintenance staff of a home. NVIDIA is the architect, interior designer, party planner and end user.

(Intel was my first ever stock package: $70 with a $125 option price. 10K shares. I left before ever vesting... it was the late 90s and I had to chase the dream intc: https://i.imgur.com/U9PWURv.png )


Even then, why is Intel better grout than AMD today?


A good point to ask -- I meant any "CPU-only" technology company.

Personally, what I think Intel should have shifted to was owning and building all the fabs.

Sure, they have fabs and are building more, etc... but they missed the timing a bit.

I don't follow them closely enough any longer - but they really should have gone all in on being the US TSMC... rather than playing the bit of catch-up they are in now...

They aren't going anywhere (the birth of Intel out of Fairchild was in missile circuits, and the US war leviathan will always have Intel circuits in its big-ticket items. Forever.)


Does the new owner provide the same performance?


They were the leader for a good 10-15 years. With Nehalem they wiped the floor with the competition left and right.

Intel's tooling is still better than AMD's.

Celerons and other such CPUs are in plenty of the notebooks owned by people who buy laptops below $1k.

The SSD business happened and went away; I guess it was a margin issue. Low margin, ultra-mass-market product, with lots of other companies happily building them.

I think it's critical for Intel to have GPU expertise for a stable long-term strategy. It was critical 10 years ago, when they tried and failed hard with Larrabee, but they need to do that or buy AMD/Nvidia. With AI and modern workloads you can't have just CPUs anymore, and with CUDA and ML that was clear a long time ago.

The market itself is also totally bonkers. When AMD surpassed Intel, Intel still sold like hotcakes, and still does. Fab capacity in the world is limited. If you can't get an Nvidia GPU for $200-300 because they completely ignore that price range, then you have only AMD and Intel left.

Intel already took the low-end GPU market through its iGPUs.

Intel knows how to build GPUs and things other than CPUs. It would be a total waste for them to stop.

Intel should have bought Nvidia or done something else, but the Intel CEO is an idiot. He stated publicly that Nvidia's position is just luck. That is such a weird, idiotic take that it's bonkers someone who would say it is the CEO of Intel.

The amount of R&D Nvidia did, and still does, left and right is crazy. He totally ignores how much Nvidia did the right things at the right time.

Not sure where Intel is heading, but with their fab knowledge and capacity, their cash flow, and their core tech, they should be able to swing back. I was hoping that would have happened already; it's unclear to me why they struggle so, so hard.


The classic moat metaphor that the OP article and others use needs to be fleshed out to match Intel's predicament.

A moat protects a castle that adversaries want to take over. The presence of the castle defines what can and cannot be done with the surrounding landscape. But if the castle ceases to protect what people care about, or make meaningful additions to the environment, it becomes irrelevant and the presence of the moat makes no difference.

Intel's problem isn't that competitors want to storm the castle and achieve domination over the landscape that x86 controls. It's that the competition have built their own castles on the other side of the river, and a lot of the peasants are tilling the lands around Castle ARM and Chateau NVIDIA.

To put it another way, Intel thought the castle was "control of computing" and the moat was "leadership in x86", but irrelevance comes a little closer with each passing chip generation. It is fortunate for Intel that corralling an ecosystem into existence around an alternative to x86 is an insanely difficult task, but it has been done with ARM, it has been done (albeit for a niche) with NVIDIA, and it can be done again with whatever comes next.


>Intel's problem isn't that competitors want to storm the castle and achieve domination over the landscape that x86 controls.

IMO it's both. While the importance of x86 is declining, AMD is aggressively eating whatever part of it is left. I also think that in the long term, as Intel and AMD build better x86 chips, the value proposition of ARM will slowly fade in favor of something like RISC-V.


Asking because I don’t know: what is the value proposition of ARM?


The value of ARM is that anyone with enough money can license Arm cores and incorporate them in their own products, which can be optimized for some custom applications.

The level of customization possible with an x86 CPU is much less. You must buy a complete computer board or module and incorporate it in your product.

While for custom applications it is easy to create a superior solution with Arm cores, for general-purpose computers it is hard to compete with the Intel and AMD CPUs. All the computers with Arm cores have worse performance per dollar than similar x86 computers. (For instance, there was recently a thread on HN about a credit-card-sized computer with an Intel N100 CPU, at the same price as or lower than a Raspberry Pi, but with much higher performance.)


AMD does offer custom x86 - see the Steam Deck, Surface laptops, and the Xbox and PS4/PS5. Given there aren't a ton of small fish making custom parts, they are excellent at what they are made for.

AMD is pushing x86 toward Apple-ARM levels of power efficiency (the best I've seen is 16-hour battery life on a device - I think MacBooks still beat this), and on performance per watt I haven't really seen ARM top the charts. They are awesome, and I want ARM and RISC-V to really shine in laptops, but the only player on the PC side is Qualcomm, who was told by Arm to destroy their only flagship.


> AMD does offer custom x86

Not the same thing. On x86 you have to pay AMD or Intel to design something for you.

With Arm, you get to decide who designs your chip, or even have your own in-house CPU design team.


These sorts of processors are available from Intel as well (if anything, more available, as you can buy low-end 5 W processors with modern e-cores in them, e.g. N95/N97). The commenter above is referring to these, and they are common in mini-PCs with 8-16 GB of RAM that cost <$200. These sorts of processors crush the ARM competition at the same level right now (i.e. the Pi).

In fact, AMD doesn't seem to have anything in the same segment currently, although they do exist in the higher tiers alongside Intel with their laptop processors.


Radxa X4 low-cost, credit card-sized Intel N100 SBC goes for $60 and up https://news.ycombinator.com/item?id=41007348


You've missed one of Arm's central value propositions, which is power efficiency - the reason it has more than 99% of the smartphone market.


There have been tons of proprietary CPU architectures with the same power efficiency as Arm. Only the x86 architecture is an outlier that requires an unusually complicated instruction decoder, which may degrade energy efficiency a little.

Arm has eliminated most of the competing CPU architectures not by providing better power or energy efficiency, but by its business model of being licensable to anyone.

The ARM ISA was somewhat better than MIPS and SPARC, both of which were handicapped by experimental features that later proved to be bad ideas. However, there have been many other RISC ISAs more or less equivalent to ARM. Only the business model, not its technical advantages, set ARM apart from the crowd.


You're conflating the characteristics of the ISA with the value proposition offered by the designs using it. Not the same at all.

Other historical architectures typically targeted higher performance.

Arm specifically went after low-power applications, which continues today, as we can see in the priorities behind the design of Arm and x86 cores, for example.


Arm "specifically went after low-power applications" only in comparison with x86 or in comparison with a few other architectures restricted to workstations and servers, like DEC Alpha or Intel Itanium.

Before 2000, there were at least a dozen CPU architectures that went for the same low-power applications as Arm. There were a lot of microcontroller or processor vendors and each one of them had at least one proprietary ISA, if not 2 or 3 or even more different proprietary ISAs.

More than 20 years ago, I redesigned various kinds of communication equipment in order to replace many other kinds of CPUs - for example Motorola MC683xx, ColdFire, or IBM PowerPC CPUs - with Arm CPUs.

In none of those cases did the Arm CPUs have lower power consumption or any other technical advantage. In fact, in all cases the Arm CPUs were technically inferior to the CPUs they replaced, which required various hardware and software workarounds.

There was only one reason the Arm CPUs were selected to replace their differently architected predecessors, and that was the lower price. The lower price was in large part due to the fact that there were already many competing vendors of Arm CPUs, so if you did not like one of them it was easy to switch to another vendor.


I get that you don't like Arm, but that doesn't change the fact that low power was and is central to their value proposition - and this latter fact doesn't preclude other firms having low-power offerings.


It's been said many times, and the supposed correlation between ISA and power efficiency has been debunked many times. ARM is power efficient because most ARM implementations are power efficient. Right now AMD's x86 Strix chips are about on par with Qualcomm's X Elite.


Agreed. Power efficiency is still a central part of the Arm value proposition, though. Others are competing with Arm on this with real designs in laptops, but not in, for example, smartphones.


(Simplifying) Arm provides verified, tested, standardized, reasonably well-designed chip designs (logic circuits) that your company can purchase a license for and then send off to be etched onto a wafer, cut, packaged, and soldered to a printed circuit board.

Those Arm CPUs support a standard (but ARM-flavored) assembly programming language. (Formally, an Instruction Set Architecture.)

Designing your own chip previously was risky because you might have logic or hardware bugs that were very hard to debug, and then you had to hope that someone would bother to write assembly code that works on your chip. Since you probably designed your own assembly language that co-evolved with your chip, those assembly developers are going to sink a lot of time into understanding your chip's and assembly language's quirks to wring performance out of them.

RISC-V standardizes a RISC-V-flavored assembly language (ISA) and also provides certification test suites to prove that a particular chip design executes the RISC-V ISA according to specification.


In PCs, ARM CPUs perform just as well as or better than AMD64 CPUs but have much better battery life. In the cloud, ARM CPUs are much cheaper (ca. 25% less) for the same or better performance.


Not quite. I think we need to separate the value of ARM the instruction set from that of any specific implementation.

1 - In terms of pure efficiency, there is nothing magical about ARM. Looking at AMD's latest Strix laptop platform, it is about on par with Qualcomm's new Arm laptop chips.

Apple's M-series CPUs are still better. However, a lot of that efficiency is platform-derived.

2 - The lower cost in the cloud is a function of the middleman being removed. Amazon sells Graviton cheaper simply because it doesn't have to pay AMD's or Intel's markup.


AMD's stormed the castle with Ryzen long ago and planted their flag, but since everyone's still asleep at Castle Intel they haven't really noticed.


There's sorrow and confusion at Intel that the server and desktop markets are shrinking. People just aren't buying as many computers any more. Not to worry though, they'll probably want more in the future.

The rest of the world has noticed that people are buying lots of computers. This doesn't seem to cause any cognitive dissonance for Intel, though. Perhaps "computer" means "thing Intel sells" and excludes the work of the heathen outsiders. It makes for some quite confused reporting from the financial press.


We are coming up on 7 years since the first Ryzen chip. In those 7 years AMD went from very behind, to a little behind, to on par, and finally to market leader. The fact that Intel let this happen in such a short time frame is a bit mind-boggling.


AMD and Intel have always been racing each other closely. Remember when the Athlons outperformed NetBurst P4s while Intel was trying to push the latter to higher clock speeds? Then the Core series put Intel in the lead again, and now they're losing to AMD once more. Here are some 15-year-old benchmarks for your amusement:

https://www.tomshardware.com/reviews/athlon-64-power,2259-9....


AMD is not the market leader in any segment, by any approximation: https://www.theregister.com/2024/05/10/amd_gains_on_intel/


In server CPUs, Intel still has a larger market share than AMD, but judging from the published financial results, where most of the loss is in server CPUs, Intel has managed to keep that diminishing market share only by accepting heavy losses caused by huge discounts, which is what drove the share price fall - so perhaps trying so hard to retain market share was not an optimal decision.

So AMD definitely leads Intel from the point of view of profits in the server CPU market segment.

There are also various smaller markets where AMD leads comfortably over Intel by volume; e.g. Amazon currently sells many more AMD CPUs than Intel CPUs.


My phrasing was confusing and I apologize for that. I do not mean market leader in the sense that there are more AMD CPUs than Intel CPUs. I'm speaking about hardware performance.


You are right on multi-core performance, but Intel still has a slight edge in single-core performance, and in idle power usage, which is important to many. Intel also has a stronger offering in the now-popular mini-PC segment.


Didn't they lose the single-core lead recently, despite running their chips so aggressively clocked that they're burning out at stock settings?

The APU systems from AMD work really well as mini-PC systems, though I'm unfamiliar with Intel's products there. Perhaps they're better.


The X3D chips typically edge out Intel in single-core performance, and Intel chips also need to be run hard to match them. Idle power consumption is pretty bad on AMD desktop chips, but under load AMD is typically far more efficient. Regardless, on the desktop any of Intel's advantages at the high end are somewhat moot considering that all of those chips ship with, and are currently running, microcode that is overvolting and degrading the silicon, causing permanent damage. There are plans for a fix, but there's a good chance that fix will come with lower performance. The N100 is a bargain, but as soon as you want passable graphics, AMD becomes the only option.

Intel is facing credible competition from both AMD and ARM in most market segments, amidst a quality-control catastrophe, cultural problems, and R&D problems. Their future outlook doesn't look very good.


If we are mentioning mini PCs then I think it is only fair to mention that every single console went to AMD for chips.


Market leader in what? Intel's Q2 revenue is over double that of AMD's. Intel still controls well over 60% of the x86 space. Intel and AMD's most performant x86 offerings are fairly close to each other.


Performance. Intel still manages to squeak out some wins against AMD when it comes to single-threaded CPU tasks, but in every other metric they are chasing AMD.


Not only in performance, but also in profits, as shown by the Intel vs. AMD financial results. Especially in the server CPU market segment, where Intel has a diminishing market share and losses of billions, while AMD has an increasing market share and profits.

The new Zen 5 has much better single-thread performance than any Intel CPU (e.g. the slower 5.5 GHz Zen 5 launched this week matches a 6.0 GHz Raptor Lake), so for a couple of months, until Intel launches Arrow Lake S, AMD will have much better single-thread performance. After that, Intel and AMD will again be at parity, with negligible differences in ST speed.


Power too. I think we're still in the happy place where buying Epyc is cheaper than being given Xeons for free, once you look at the electricity bill over the life of the machine.


Is this actually true? At 200 W of continuous (24/365) draw you're talking roughly 1,750 kWh per year; at $0.20/kWh that's about $350.

These server processors seem to be charged out at multiple thousands of dollars. Is the difference in efficiency in servers actually as large as claimed? Surely a sufficient discount on the capital cost of a processor could more than make up for the extra power usage.

I guess it all depends on the comparative price and power consumption; it just feels to me like the difference would have to be rather large.
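To make that back-of-the-envelope concrete, here is a minimal sketch in Python. The 200 W delta, $0.20/kWh price, 5-year service life, and the PUE figure for cooling overhead are illustrative assumptions drawn from the numbers floated in this subthread (plus a guessed cooling factor), not measured data.

  # Back-of-the-envelope: does a power-hungrier server CPU cost more in
  # electricity over its life than a purchase-price discount saves?
  # All inputs below are illustrative assumptions, not measured data.

  HOURS_PER_YEAR = 24 * 365  # 8760

  def lifetime_energy_cost(extra_watts, price_per_kwh, years, pue=1.5):
      """Electricity cost of drawing `extra_watts` continuously for `years`.

      `pue` (power usage effectiveness) roughly accounts for datacenter
      cooling overhead; set it to 1.0 to ignore cooling.
      """
      extra_kwh = extra_watts / 1000 * HOURS_PER_YEAR * years
      return extra_kwh * price_per_kwh * pue

  if __name__ == "__main__":
      # Numbers from the thread: ~200 W extra draw, $0.20/kWh, 5-year life.
      bare = lifetime_energy_cost(200, 0.20, 5, pue=1.0)
      cooled = lifetime_energy_cost(200, 0.20, 5)
      print(f"Extra electricity over 5 years: ${bare:,.0f} "
            f"(about ${cooled:,.0f} including cooling overhead)")
      # Roughly $1,750 (about $2,600 with cooling) -- comparable to a
      # meaningful discount on a multi-thousand-dollar server CPU.

Whether that electricity figure dominates the purchase price depends mostly on how large the real per-socket power gap is and on local electricity prices, which is exactly the uncertainty raised here.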


You're not considering the core count difference. AMD has 128 cores / 256 threads at 2.2 GHz with a 360 W TDP, while Intel has 144c/144t at 2.2 GHz with a 330 W TDP. Cloud providers care about density and power usage. More cores per server = less power = more servers per rack = more capacity = more opportunities for sales of products.
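For what it's worth, here is a quick sanity check on those figures in Python; the core, thread, and TDP numbers are the ones quoted in the comment above, not verified spec-sheet data, so treat the ratios as rough.

  # Rough density comparison using the figures quoted above
  # (128c/256t @ 360 W vs 144c/144t @ 330 W). Commenter-supplied numbers,
  # not verified spec-sheet data.
  parts = {
      "AMD (128c/256t, 360 W)": {"cores": 128, "threads": 256, "tdp_w": 360},
      "Intel (144c/144t, 330 W)": {"cores": 144, "threads": 144, "tdp_w": 330},
  }

  for name, p in parts.items():
      print(f"{name}: {p['cores'] / p['tdp_w']:.2f} cores/W, "
            f"{p['threads'] / p['tdp_w']:.2f} threads/W")

  # By these numbers Intel is slightly ahead on cores per watt (0.44 vs 0.36)
  # but well behind on threads per watt (0.44 vs 0.71), since its e-cores
  # lack SMT -- so "more cores per server" alone doesn't settle the
  # density question.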


I'm not really in the space - I was curious. I think people tend to overstate the importance of power consumption relative to the price of the products they buy and the value of their time (e.g. if it's a workstation part, higher performance is worth a significant tradeoff in power if it gets jobs like compilation done 10% faster, given the employee time that saves).

For servers, I'm always curious because even though they run 24/365 (so power consumption is very important), the capital cost of new server chips is incredibly high - e.g. the 144c chips I presume you're referring to cost $10k+, so even over a 5-year service life the electricity is probably only ~20% of the cost of the chip alone, and relative to the AMD chip the additional inefficiency could easily be compensated for by an appropriate discount.

Obviously all of this is why Intel still exists in the DC; they just can't charge the same prices AMD can, is all.


With great power comes great heat output. Lower power = lower heat output = lower bill for cooling, or the same bill at more capacity = more margin = more profit for the cloud providers :) The other thing to consider is that less power usage on a global scale = less CO2 output.


It looks like the gap has narrowed, though TDP might not mean what it once did. The comparison I remember is a 64-core Rome chip against two 28-core Xeons, where the Rome chip was significantly faster at something like 1/3 the power consumption of the dual-socket setup. I've got one of those 64-core chips and haven't followed the market as closely since.


Is that true?

At least for consumer desktop CPUs, AMD is significantly ahead for gaming (with X3D), while for MT/productivity workloads Intel and AMD seem to be pretty even (if we ignore power usage and the whole melting-CPU thing..). Which makes sense, since Intel generally offers more cores per dollar these days.


I feel like Intel has embarked upon a bunch of good ideas like SDI (https://www.intel.com/content/www/us/en/manufacturing/softwa...), IoT / Edison (https://ark.intel.com/content/www/us/en/ark/products/84572/i...), Silicon Photonics (https://www.intel.com/content/www/us/en/products/details/net...), and SDN (https://www.intc.com/news-events/press-releases/detail/640/i...).

But they've managed to fail to capitalize on any of them. What's wrong with them?


Ask them why they sold their SSD division. It's almost intentionally bad.


A new https://en.wikipedia.org/wiki/Xeon_Phi would be killer for AI right about now.


The "paradox of the x86" is simply the classic Innovator's Dilemma. In fact the history of the last 20 years against ARM could be a case study right out of the book.

Worse for Intel, AMD flubbed it time and time again, but now Intel is too weak to defend against a resurgent AMD.

Meanwhile they are cutting headcount deeply, yet only now have they suspended the dividend. Madness!

They should take a page out of AMD's book and spin off fab. That new company can then be flooded with "strategic" government aid, and maybe the rest of Intel can catch up (or not), but it would at least give the shareholders a better chance. Right now the combination is acting like two anchors tied together.


> They should take a page out of AMD's book and spin off fab.

It's way too late for that now. But even with the benefit of hindsight, if we go back a few years, abandoning their main potential advantage just to compete with everyone else for limited capacity at TSMC wouldn't have been the best decision, IMHO.

> flooded with "strategic" government aid, and maybe the rest of intel can catch up (or not)

How could that ever possibly help Intel's foundries catch up if Intel itself switched to TSMC? The "government" doesn't need leading-edge nodes, so they'd just end up in the same spot as GlobalFoundries.

To be fair, they did outsource their latest-gen low-power/laptop chips to TSMC, which is probably why those now seem very competitive with AMD/Qualcomm.


One of Intel's strengths in roughly the 90s and 00s was the tight coupling between its strong fab technology and its strong design team. This was also the model for Motorola, IBM, TI, and everybody else.

But by now both sides are suffering, and management has to try to fix them both simultaneously.

As with other industries, semiconductors evolved as an integrated industry. But now both parts are fiendishly complex, and rather than integration being a strength, it’s more like a conglomerate.

You can't just move your design from one process to another, so a spun-out fab would start with mostly Intel jobs, just as AMD's, IBM's, etc. did when they sold off their fabs. But the standalone fab would possibly find it easier to hire customer-oriented people and change its culture, and the two parts could concentrate better on their own needs. It would give the shareholders a better chance too.

It’s not a great solution, but the current situation is dire.


Intel's decline was obvious when I worked for them in 2010: their compensation package wasn't competitive with FANG, and they consistently missed out on or lost their better engineers.


The article quotes a mention of Intel getting into the foundry business - this seems like the most obvious good move to make, even if a little late, right?

Being able to operate a general-purpose fab seems far more important now than the design of the chips, since design has been somewhat commoditized by ARM. Any big downside to this move?


The downside is that the foundry process has to work properly, and by all reports it hasn't started working properly yet.


The only report I saw on it was that Intel said they have tested working 18A chips.


They have to be able to mass-produce on that node for it to save them. Samsung has been claiming technological leadership for the past few years with its 3nm node, but nothing has come to market using it. IBM demoed a 2nm test chip in 2021 but has no ability to mass-produce.


There was lots of talk at the last earnings call about a "clean sheet redesign" to achieve a "world class foundry" and a "world class semiconductor" business. Sounds to me like getting the organisation ready to split in two.

That probably kills the vertical integration that made Intel the world leader in the past, though.


They just slept for a decade... I had an overclocked i7 3970k (2012) and there was no need to upgrade for 8 years. The performance increase was always marginal. I finally pulled the trigger when AVX happened.


And then you learn AVX is only available on some cores, lowers the operating frequency of the whole CPU when you use it, and generally seems like an unstable prototype.


I was in the same boat with an overclocked 3570K. I only just replaced it last year and grabbed a 5800X3D. I hope this thing lasts me a decade too.


I think referring to bankruptcy is excessive. Intel bulls might be going bankrupt, but the share price doesn't affect most companies unless their only reasonable way to raise money is through the stock market. The Intel bond yields are not too high yet.

The question then becomes, is this a low (trading around book value) with recovery in the future, or is it the beginning of a long, dark time, leading to an inability to raise money down the line?


When I left Intel it was already going downhill, and that was almost a decade ago. Good employees see through it and make a move, and we all know how the negative spiral works.

FWIW, my previous comment from more than 3 years back: https://news.ycombinator.com/item?id=27597749. It seems they have now hit real rock bottom with no way up!


That is very questionable reporting right from the start:

- First graph: yikes, that is going almost all the way to the bottom. Oh, the y-axis is misleading - it doesn't start at zero.

- "Revenue for the quarter was a third lower than in the corresponding quarter three years ago." Wait, wasn't three years ago COVID times, which led to a giant boost in demand for laptops and PCs?

https://www.windowscentral.com/canalys-pc-market-2021-report

Maybe the article gets better after that, but that seems not okay.


I don't know, a lot of these explanations seem more like reading astrological maps in hindsight. Example:

> Intel passed on making the System on Chip for Apple’s new iPhone, Instead Apple used an Arm designed processor in an SoC built by Samsung. Too late Intel realised the mistake.

Mistake? AMD didn't even try to build SoCs to compete with ARM / Qualcomm / etc. They seem to be doing ok.

Now, you could say "Well, throwing 10^10 USD at trying to build an SoC was the mistake" - maybe, but then it's the opposite mistake from the one the article claims.


No, it was both a huge mistake and a symptom of underlying problems. Making smartphone SoCs is a huge profitable market that would have given Intel massive volumes to support R&D etc and slowed TSMC. The fact that they couldn't manage it with every apparent advantage is very telling.


Feels like you're rewriting history, to me. Smartphone SoCs are not hugely profitable, and a big story of the past 10 years was Chinese manufacturers entirely ignoring ARM IP to increase profits. The only people that make money off ARM hardware are the export-officiated manufacturers that don't pay per-unit licenses, which is exclusively Apple. Nobody else has a more permissive ARM architecture license, not even Nvidia.

Intel at the time was fabless, would have needed to license or design a RISC architecture, and would have ended up just as squeezed as any other part of Apple's supply chain. And if Intel made a serious profit, Apple would have replaced them anyways. It's an all-risk-no-reward scenario.


> Intel at the time was fabless

????????!



That's a lot of fabs for a fabless semi company.


Yeah, what?!


Best take I’ve read here or anywhere. Thanks for the clarity.


> Making smartphone SoCs is a huge profitable market that would have given Intel massive volumes to support R&D etc and slowed TSMC.

Not quite, and that's the core of the innovator's dilemma. The reason Intel passed on making SoCs is the same reason Nvidia decided to pass on getting into Xboxes and PlayStations: those markets have much lower margins than the server-side business.

AMD and TSMC can operate in those spaces because they are much more efficient companies; Intel is not.

Intel got addicted to fat server CPU margins and grew inefficient...


Intel's smartphone mistake was trying to push x86 on a market that didn't particularly want it.

Android might have been nominally architecture-neutral -- I know some bottom-end products were MIPS-based for a while -- but by the time the Atom phones came out, the ecosystem was clearly ARM-centric. It wasn't a market where x86 had natural advantages -- you're not running Microsoft Office on your phone.

So you're selling a product that comes with built-in FUD about expected teething troubles and compatibility snafus. You'd have to knock it way out of the park in performance to break through that. Conroe did that after the P4 burnt Intel's image. The M1 did it. Atom never came close.


I've been buying PCs for myself for three decades and never chose Intel for my builds (apart from the Pentium 133 in my first PC). Every time I considered it, but the alternative was just a significantly cheaper option at the performance levels that interested me.

The only Intels I got were in laptops, where I cared only about the price of a whole machine that interested me for reasons other than the CPU.

Since the Internet became a thing, I've chosen CPUs by downloading prices and benchmark results, drawing a price/performance chart, and choosing a point on the good edge of the point cloud that's not unreasonably expensive. It has never happened to be Intel.


Intel's main problem is its people. Sorry to say.

The management culture there is insanely siloed and toxic, full of scar tissue from previous periods of political turmoil and a few relentless sociopaths who absolutely will not stop climbing just because it's obviously damaging to the company.

The engineering culture is full of secrets, moats, and the general sense that while making progress might be your job, stopping it is everyone's job. And there are always more of them than there are of you.

Perhaps most fatally, responsibility for decisionmaking is ultimately delegated to the process of decisionmaking rather than the people involved. The resulting proliferation of checkboxes and box checkers has elevated dysfunctions like "the perfect being the enemy of the good" and "none of us is as dumb as all of us" into de facto mottos for large parts of the company.

All of this happens against the backdrop of eroding mindshare and market share, a total lack of real product clarity, and a pervasive culture of bullshit that puffs politically convenient non-accomplishments into events worthy of companywide applause while spinning, downplaying, or even hiding bad news of calamitous import.

Intel has vast reserves of talent but without deep (IMO, impossible) cultural changes it will inevitably fail under its own weight.


I had to create an account to say that I agree with this comment.

I am a former Intel engineer who worked there for 8 years. I left Intel 2 years after the ACT mass layoff in 2016.

The culture there is extremely toxic from top to bottom. The VPs and top-level executives were out to get stock and golden parachutes. The middle managers played politics; in addition, they pulled the wool over leadership's eyes on many significant happenings. The engineers and technicians kept secrets as a way of keeping themselves relevant and valuable in case of layoffs.

The sole money maker for Intel is getting quality silicon out the door, and that should be the number one objective for everyone. However, due to stack ranking - that is, collecting as many bullet points as possible in order to appear relevant when annual review or mass layoff events come around - many people come up with bullshit projects that, while extremely impressive on the surface, contribute nothing to pushing quality silicon out. The middle managers play favorites and push or highlight those bullshit projects.

When they flew expensive executives around for the BUMs (Business Update Meetings) or to pitch some bullshit ideas, not many had the courage to ask the relevant questions. The few who did were met with canned, non-informative responses that only showed these expensive suits had no awareness of what was going on externally. At the same time, we were too busy patting ourselves on the back, claiming we were number one.

We were so busy playing political games and competing internally that we forgot about the real external competition, and by the time we noticed it was too late: they zoomed right past us.

I doubt the CEOs (past and present) or VPs care or even have an inkling of what has been going on in the culture.


It sounds a lot like the chaos and inefficiencies of the caste system. Hmm.


Tbh, it sounds like every group of humans...


Is Intel poaching senior TSMC engineers in Arizona ( https://tinyurl.com/duvfkrm2 ) related? Because I would be really pissed off if we'd agreed to build a factory on foreign land out of a desire to build good long-term relationships, and then that country's strategically important corporation decided to take away our key engineers.

What's normalized in the context of a free, opportunistic market can be seen as treacherous in the context of politics and in the behavior of countries' strategically important enterprises.


> Intel passed on making the System on Chip for Apple’s new iPhone

Is "passed on" the right term? I seem to remember reading that Intel quoted an extortionate price for making the chips, hence Apple passed on the offer.


I know what to do... a re-org!


I'll call it the Volkswagen Trap, or more generally the ICE one-way street.


> Intel couldn't break into smartphone SoCs with clear process leadership and the financial strength to invest heavily. Why should it be able to break into other competitive markets today when it no longer has those advantages?

Maybe because you're looking at it backwards? There must be some conspiracy among SoftBank investors to portray everyone who doesn't pay for ARM licenses as a Muppet. Intel has no competitive advantage in manufacturing licensed RISC CPUs when its desktop and server markets are hot, and even when x86 dies, that will still be less attractive than owning fabs. With how bad the Cortex designs continue to be and how much leeway ARM gives licensees like Apple, even businesses like Qualcomm can't be bothered to take the ISA seriously. It feels disingenuous to say ARM is the panacea when designing ARM cores would have left Intel in an even worse position.

Look at Samsung: a company people on this site would describe as misery incarnate, yet chances are they're typing their comment on a device using Samsung-fabbed silicon. They're not even that competitive; they just offer a cheap alternative to TSMC that gives OEMs an economically minded option for less dense components.


> It feels disingenuous to say ARM is the panacea when designing ARM cores would have left Intel in an even worse position.

Sorry, where does the article say Intel should design Arm cores in 2024?


I called this a while ago with the copium-filled people, probably Intel employees, going on about their 1.4 "process" and how they'd surpass TSMC. Like, buddy, come on: you got stuck on 14nm for 10 years. Today's manufacturing issues show that Intel is incapable of competing with the world's leading manufacturing companies; they are just too error-prone. Their GPU business is failing hard because of significant delays, I don't see a way out, and they are stuck on x86 while ARM is gaining ground.

They have become more of a political tool for receiving subsidies, like Boeing, than a real competitor to TSMC.


Intel is simply a product of winner-take-all capitalism when economies of scale are allowed to reign completely. TSMC took all the business at low margins and became so massive that R&D costs were manageable.


Reading the comments here gives me the impression that Intel is indirectly responsible for the Taiwan conflict. If they had done their job better, they would be at TSMC's level now - and inside the USA at that.


Without TSMC, the West would have no interest in defending Taiwan, so it would already have been invaded by West Taiwan.



