Most of Nvidia's "strength" is the heavy lifting done by the folks over at TSMC.
Apple and AMD both have access to the same TSMC technology, but neither of them is willing to invest any more into AI than they currently are. Apple ships an inference solution that combines GPU and NPU and is mostly okay for inference at home, and AMD ships a high-performance enterprise compute solution that scales all the way up to the needs of supercomputers without needing a purely AI-only product line.
If this were the next big money-making frontier, then AMD would be chasing after it too. Instead, they sell a pretty damned good enterprise compute product that actually has a life outside of the AI bubble, and will keep selling it to happy customers long after the AI bubble pops.
The question you need to ask is how bad Nvidia's collapse will be. Like, I'm pretty sure some form of the company will still exist, but there is nothing stopping their valuation going from $4T back down to something more reasonable like $250m; they don't do anything better than anyone else, it's all 100% Jensen wooing people with his leather jacket collection.
Nvidia has now lost the contracts for two generations of consoles.
AMD was XBone, XSX, PS4, and PS5.
Nvidia was only able to get in on the Switch, a console that "sold well" because it stretched across two generations, and on a units/yr basis sold less than any of the four. Nvidia sold the Tegra X1 to Nintendo at break-even just to get the console down to $300; otherwise it was a no-go for consumers.
The Switch 2? Basically DOA, I'm not sure how either Nvidia or Nintendo is going to downplay this.
CUDA does nothing that other compute APIs can't do, and the majority of enterprise compute software either doesn't use it or also works on ROCm HIP with minimal performance loss.
A lot of research projects (such as all the early LLM research, given the topic) are written in Python and use libraries to shim all of that as well; PyTorch and ONNX both run natively on AMD and are covered under AMD's commercial support.
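To make that concrete, here's a minimal PyTorch sketch (layer sizes are placeholders): the ROCm builds of PyTorch answer to the same torch.cuda API, so the usual device-selection idiom runs unchanged on AMD hardware.

    # Minimal sketch: the same device-selection code runs on NVIDIA or AMD,
    # because PyTorch's ROCm build answers torch.cuda.is_available() too.
    import torch

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    model = torch.nn.Linear(4096, 4096).to(device)  # placeholder model
    x = torch.randn(8, 4096, device=device)
    y = model(x)  # runs on NVIDIA, AMD, or CPU without code changes
    print(y.shape, device)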
And then we come to the case of llama.cpp, which supports more APIs than any other inference engine... not only does it run on Nvidia/CUDA, it runs on AMD/HIP, Vulkan on at least 4 different vendors, SYCL on at least Intel ARC, BLAS/BLIS, Apple/Metal, Snapdragon's quasi-NPU, and Moore Threads (that new Chinese startup for domestic GPUs).
There is no reason to write greenfield code with CUDA today, and most people aren't.
Nvidia's products have been the biggest cash grab of all time. I don't think it's a matter of other companies believing it is just a bubble and therefore not attempting to compete, but that Nvidia hasn't left them room to compete effectively. That's what I'm trying to get to the bottom of.
Chris Sawyer has not been involved in TTD for quite some time, and any remaining rights he had were bought out officially in 2024.
Chris Sawyer was last involved in the IP when it was rugpulled out from under him in the early 2000s and sent to Frontier Developments (the Planet Zoo and Planet Coaster guys) to bury the IP in an unmarked grave with RCT3. Frontier is also the same outfit that screwed over Haemimont Games (the Tropico and Surviving Mars guys), leading to that studio being bought out and rebooted by Paradox to continue Surviving Mars development.
The IP ownership has been legally retained by Atari SA, aka Infogrames, aka GT Interactive, aka GoodTimes Entertainment, which has a very long history of screwing game developers, stealing their IP out from under them, and misrepresenting IP ownership and licensing.
Now, it is also worth mentioning that Chris Sawyer is anti-open source, so he probably personally approves of trying to steal money from OpenTTD players, even if he isn't personally getting a cut of it.
> Now, it is also worth mentioning that Chris Sawyer is anti-open source, so he probably personally approves of trying to steal money from OpenTTD players, even if he isn't personally getting a cut of it.
It's pretty rude to put something like that on him if he hasn't actually said that.
"The project has no blessing or support from Chris Sawyer and our view, it is both unethical and unlawful, involving infringements that may in some territories be criminal as well as a violation of Chris Sawyer's rights and those of his licensees - all of which remain reserved.
RollerCoaster Tycoon Classic, distributed by Atari, contains RCT and RCT2 rebuilt for modern operating systems under Chris's own direction.
Sincerely
Guy Herbert, Director
Marjacq, The Space, 235 High Holborn, LONDON WC1V 7DN"
Not only should you get rid of them, but also they are a fire hazard.
Also, do not accidentally plug surge protectors into each other: metal oxide varistors can start fires _without_ meaningful surge conditions when you do so.
I prefer to buy products without MOVs entirely due to the risk, with the exception of one, Tripp Lite Isobars; but I prefer to use series mode protectors such as Brickwall or SurgeX.
> Not only should you get rid of them, but also they are a fire hazard.
Are they not a fire hazard even when new? MOVs do tend to degrade with use (especially after they've gone conductive to snuff one or more surges). But AFAICT we can't really know, without potentially-destructive testing, whether a given MOV is in good shape -- whether installed last week, last year, or 30 years ago.
> Also, do not accidentally plug surge protectors into each other: metal oxide varistors can start fires _without_ meaningful surge conditions when you do so.
What is the mechanism that increases risk for MOV-sourced fires in this arrangement?
I've also noticed that many of the power supplies I've taken apart (for very pedestrian consumer goods) have internal MOVs on their line input. Whatever the mechanism is that increases risk, isn't using one external surge protector already doing that in these instances?
> I prefer to buy products without MOVs entirely due to the risk, with the exception of one, Tripp Lite Isobars; but I prefer to use series mode protectors such as Brickwall or SurgeX.
I prefer to avoid MOVs, too. Broadly-speaking, diodes seem like a better way to do it. (Transtector is another reputable brand that uses diodes.)
---
That all said, I've noticed over the years that problems with dead (presumed-to-be-hit-by-a-power-surge) electronics tend to follow particular patterns. And the reason for this seems related to grounding more than anything else.
So when I find someone (a friend, a client, maybe someone online that I'm trying to help) complaining about repeated damage, I often ask about grounding. Almost always, it turns out that they've got multiple grounding points for the electronics: the electric service has one ground rod, and the telephone/cable feed/satellite/whatever is connected to some other ground.
This might be a dedicated rod, maybe a metal pipe; whatever it is, it is distinct from the main service ground. It happens all the time. (It is worth noting that the NEC prohibits this kind of configuration unless extraordinary effort is put forth. See 800.100(d), for example.)
The way that MOVs -- and avalanche diodes alike -- behave combines with the fact that the earth is an imperfect conductor, such that having multiple ground points promotes dynamic ground loops that can drive quite a large potential -through- the electronics that we seek to protect.
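A back-of-the-envelope sketch of that mechanism (the current and resistance figures below are illustrative assumptions, not measurements):

    # Illustrative only: surge current flowing through imperfect earth
    # develops a voltage between two separate ground rods (V = I * R),
    # and that voltage appears across whatever bridges the two grounds.
    surge_current_a = 1_000    # assumed current shunted to earth, in amps
    earth_resistance_ohm = 10  # assumed resistance between the two rods

    potential_v = surge_current_a * earth_resistance_ohm
    print(f"{potential_v} V across the gear bridging both grounds")  # 10000 V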
The problem appears suddenly, and repetitiously. Everything is fine, and then ZANG: The cable modem gets smoked along with the router it is connected to. So the modem goes back to Spectrum or wherever to get swapped, and the router gets replaced again, until the next time: ZANG.
TV connected to satellite receiver, with coax incorrectly grounded? ZANG. Over and over again.
I'd see it all the time when I was a kid back in the BBS days: The phone line was grounded improperly, and computer was the only thing that connected to both electricity and the telephone line. Some folks would go through several modems over the course of a summer, which was very expensive -- while most people had no problems at all. Next-door neighbors would have completely different failure rates.
Structures with correct grounding tend to do very well at avoiding these issues, and I've fixed these conditions in subsequent years more times than I can count.
(A coworker installed a phone system at a business once, wherein he made extensive use of Ditek surge suppressors -- on the incoming POTS lines, and on the power inputs. It blew up one day. So he called Ditek to try to get at least the cost of the phone system hardware covered. They asked him to draw up a map of how the building was grounded and send that over, so that's exactly what he did. When they saw his map, they very quickly identified a ground loop and denied the claim.)
"What is the mechanism that increases risk for MOV-sourced fires in this arrangement?"
I wondered the same thing, and failed to find a satisfying explanation.
I can find plenty of reports of MOV fires, especially in situations where there's a persistent over-voltage, e.g. a 120 V site actually having closer to 240 V due to a floating neutral. But I don't see how chained MOVs make that worse in general. This blog post has some nice photos:
No clue about the actual reliability of this[1] article, but the mechanism mentioned (new conduction pathways due to changes in crystalline structure from uneven heating) sounds plausible.
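On the floating-neutral scenario mentioned above, a toy voltage-divider sketch (the load resistances are made up) shows how one leg of a split-phase service can drift toward 240 V:

    # With the neutral open, the two 120 V legs become a series circuit
    # across the full 240 V, and the voltage splits in proportion to each
    # leg's load resistance -- the lightly loaded leg sees most of it.
    leg_a_ohms = 10.0   # heavily loaded leg (assumed)
    leg_b_ohms = 100.0  # lightly loaded leg (assumed)
    total_v = 240.0

    v_a = total_v * leg_a_ohms / (leg_a_ohms + leg_b_ohms)
    v_b = total_v * leg_b_ohms / (leg_a_ohms + leg_b_ohms)
    print(f"leg A sees {v_a:.0f} V, leg B sees {v_b:.0f} V")  # ~22 V / ~218 V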
Mastodon already won, by being used by people. Bluesky also won, by also being used by people. Not sure if this is a "winner takes all" scenario? As long as you can host it yourself, I don't really mind where people are; both seem to work and have "won" at what they set out to do.
It is a zero-sum game in some sense, because you go where your friends or "influencers" are.
Mastodon ended up losing its user base to Bluesky during the early Twitter exodus because many influencers and journalists wanted to have an "elite" status and a special relationship with the platform, so they preferred a platform owned by Dorsey to some hippie open-source thing. Bluesky, in turn, ended up losing back to Twitter/X when it turned out to be a place where you mostly talk about how awful Twitter/X is.
I want to say that we don't need social networks where we constantly interact with hundreds of thousands of strangers, but I'm writing this on HN, so...
Just an anecdote - I never used Twitter/X, and never used BlueSky. Recently (about a year ago), joined Mastodon. I enjoy it, find a lot of value there, and have interesting conversations (recently about Mint Debian Linux & sound-systems, and also maker-space CNC design tools). There seems to be active investment in good features & quality on the platform, including making it easier to host your own organization server.
I believe, due to the format of engagement, it's easy to spend a lot of time there scrolling - so consider:
(1) only using the platform on your desktop computer, instead of phone,
(2) limiting time - 25 minutes a day is enough!
(3) Mute spammers, complainers, and people with negative attitudes - you can't catch them all, but you can intentionally shape your experience over time.
(4) Subscribe to tags of your passions (example: #piano, #makerspace, #drawing, #cats, #jujitsu, #cncrouter, #3dprinting), and try to lean into that instead of getting caught up in endless political reactions - which never ends. You can be intentional, and subscribe to people who have a positive vision for the version of the future you prefer.
> Just an anecdote - I never used Twitter/X, and never used BlueSky. Recently (about a year ago), joined Mastodon. I enjoy it, find a lot of value there, and have interesting conversations
Same, more or less. Twitter started as a place to be interrupted by attention-seekers, and Bluesky was just "that but with less Elon Musk and more implementation throat-clearing." I never saw the point. Mastodon feels more like old-school Usenet, where you could find communities with shared interests, block the attention-seekers, and shrug at the usual human drama.
Curious, how many people do you need on a social network before you can find someone to talk to or before it is engaging enough for you?
I certainly don't need a billion users. I think I'd be happy with 100,000 users -- what is your number?
I think this is related to the question of how big of a city do you need to live in before you can find something to do and are not bored living there. I'm fine with a city of, say, 50,000-100,000. That is more than sufficient for me to find an appropriate number of likeminded friends and neighbors as well as interesting pursuits.
> Curious, how many people do you need on a social network before you can find someone to talk to or before it is engaging enough for you?
I don't think that's a meaningful parameter to think about? I'd say that on any social network, I have meaningful, ongoing relationships with maybe 20 people. I suspect that's the norm. But that doesn't mean you can join a social network with 20 users and get that. I mean, if it's a mailing list for friends and family, sure. But not if it's 20 randomly-selected strangers from around the world.
So the critical mass to make the "random stranger" type of a social network work is much, much higher than the number of daily interactions you need to keep coming back.
Yes, all you use is 20, but as the pool grows, the odds of finding your 20 go up. I'm saying that among 100,000 roughly randomly selected people, I have basically a 100% chance of finding my 20. 50,000 is probably enough.
By the way, if your number is not the same as mine, I am not intimating that this makes you deficient in some way. Everyone has their own number.
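A rough sketch of that intuition (the 0.1% compatibility rate is a number I'm making up for illustration):

    # If even 0.1% of strangers are "your people", a pool of 100,000 makes
    # finding at least 20 of them a near-certainty under a binomial model.
    from math import comb

    p, n, need = 0.001, 100_000, 20
    p_too_few = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(need))
    print(f"P(at least {need} matches) = {1 - p_too_few:.10f}")  # ~1.0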
Yeah and for me it should be mainly people like me. That's really what we do, we now live in a world that's too big for our minds to encompass, so we build little villages with like-minded people.
Some people call that bubbles, I call it sanity. I try not to spend my time giving out about the other side though. It just gives me negative energy.
As solid as the goals of Bluesky were from a technology perspective, the political driver of the user acquisition has the platform in the same category as Truth Social: political echo chambers. Two sides of the same coin. It's unfortunate because I don't think the branding is going away.
Mastodon has been great for tech communities in my experience though.
> It is a zero-sum game in some sense, because you go where your friends or "influencers" are.
Bluesky and Mastodon users can interact with each other (provided both parties opt in). I'm on Mastodon, but I see my friend's messages (he's on BlueSky) and vice versa. My replies show up on BlueSky and vice versa.
I would love to see that work, but every time I've tried to set that up, it seems to fail. The bridges seem unreliable and non-responsive when trying to set them up or diagnose issues with them.
Sometimes I think it was more the toxic people who wrote about politics and identity on Mastodon who moved on to Bluesky when Trump got elected.
I don't see why it is "zero" sum; nothing stops you from posting to more than one social network. I mean, I have relatives on Facebook and no prospect of getting them to change, so I cut-n-paste what I post on Mastodon to Facebook, Bluesky, LinkedIn, Tumblr, and all sorts of places.
Bluesky won over Mastodon because the fedi model is fundamentally flawed in its UX. For a flood of people wanting "Twitter without Nazis", Bluesky was a good match. I don't think Dorsey had anything to do with it, because the influx happened after he'd already severed all ties.
Some people are getting introduced to similar and in some ways worse UX on Bluesky now that there are some actual efforts to make it slightly less centralized.
Since developing on ATProto, one thing I have hoped for is less of this "winner take all" world. I think the protocol can be used for much more than social media; it could do Dropbox-style storage if permissions and private data are designed well. This comment by the main protocol dev working on that does not inspire confidence on my part.
Threads being the biggest Mastodon instance, federating with mastodon.social (Meta signed contracts with instance maintainers to do so) and with the other three largest instances (Pawoo, baraag.net, and mstdn.jp), which together make up more than 70% of the total users?
That doesn't sound good.
The CEO sold all of us out and was the only one that made real money on Mastodon.
I won't doubt your statistics. In practice, my experience is that it really is distributed.
I just went to my feed (only people I follow), and although mastodon.social showed up a few times, the majority of users I interact with are on distinct servers. So out of 20 people, I see 17 different servers.
My feed will not be impacted much if mastodon.social dies.
Let me describe this in the simplest terms possible: you have speculators speculating about AI products. The speculators are not very smart when it comes to technology, and think RAM is RAM. There are at least three kinds of RAM that matter here: DDR for system RAM, GDDR for GPUs, and HBM for high-density enterprise products. They are not interchangeable; there is no one-die-fits-all solution.
So, these speculators are like "oh no, more GPUs requires more RAM!", and then just start speculating on all RAM. Which of these RAMs is the one they need to worry about? Exclusively HBM, which is a minority of production; DDR and GDDR dominate.
If you're into inference and have older machines, you're buying Hxxx or Bxxx cards that use HBM, fit into dual-slot x16 configurations, and you're jamming (optimally) 8 of them in. If you're into newer hardware, from somewhere in the middle of the inference boom, you're using SXM modules. In either situation, the host machine has DDR, but if you're OpenAI, Anthropic, Microsoft, or Google, you're not building (more) inference machines like this.
The first two are buying Nvidia's all-in-one SBC solution: unified HBM, an onboard ARM CPU to babysit the dual GPUs, its own dual QSFP network controller that can do RDMA, etc. No DDR or GDDR involved. Any machines built before this platform are being phased out entirely.
Microsoft is doing the same, but with AMD's products, the MI series that co-locates Epyc-grade Zen 4/5 CCDs with CDNA compute chiplets, running the entire thing off HBM, thus also unified and no DDR/GDDR needed. They, too, are phasing out machines older than this.
Google has a mix: they offer Nvidia all-in-one SBCs as part of GCP for legacy inference tasks (so your stack that can't run on AMD yet can still run), they offer the same MI products that Microsoft offers via Azure's inference product, and they have their own TPUs that some of Gemini runs on; the TPUs run on HBM, AFAICT. No DDR or GDDR here.
So, what does AMD or Intel do here? Let's say they waste fab time making their own dies on the wrong process (TSMC and Intel Foundry do not have processes optimized for RAM)... they would be producing DDR and GDDR for a market that already has almost all of its demand met. Intel lacks the die-stacking technology required to build HBM, and TSMC, I think, can't do it for that many layers (HBM has 8 to 16 layers in current-gen stuff, IIRC).
Micron, for example, is already bringing two large factories online here in the US to meet projected growth in demand for the next 20+ years. When these factories finally start producing, it will not change the minds of speculators: they still seem to think AI datacenters need RAM of any kind, and refuse to understand even the most basic nuance. Also, when they come online, HBM will be a minority product; the AI inference boom is still just a bump in the road for them.
Nvidia kinda screwed their consumer partners, btw: they no longer bundle the GDDR required for the card with the purchase of the die. There is a slight short-term bump in GDDR spot prices as partners build up war chests to push series-60 GPUs into production, and once that is done, spot prices will return to normal (outside of the wild speculative manipulation).
One last thing: what about LPDDR, used by AMD Strix Halo and Apple stuff? Speculation seems to have not actually affected it. I consider it a sub-category of DDR (and some dies seem to work as either DDR or LPDDR as of DDR5, due to the merger of the specs by JEDEC), but since it isn't something you find in datacenters, it seems to have avoided speculation.
The Ryzen Max CPUs mentioned in the linked article? They use LPDDR. Doubling down on the Ryzen Max product line might be a brilliant move.
> The speculators are not very smart when it comes to technology, and think RAM is RAM. There are at least three kinds of RAM that matter here: DDR for system RAM, GDDR for GPUs, and HBM for high-density enterprise products. They are not interchangeable; there is no one-die-fits-all solution.
The commenter is also not very smart, and does not realize that companies making the RAM can trade capacity of one for another, and that any re-tooling at current prices is still profitable.
The commenter also does not realize that this is also true for lines currently making SSDs.
They can trade capacity, but they generally don't. The huge storage-only fabs owned by Samsung and Micron do runs that go for 9 to 12 months.
Flash chips haven't been speculated on nearly as hard, and are suffering from the same sort of weird lack of nuance. Samsung, for example, isn't reassigning capacity to meet some sort of phantom datacenter demand that isn't already there, generically, across all datacenters, AI or not.
A lot of the SSD price skyrocketing is largely "SSDs have RAM on them for cache", not "SSDs have flash chips, and they're both made at the same fabs"... which oddly affects low-end SSDs that don't have external cache.
To make it worse for the speculators who do understand this (because it isn't some universal homogeneous group), the flash chips that go into enterprise SSDs aren't the same ones that go into consumer SSDs.
The Big Three still aren't doing any major re-tasking of capacity, as actual global demand isn't outstripping supply any more than normal. There is no short-term problem to fix; speculators are just gonna have to stop hoarding toilet paper like it's the start of Covid.
Edit: Oh, and if you want to ask how AMD/TSMC or Intel solve this? They can't, for the same reason making their own in-house HBM isn't happening.
I'm glad Kioxia (formerly Toshiba) has been able to do that. However, I also know they've been having problems meeting demand for quite some time, and haven't been able to scale up nearly as fast as the Big Three have. There was an incident in 2019 and another in 2022 that killed entire runs of chips and screwed them during the Covid datacenter rush.
Micron killed Crucial because Crucial was a weird offering that competed with their own partners. This was always a weird problem, and it just didn't make financial sense to continue with it. One of the analyses I read said Crucial was less than 12% of sales.
Like, don't get me wrong, I've liked many Crucial products over the years, and even recommended some of them, but it was always weird they were trying to out-compete companies like Adata and other major ODMs.
The counterexample of this is Nvidia absolutely trying to kill their partners, and going to first party assembly and sales of products. Nvidia isn't even going to PNY anymore for ODM needs, but going directly to Foxconn.
Micron execs claiming it's because of AI is a bit weird and revisionist, because they've been working on exiting the Crucial brand since long before they publicly announced it. The public didn't learn of any such plans until right before the Ballistix brand sunsetting was announced in 2021, but the process started years before that. Like, I know they're just playing to their shareholders, but it's still a bit weird.
As far as I know, the current lineup is: PNY still makes the workstation cards, possibly also the x16 server cards, but Foxconn is doing the Blackwell SBCs and SXM modules, and those SBCs are a pretty big chunk of Nvidia's income right now. I also believe they have moved to Foxconn for the Founders Edition consumer cards.
Also, with the FEs, partners are disallowed from making their own FEs, even if they design their own PCB from scratch, not based on any existing Nvidia design. It doesn't matter who makes the FE; it immediately puts partners at a great disadvantage if they can't make one too.
> So, these speculators are like "oh no, more GPUs requires more RAM!", and then just start speculating on all RAM.
Are you claiming that these speculators are buying DDR5 RAM and warehousing it somewhere? Or what exactly is the mechanism you are proposing here?
To me it seems much simpler: AI companies want HBM, and HBM and DDR5 share the same wafer production process and facilities, but the HBM process is much more fragile and takes three times the wafer production.
There isn't enough DDR5 RAM being produced, so prices go up.
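A toy model of that tradeoff (the 3x figure is from the comment above; the wafer count and yield units are arbitrary assumptions):

    # Toy model: every wafer diverted to HBM costs roughly three wafers'
    # worth of DDR5 bits, per the "three times the wafer production" figure.
    total_wafers = 1_000       # assumed monthly wafer starts (arbitrary)
    ddr5_bits_per_wafer = 3.0  # normalized: one wafer -> 3 units of DDR5 bits
    hbm_bits_per_wafer = 1.0   # the same wafer -> 1 unit of HBM bits

    for hbm_share in (0.0, 0.2, 0.4):
        ddr5_out = (1 - hbm_share) * total_wafers * ddr5_bits_per_wafer
        hbm_out = hbm_share * total_wafers * hbm_bits_per_wafer
        print(f"HBM share {hbm_share:.0%}: DDR5 {ddr5_out:.0f}, HBM {hbm_out:.0f}")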
There is no such thing as "consumer grade RAM". Servers still take DIMMs; ECC DIMMs just have more chips on them (previously 9 instead of 8, and now 10 instead of 8 as of DDR5; you'll see some DDR5 DIMMs with 5 instead of 4 because they're double-die packages).
Micron, Samsung, and Hynix just basically sell you chips that comply with the JEDEC spec, and the DIMM manufacturers further bin them according to purpose. The highest end chips (that are stable at high clocks and acceptable voltages) end up in enthusiast performance products, the ones that don't work well at all but still meet JEDEC spec are sold to Dell/HP/Lenovo/etc for Grandma's Facebook machine, and the ones that are exceptionally stable at thermal design limits are plunked onto ECC DIMMs and sold to servers.
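The chip counts fall straight out of the bus math; a quick sketch using the standard JEDEC widths:

    # Why 9-then-10 chips: a DDR4 ECC DIMM is one 64-bit channel plus 8 ECC
    # bits (72 bits); a DDR5 ECC DIMM is two 32-bit subchannels, each with
    # its own 8 ECC bits (2 x 40 = 80 bits). Divide by x8 chips.
    chip_width = 8  # x8 DRAM chips, the common case

    ddr4_bits = 64 + 8        # one channel + ECC byte
    ddr5_bits = 2 * (32 + 8)  # two subchannels, each + ECC byte
    print("DDR4 ECC DIMM:", ddr4_bits // chip_width, "chips")  # 9
    print("DDR5 ECC DIMM:", ddr5_bits // chip_width, "chips")  # 10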
Also, as others have mentioned, it's just a fab, and it can make any of the dies they're able to make. Whatever needs to be made to meet demand, they make; they just can't turn on a dime and react to quarterly concerns, and are locked into cycles that may range from 6 to 18 months.
Side note that is also worth mentioning: sometimes you can order special bins of parts with features that wouldn't normally be available, if you're willing to order enough. A recent example: Nvidia buying overclocked GDDR6 chips from Micron with additional features enabled; Micron was more than happy to become Nvidia's exclusive supplier for the custom GDDR chip if Nvidia was willing to buy out the entire run. Stuff like this happens every so often, but isn't the norm.
You just need an additional chip to move from "consumer grade" (i.e. no parity) to "server grade" (i.e. with parity). ECC support is actually in the memory controller, which has been in the CPU for the last 15 years. No magic.
The announcement means that they're closing Crucial - just like it says in the title and the first paragraph. The rest of that press release is outlining the mechanics of how that works + some fluff. Micron is going to continue producing the exact same memory chips in the exact same fabs. They're just not soldering it to a board, slapping the Crucial logo on it, and selling it directly to consumers. There's nothing stopping downstream vendors from buying Micron chips, soldering them to a board, and selling them to consumers as Micron was doing previously.
There's nothing in that press release that implies that the memory was somehow different (or "consumer-grade"). The _only_ thing they're saying is that they're ending their B2C business and focusing on B2B.
Didn't you just describe the literal difference between consumer-grade RAM soldered to a consumer-format board vs. a memory chip sold to a company to be soldered onto that company's own product?
Calling it "consumer-grade RAM" is inaccurate - RAM is RAM. When you solder it to a board, you now have a DIMM that is carrying RAM chips. It's a semantic difference, but it's important.
So where are all these speculators storing DDR5, flash, and even spinning hard disks? Asking for a friend.
As a small buyer of all of those things, supply at nearly any price has gotten very difficult to reliably predict week to week. When a lot of 100 64GB DDR5 sticks shows up at a vendor, it's at a take-it-or-leave-it price good for a couple of hours. If I don't pull the trigger, they have another buyer for it, and I might be waiting another month before anything becomes available again. We can no longer JIT even failure replacement on our edge nodes.
Then you have the NVMe and even SATA SSD shortages. There is still a bunch of very useful hardware out there; I would love to find a decent deal on 8TB SATA so I could repurpose it. It just doesn't make any sense right now at current pricing and availability. Good luck trying to even find a batch of 12 of these disks at a time.
This goes for both enterprise and even prosumer gear I was willing to take for some of these uses.
It's mixed. Some of it really is Covid toilet paper behavior.
Datacenter customers, for example, have repair parts on hand: boxes of hard drives/SSDs waiting to be put in, boxes of consumable parts, DIMMs waiting to replace ones that went faulty, entire machines already racked and waiting to take over for their fallen siblings, etc. Some of these customers added more to the spare-parts pile. The big clouds manage their elastic demand for any sort of consumable or repair part in volumes described in terms of cargo trucks per quarter, and they've already compensated.
Now, OTOH, you have the truly psychotic people who fill their basements with toilet paper, hoarding more than they could ever use in their entire lives. We've all seen that story where a guy was going to lose his house because he blew his mortgage money on toilet paper and was selling it at a loss just to stay afloat. People like this exist in every crisis, and there's gonna be a headline in the near future where someone loses their house because they had like a hundred trays of DIMMs in their basement.
A few people I know who scrape eBay for electronics like it's their job are just waiting for people to start fire-selling the DIMMs and SSDs that got hoarded and couldn't be scalped; they're expecting half of MSRP or better sometime later this year.
> what about LPDDR, used by AMD Strix Halo and Apple stuff? Speculation seems to have not actually affected it
Good luck actually finding them in stock with 128GB+ RAM. I got a Strix laptop a while ago; now the price in the EU is technically the same, but there's no stock. Maybe in a month or three.
There is also the claw hype. And large Qwen3.5 models can run very well on DDR5 CPUs or Mac minis...
Apple made a non-technological, purely artificial, and somewhat capricious decision not to sell a product that was worthy of the Apple brand.
They continue down this road because next quarter is more important than next year, and sycophants continue to buy unusable products just to show off the Apple logo to people who don't view it as a Veblen good.
Apple could choose to sell to everyone, but instead they sell to a minority. This is a choice. During recessions, it is not a good choice, and we have been in an almost persistent global recession since 2008.
As long as Apple can claim they have a $1T market cap, nobody wants to hear that the emperor is buck naked.