Back in the dotcom boom years, Internet adoption growth was real - with a lot of people and businesses paying real money to get online. And yet, the dotcom bust happened. The existence of paying customers adopting a technology does not assure the sustainability of the industry at any point in time.
I don't see dotcom technologies replacing droves of workers and creating panics in /r/screenwriters, /r/3dmodelers etc. The displacement is real and we are not going back. Dotcom tech created fake problems so their 'advanced technology solution' could be presented as the answer. With the AI wave, it's the opposite - business heads are actually thinking 'what other real problems can I solve with AI'. OpenAI and Claude don't even have to do any preaching. That's real value.
Two things could simultaneously be true: (a) trillions of dollars in value will be created/replaced/subsumed by AI solutions, and (b) even with near-universal adoption among knowledge workers, the value captured by the vendors of advanced AI solutions may never rise high enough to justify the valuations we're seeing now.
OpenAI's bet is that its frontier models will be so far ahead of the current status quo that, if they are the only ones providing those frontier models, they will be able to name their price (to end users and advertisers alike) while increasing their share of spend in the space.
But even today, last-generation and open-source models form a meaningful portion of adopted solutions. Not every application in 2028 will need the AGI-approaching GPT-10 - especially if those applications can leverage a relatively small amount of code, perhaps even written by that GPT-10, that can in turn orchestrate (say) DeepSeek V5 running on compute that can be obtained for pennies on the dollar.
OpenAI could become a victim of its own success, and cause a house of cards to take down the global economy in the process. I personally hope this doesn't happen, but there is real risk here.
You missed my point entirely. There was real value created during the dotcom boom too: Amazon, eBay, PayPal, Google, etc did not 'create fake problems'. Amidst the pearls of real value were dozens/hundreds of overvalued, dogshit companies which never delivered an ROI for their investors, and even the marginally useful companies went under when the market correction hit.
There will be the WebVans of the AI boom era, we just don't know their names yet. There will also be Ciscos and Suns that never reach their high-water mark again, or become obsolete in a few years and get sold for less than what people expect.
I was surprised the Register didn't provide the additional context of AFRINIC's revocation of Cloud Innovation's IPv4 blocks that were being sold anywhere but Africa[1][2][3].
FYI The Reg has covered the back story - https://www.theregister.com/2023/07/03/nrs_afrinic_review/ - but suffice to say anything written about Cloud Innovation quickly attracts the attention of its lawyers. But evidence is being collected and the full story will one day be told.
> And a shitload of countries in the hold of some warlord or other despot, or like Libya
Ironic for you to bring up Libya, as its collapse was instigated by the same former colonisers that couldn't stand Qaddafi - France and friends. The results were deplorable: unimaginable levels of slavery, and waves of immigration to Europe via Libya's now-porous borders that have in turn destabilised Europe. Combine that with other "interventions" by western countries in the Middle East - Afghanistan, Syria. The argument that imperialism results in stability is false on its face given recent history, starting from the moment the first missiles were fired to kick off the war on terror.
> It has been shown many times before, with this shitshow being the latest example, that most "third world" countries simply do not have a legal system in place that can cope with rich Western or new-rich Asian exploiteers
It's almost as if the colonisers designed it that way. Sowing division, encouraging corruption and infighting while the actual administration was done from abroad, and just enough of it to keep the resources flowing out without regard for any long-term societal improvement for the natives. The same old colonial resource-extraction companies are still exploiting the countries that are allegedly "independent", but whose leaders are thoroughly bribed and have bank accounts and "investments" in Luxembourg, France, Dubai, the UK and the US that the former colonisers are aware of. You want them to return to achieve which goals, exactly? Cock-block China? No one is naive enough to think recolonisation can have benevolent intentions or be executed benevolently.
If the west has an appetite for another round of sustained guerilla warfare and terror-tactics, then direct political control is the way to go, if not, then they should consider maintaining the status quo of monetary and service imperialism enabled by corruption, the WTO, the World Bank, the petro-dollar and the USD as the global reserve currency.
He did not; he got <50% of the total votes at the final tally. People who parrot this are under-informed, or lying to claim a mandate his administration lacks.
So are the electric and cooling costs at Google's scale. Improving perf-per-watt efficiency can pay for itself. The fact that they keep iterating on it suggests it's not a negative-return exercise.
TPUs probably can pay for themselves, especially given NVIDIA's huge margins. But it's not a given that it's so just because they fund it. When I worked there Google routinely funded all kinds of things without even the foggiest idea of whether it was profitable or not. There was just a really strong philosophical commitment to doing everything in house no matter what.
> When I worked there Google routinely funded all kinds of things without even the foggiest idea of whether it was profitable or not.
You're talking about small-money bets. The technical infrastructure group at Google makes a lot of them, to explore options or hedge risks, but they only scale the things that make financial sense. They aren't dumb people after all.
The TPU was a small-money bet for quite a few years until this latest AI boom.
Maybe it's changed. I'm going back a long way, but part of my view on this was shaped by an internal white paper, written circa 2010 by an engineer who analyzed the cost of building a Gmail clone using commodity tech vs Google's in-house approach. He didn't even look at people costs, just hardware, and the commodity tech stack smoked Gmail's on cost without much difference in features (this was focused on storage and serving, not spam filtering, where there was no comparably good commodity solution).
The cost delta was massive and really quite astounding to see spelled out because it was hardly talked about internally even after the paper was written. And if you took into account the very high comp Google engineers got, even back then when it was lower than today, the delta became comic. If Gmail had been a normal business it'd have been outcompeted on price and gone broke instantly, the cost disadvantage was so huge.
The people who built Gmail were far from dumb but they just weren't being measured on cost efficiency at all. The same issues could be seen at all levels of the Google stack at that time. For instance, one reason for Gmail's cost problem was that the underlying shared storage systems like replicated BigTables were very expensive compared to more ordinary SANs. And Google's insistence on being able to take clusters offline at will with very little notice required a higher replication factor than a normal company would have used. There were certainly benefits in terms of rapid iteration on advanced datacenter tech, but did every product really need such advanced datacenters to begin with? Probably not. The products I worked on didn't seem to.
Occasionally we'd get a reality check when acquiring companies and discovering they ran competitive products on what was for Google an unimaginably thrifty budget.
So Google was certainly willing to scale things up that only made financial sense if you were in an environment totally unconstrained by normal budgets. Perhaps the hardware divisions operate differently, but it was true of the software side at least.
> And if the chatbot is serving the ads when I'm using it for creative writing, reformatting text, having a Python function written, etc., I'm going to be annoyed and switch to a different product.
You may not even notice when AI does a product placement opportunistically in creative writing (see Hollywood). There are also plenty of high-intent, assistant-type AI tasks where ads fit naturally.
My limited understanding is that NVIDIA GPUs with CUDA win on smaller batches and jobs, while TPUs win on larger ones. CUDA is just easier to use and better suited to typical small workloads; at some point, for bigger training and inference loads, TPUs start making sense.
I'm used to Python code being "Pythonic" - which is one of those "I know it when I see it" terms.
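For what it's worth, a contrived sketch of what I mean (my own toy example, not from any official style guide) - same behavior, very different feel:

```python
# Non-Pythonic: C-style index loop with manual accumulation.
def sum_even_squares_clunky(nums):
    total = 0
    for i in range(len(nums)):
        if nums[i] % 2 == 0:
            total = total + nums[i] * nums[i]
    return total

# Pythonic: iterate directly over the items and use a generator expression.
def sum_even_squares(nums):
    return sum(n * n for n in nums if n % 2 == 0)

print(sum_even_squares([1, 2, 3, 4]))  # 4 + 16 = 20
```

Both are correct, but most Python reviewers would flag the first version on sight - direct iteration and comprehensions are the "I know it when I see it" part.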