
I guess this is a trend now because it's a contrarian / attention-grabbing headline. See:

- "Thousands of CEOs just admitted AI had no impact on employment or productivity..." https://fortune.com/2026/02/17/ai-productivity-paradox-ceo-s...

- “Over 80% of companies report no productivity gains from AI…” https://www.tomshardware.com/tech-industry/artificial-intell...

But fundamentally, large shifts like this are like steering a supertanker: the effects take time to percolate through economies as large and diversified as the US. This is the Solow paradox / productivity paradox: https://en.wikipedia.org/wiki/Productivity_paradox

  > The term can refer to the more general disconnect between powerful computer technologies and weak productivity growth


I keep seeing the "Productivity Paradox" highlighted over and over again. I think one thing people are missing with this specific technology is that, unlike many of the comparisons (computers, internet, broadband, etc.), AI in particular doesn't have a high requirement at the consumer side. Everyone already has everything they need to use it.

There will be a period, like the one we are in now, where dramatic capability gains (like the recent coding gains) take a while for people to adapt to. However, I think the change will be much faster this time. Even the speed of uptake in coding tools over the last 3 months has been faster than I predicted. I think we'll see other shifts like this in different sectors, where things change almost entirely over the course of a few months.


> AI in particular doesn't have a high requirement at the consumer side. Everyone already has everything they need to use it.

That isn't actually true though: right now everyone has a hard dependency on a cloud service, which is currently sold to them at a deep discount by companies that are losing billions.

When the market eventually corrects, it'll be interesting to see how much AI ends up costing. At the very least it will be comparable to the broadband internet connection you mentioned; possibly a whole lot more.


>That isn't actually true though: right now everyone has a hard dependency on a cloud service, which is currently sold to them at a deep discount by companies that are losing billions.

Isn't that a huge red flag? If customers are being given this product at a discount and it still isn't showing a positive ROI for them, what makes people think it will improve once we're charged full price?


I think most people just assume it's magic, and are too awestruck by the hype to think critically.

Financially this feels similar to Uber's business plan in the 2010s; undercut the market with unsound pricing propped up by venture capital (PE was literally subsidising taxi fares; they admitted this and their intention to readjust, but no one seemed to care) then stop manipulating the market and allow fares to even out at (gasp) what it cost to get a cab before Uber.

The difference here is that the LLM market's product is human productivity: enormous subsidies are afforded to Anthropic, OpenAI, etc. in the form of VC money or compute credits, but eventually those debts will be called in, the free-to-use tier will vanish because it's simply not profitable, and we'll be left with several premium products that only a few people will actually pay for; even then, that may not be enough to cover their costs. That's when the bubble will burst.


Actually I think there’s another option.

There's the scenario where LLMs get more efficient for their size, and you will be able to get 2026 SOTA performance from a consumer-grade laptop.

Sure, with a 1000B-parameter model you will get better performance, but the average person will have it write some Python script, not derive new physics equations.

So in a sense the demand for LLM intelligence will reach a plateau (arguably we are already there today for the average person), so no subsidy will be required, because the average person will not need the latest and greatest.

There's not the same demand pattern for something like Uber.


> There's the scenario where LLMs get more efficient for their size, and you will be able to get 2026 SOTA performance from a consumer-grade laptop.

But isn't that bad for the AI companies, too? Because then people can just run a ~2026 SOTA open-source model on their laptop for free and not pay for any subscription.


Yes and no.

Regular folks will not pay Anthropic, but the NSA, NASA, or research labs might.

I’m not implying this will be a good time for AI companies. I am saying AI as a technology can provide value without it being controlled by only 3 companies.


In a hypothetical future with 2026-level LLMs on a (high-end) consumer laptop, I still think that the majority of buyers would prefer to pay 20 USD/month for a service, just for the convenience and flexibility.


> In a hypothetical future with 2026-level LLMs on a (high-end) consumer laptop, I still think that the majority of buyers would prefer to pay 20 USD/month for a service, just for the convenience and flexibility.

$20 a month is a lot of money. I don't think the "convenience and flexibility" you get would actually be worth it, unless 1) you've got money to burn, 2) you lack the skills to install software, or 3) the open-source community totally fails to develop a reasonable installer. The LLM service would probably be akin to a scam preying on ignorance, like those companies that will rent you a water softener for like $100/month.


A lot compared to what? I believe that an LLM-capable laptop will cost considerably more than something that is good enough for non-LLM productivity tasks, at least within the next 5 years. Say it costs 600 USD more; that premium would buy 30 months of subscription. In that kind of scenario, I think many people will favor the subscription.
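The trade-off above is a simple break-even calculation. A minimal sketch, using the comment's assumed figures (a hypothetical 600 USD hardware premium and a 20 USD/month subscription, not real prices):

```python
# Break-even sketch for the scenario above. The numbers are the comment's
# assumptions (600 USD hardware premium, 20 USD/month subscription),
# not real prices.

def breakeven_months(hardware_premium_usd: float, monthly_fee_usd: float) -> float:
    """How many months of subscription the hardware premium would buy."""
    return hardware_premium_usd / monthly_fee_usd

print(breakeven_months(600, 20))  # → 30.0
```

Under these assumptions the laptop only pays for itself after 30 months, which is why a subscription can look attractive even when local inference is technically possible.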


Is it actually being sold at a steep discount? Anthropic's CEO has stated they have high margins on inference, so training is the big cost center.


> Anthropic's CEO has stated they have high margins on inference, so training is the big cost center.

I'm pretty sure that in corpo-speak "inference" excludes the cost of datacenter construction, GPUs and other hardware, manual data cleaning, R&D, administration, etc - basically everything except the power bill for inference.

I have absolutely no problem with companies that run inference only (plenty of them offer open models as a service); they're useful and their accounting can be believed. But they don't have near-trillion-dollar valuations, and they don't misallocate capital on the vast scale that the frontier-model companies do.

The point of the OP is that closed models don't pay for themselves and, on the scale of the US economy, they provide minuscule economic advantages compared to the enormous investments they consume.


They've raised 70-ish billion (which they have not spent all of) and have a run rate of 14 billion/y as of now. All said and done, those are great economics so far, even accounting for those extra expenses.


Your argument requires the run rate to reduce over time until OpenAI reaches profitability. However, even OpenAI has publicized that they expect their expenses to exponentially increase for their models to remain competitive.

So they are not profitable now, and they have no idea when they ever will be.

Worse, Gemini has guaranteed funding for continued training whenever the AI hype bubble pops.

Anthropic & OpenAI's only saving grace is that Google is generally terrible at product.


> Your argument requires the run rate to reduce over time until OpenAI reaches profitability

I was talking about Anthropic, but run rates don't need to go down, they just need to scale with revenue. For Anthropic specifically, this seems to already be the case.

OpenAI I don't know much about, but it would make sense if they were running at a terrible loss due to the ubiquity of free ChatGPT.

> Worse, Gemini has guaranteed funding for continued training whenever the AI hype bubble pops.

I don't see a scenario in which Anthropic has any problem financing their activity given their conversion rate of inputs to recurring revenue. Generally, bubbles popping means companies with bad balance sheets and bad economics die, but that just doesn't apply to Anthropic IMO.

OpenAI though, hard to say. They've lost all of the good will being the first mover gave them at this point, so they'll need to really lead product to make the economics work for them.


> Is it actually being sold at a steep discount? Anthropic's CEO has stated they have high margins on inference, so training is the big cost center.

They're spending more than they're making. For the foreseeable future, saying "we could be profitable if we stopped training" is goofy, because they can't stop. If they do, no one will want to use their product, because it will be overtaken by competitors within three months.

I get it that in 10 years all of this might peak and we're gonna be content using old models, but that'll be a very different landscape and Anthropic might not be a part of it anymore if they don't start making money before that.


> I get it that in 10 years all of this might peak and we're gonna be content using old models

I would personally be happy using gpt 5.3 codex for the foreseeable future, with just improvements in harnesses

IMO we're already at the point where, even if these companies collapse and the models end up being sold at the cost of inference (no new training), we would be massively ahead


That's a perfectly valid approach if you can balance capex and revenue. Why stop and try to be profitable when the economy is giving you the liquidity to push that down the road?

Models are already super useful, but if you can make them more useful by burning cash people are willing to hand you, why not?


Well, training isn't going to end any time soon if these companies keep competing with one another while they're neck-and-neck, so I'm not sure why you would ignore the cost of training in the ROI calculation.


Do the cumulative earnings from inference on a single model exceed its training costs?

That's... kinda the question.


Amodei says yes - each model pays for its training. But they're scaling up investment for each new run, so they're still happily in the red.

Also, that may only be the case for Anthropic, who have fewer free users, a large enterprise business, and less generous rate limits on their subscriptions. I don't know if OpenAI or Google have commented. I suspect OpenAI is in a worse position given their massive non-paying consumer base.


Then why are they stopping people from having multiple Max plans, if they are making such good margins on inference?


They have good margins on inference at API costs, i.e. $5/$25 per mtok input/output. They are almost certainly making losses on subscriptions, at least if people max out rate limits.

In the past 30 days I have burned $78.19 in API token costs with my $20/month Claude Pro subscription. In January I burnt over $300 in API token costs.
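The gap above is easy to sketch. The per-Mtok prices below are the $5/$25 figures quoted upthread; the token volumes are hypothetical, chosen only to land near the $78 figure:

```python
# API-equivalent cost vs. a flat subscription. Prices per million tokens
# are the $5/$25 figures quoted upthread; the usage volumes are hypothetical.

INPUT_USD_PER_MTOK = 5.0    # input price quoted upthread
OUTPUT_USD_PER_MTOK = 25.0  # output price quoted upthread

def api_equivalent_cost(input_mtok: float, output_mtok: float) -> float:
    """What the same month's usage would cost at API rates."""
    return input_mtok * INPUT_USD_PER_MTOK + output_mtok * OUTPUT_USD_PER_MTOK

subscription_usd = 20.0  # flat monthly Pro price
usage_usd = api_equivalent_cost(input_mtok=6.0, output_mtok=1.9)  # hypothetical volumes
print(f"API-equivalent: ${usage_usd:.2f} vs subscription: ${subscription_usd:.2f}")
```

A heavy user can burn several times the subscription price in API-equivalent tokens, which is exactly the cross-subsidy from lighter users that the replies describe.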


Because the power users of the Max plan are subsidized at the upper end of usage by people who don't approach the per-account limit. In other words, the power users are getting more than they pay for, because most people don't reach that threshold. If you let the power users have dozens of accounts, it has a multiplier effect on the proportion of accounts breaching the profitability line.


They are likely aiming to maximize reach/mindshare. Get as many people hooked as possible. More important than minor upside from a few multi-Max users.

EDIT: also, the casual or gym-style members that pay every month but barely use the service are of course very valuable wrt margins


>That is currently sold to them at a deep discount by companies that are losing billions.

They're not losing billions on inference, they're losing billions in the arms race of training.


At the large insurance company I'm doing some work for, the big capability gains have yet to materialize. There are some pockets of workflow innovation, but big institutions carry a kind of inertia and are slow to adapt.

But as the organization slowly learns and adapts I'm sure the capability gains will materialize.


> AI in particular doesn't have a high requirement at the consumer side

Effective use of these AI tools needs high critical-thinking skills, which are in short supply.


I would argue that the leadership and financial backing behind AI (in its current form) does not have the patience or level-headedness to treat it as a long-term change, and is very much trying an all-or-nothing approach: making a long shift happen in a few years instead, or burning through nation-level budgets trying.

To my eyes, the problem is not the productivity gain arriving slowly, but the immediate draining of funding from virtually all other areas of innovation.


This. They created an innovation black hole and we will all pay the long-term consequences of it


> This. They created an innovation black hole and we will all pay the long-term consequences of it

But they may get rich soon, which is all that really matters to them.


This isn't new.

"The Productivity Paradox" is what they called it when people were skeptical that computers would end up finding a place in the office. There are articles from the 90s complaining about how much people were spending on buying computers for no real impact on productivity https://dl.acm.org/doi/10.1145/163298.163309


Even the source article in the first link, https://www.nber.org/papers/w34836, notes that the same firms "predict sizable impacts" over the next three years, and that late 2025 was an inflection point for a lot of companies.


All of the technologies mentioned eventually made things better. In order to work, gen AI requires a general acceptance of widely spread mostly mediocre outcomes. I don’t see how the comparison stands.


Seems like it's an ever-shifting goalpost: we are told that tons of layoffs, etc., are already happening due to the tech, and yet, when quantified, it's debatable whether there have been any gains at all.


How do we reconcile this with all the narratives of how powerful AI is, how it can perform right now at the same level as engineers, and so on?

Once confronted with reality we have a "productivity paradox"?


I'll take it over the seemingly endless deluge of FUD-slop from the past 4 years claiming you'd better get ready for the AI takeover coming for all the jobs, on a just-long-enough timeline that nobody will remember to hold the author accountable when the prediction proves woefully incorrect, and where the "advice" in the article is conveniently to pay for more AI tools.



