> There are certain bullsh*t jobs out there — some parts of management, consultancy, jobs where people don’t check if you’re getting it right or don’t know if you’ve got it right.
I suggest AI is cover to rein these jobs in. All those people who had a nice-paying job but did about two hours of work a day: AI is coming for them. In some respects, management previously looked the other way, but that is becoming less frequent, and it's easy to blame the reduction on AI.
This may just be wishful thinking, but is it reasonable to hope that it won't hit the middle class as hard this time around? It seems like most of the people holding the AI bags are the very wealthy, and these AI companies don't seem to be employing a huge number of people.
The argument about Tether wasn't that they didn't have any assets backing the coins. It was that the assets they had were riskier than the boring <1 mo maturity treasuries they should be holding. Just because Tether didn't implode doesn't mean implosion wasn't a very real possibility. It's not very different from "the market can stay irrational longer than you can stay solvent"
every penny I made in the market over the last 30 years can be attributed, at least in part (or entirely), to exactly this. but this has to be backed by fundamentals. and fundamentals are weakening… this is a good read on the recent OpenAI mess, though it applies industry-wide - https://www.wheresyoured.at/openai400bn/
People here are still in denial that crypto will ever have a use case, meanwhile you have Larry Fink saying that he wants to tokenize the financial ecosystem.
Tokens do have use cases, obviously. Like, we can see countless use cases with our own eyes. The argument was that tokens don't have any use case that is both legal and competitive. All of those castles-in-the-sky constructs: how there would be property deeds on blockchain (technically and legally impossible), how there would be game assets on blockchain (also technically impossible, plus no game studio would ever be interested), how ticket scalping would be solved on blockchain (technically possible, but no ticket vendor is interested, because they are the ones who benefit from scalpers), etc. And the list goes on. All of those legal use cases were a dud, because it is simply a shitty technology.
But to reiterate, yes, there is a massive actual use case for tokens. No one would argue against that :) . We just think that it is bad.
And they did not, in fact, have dollars to back them up. They went without them for a few years continuously. The lesson is: never bet even on a surefire stake if there is market corruption involved, or if mafia money is involved. In the case of Tether it was both.
It was a good lesson for me personally: always check the wider picture and consider unknown factors.
“At the heart of the note is a golden rule I’ve developed, which is that if you use large language model AI to create an application or a service, it can never be commercial.
One of the reasons is the way they were built. The original large language model AI was built using vectors to try and understand the statistical likelihood that words follow each other in the sentence. And while they’re very clever, and it’s a very good bit of engineering required to do it, they’re also very limited.
The second thing is the way LLMs were applied to coding. What they’ve learned from — the coding that’s out there, both in and outside the public domain — means that they’re effectively showing you rote learned pieces of code. That’s, again, going to be limited if you want to start developing new applications.”
Frankly kind of amazing to be so wrong right out of the gate. LLMs do not predict the most likely next token. Base models do that, but the RLed chat models we actually use do not — RL optimizes for reward and the unit of being rewarded is larger than a single token. On the second point, approximately all commercial software consists of a big pile of chunks of code that are themselves rote and uninteresting on their own.
They may well end up at the right conclusion, but if you start out with false premises as the pillars of your analysis, the path that leads you to the right place can only be accidental.
The base model is a pure next token predictor. It just continues whatever prompt you give it — if you ask it a question, it might just keep elaborating the question. To turn these models into something that can actually chat (and more recently, that can do things like tool calls) they do a second phase of training, including reinforcement learning, which teaches the model to maximize some kind of reward signal meant to represent good answers of various kinds. This reward signal applies at the level of the whole response (or possibly parts of the response) so it is not predicting the most likely next token. I don’t know in an absolute sense how much this ends up changing the base model weights, and it’s surprisingly hard to find discussions of this, I guess because the state of the art is quite secret. But it’s clear that RL is important for getting the models to become useful.
There are other posttraining techniques that are not strictly speaking RL (again, not an expert) but it sounds to me like they are still not teaching straightforward next token prediction in the way people mean when they say LLMs can’t do X because they’re merely predicting the most likely next token based on the training corpus.
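To make the distinction concrete, here is a toy sketch, nothing like production RLHF, just an illustration of the idea: a pure next-token model samples from its learned distribution, while a REINFORCE-style update scales the gradient for every token in a response by a reward for the whole response. The vocabulary, reward function, and learning rate are all made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["yes", "no", "maybe"]
logits = np.zeros(len(VOCAB))  # toy "policy" over a 3-token vocabulary

def probs():
    e = np.exp(logits - logits.max())
    return e / e.sum()

def sample_response(length=4):
    # Base-model behavior: sample token by token from the current distribution.
    return [int(rng.choice(len(VOCAB), p=probs())) for _ in range(length)]

def reward(response):
    # Stand-in for a preference/reward model: scores the *whole* response.
    return sum(1.0 for t in response if VOCAB[t] == "yes")

# REINFORCE: one reward for the full response scales the gradient for
# every token in it, so the unit of optimization is the response,
# not the likelihood of each next token.
lr = 0.1
for _ in range(500):
    resp = sample_response()
    r = reward(resp)
    p = probs()
    g = np.zeros_like(logits)
    for t in resp:
        onehot = np.zeros_like(logits)
        onehot[t] = 1.0
        g += onehot - p          # gradient of log p(t) w.r.t. the logits
    logits += lr * r * g         # whole-response reward, not a per-token loss

print({w: round(float(x), 2) for w, x in zip(VOCAB, probs())})  # mass shifts to "yes"
```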
Thanks for the explanation and the link. I learned something today.
I'm definitely not an expert, but to me, RL and other techniques look like a guide or a constraint on top of the underlying 'next token prediction' concept. What I don't get is: is this all about training, or about inference?
In any case, this is still an eye opener and I need to study this a bit more.
When talking about inference, what are the models from Hugging Face composed of, then? Because they can do agentic stuff, no?
I've quipped a lot here about s/AI/statistics/g, but the applications where that is most straightforwardly true are probably also the most solid, the ones that are going to produce a lot of value over the long term.
Before computers came along, we really couldn't fit curves to data much beyond simple linear regression. Too much raw number crunching to make the task practical. Now that we have computers—powerful ones—we've developed ever more advanced statistical inference techniques and the payoff in terms of what that enables in research and development is potentially immense.
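As a concrete example of the kind of routine nonlinear fit that was impractical by hand, here is a minimal sketch using scipy; the saturating-exponential model and the synthetic data are made up for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

# A saturating exponential has no closed-form least-squares solution,
# so fitting it requires iterative numerical optimization.
def model(x, a, b):
    return a * (1 - np.exp(-b * x))

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 50)
y = model(x, 3.0, 0.7) + rng.normal(scale=0.1, size=x.size)  # noisy observations

params, _ = curve_fit(model, x, y, p0=[1.0, 1.0])
print(params)  # recovers roughly [3.0, 0.7]
```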
Yep. Right now it's hard for biomed companies to compete on salary against the AI craze, but if the bubble bursts, salaries will come back down to earth. Deep/machine learning will, imo, prove to have large societal benefits over the next decade.
The bubble referenced in the article is $1 Trillion, compared to Google's $3 trillion market cap. And OpenAI / Anthropic legitimately compete with Google Search. I feel weirdly like AI's detractors are somehow drinking too much of the AI Kool-Aid. All AI has to do to justify these valuations is capture 1/3rd of Google. Unless Google is wildly overvalued, which it may be, but that's not a phenomenon that has anything to do with AI hype.
And there are legitimate applications beyond search. I don't know how big those markets are, but it doesn't seem that odd to suggest they might be larger than the search market.
Most of Google's value is the moat they've built around the things that bring in money... their advertising market, google play store, vertical integration, etc. See also Doctorow's Chokepoint Capitalism.
Building even a tiny fraction of those moats is mind-bogglingly difficult. Building a third of that moat is insanely hard. To claim that the AI industry's "expected endgame moat size" is one-third of Google's current moat is a ludicrous prediction. You'd be better off playing the lottery than making that bet.
I would be happy to bet against this if I could do it without making a Keynes-wager (that I can remain solvent longer than markets remain irrational), but I see no way to do so. Put options expire, futures can be force-liquidated by margin calls, and short sales have unlimited downside risk.
Is there a reason why AI cannot be far better than Google at providing results to queries?
Inherently, they are in the same business, but I am not aware of any AI aimed squarely at Google's business... though it is completely logical that they would.
Furthermore, it appears that Google just sells off placement to the highest bidder, and these AIs could easily beat that by giving free AI access in exchange for nibbling at the queries and adding a tab of 'sponsored results'
Yeah it's like you have all these people staring at a little baby saying "theoretically, based on this baby's DNA, it could become the greatest basketball player of all time, if it trains for 4 hours a day everyday for the next 20 years, and suffers no injuries"
Meanwhile you have Lebron who is already the highest scoring player of all time, and he's still going out every night and putting up another 20 points.
Comparing potential to actual at 1:1 ratio is insane.
> Is there a reason why AI cannot be far better than Google at providing results to queries?
No, but there is reason to suspect that other AI players have a big challenge there: Google is a trillion-dollar company that is a leader in both search and AI, and whose investment in AI has always been significantly about both improving its ability to respond to queries and avoiding the need for queries by proactively supplying information.
And also that the most productive means of using AI to respond to queries about material facts continues to rely on something search-like supporting an LLM for grounding.
> Furthermore, it appears that Google just sells off placement to the highest bidder, and these AIs could easily beat that by giving free AI access in exchange for nibbling at the queries and adding a tab of 'sponsored results'
Google already provides free AI access, uses it by default to respond to most queries, and puts it above the sponsored results, with a link to go into a more focussed AI interface for further exploration.
Usually not - the people writing these comments have neither the understanding nor the courage of their conviction to bet based on their own analysis.
If they did, the articles would look less like “wow, numbers are really big,” and more like, “disclaimer: I am short. Here’s my reasoning”
They don’t even have to be short for me to respect it. Even being hedged or on the sidelines I would understand if you thought everything was massively overvalued.
It’s a bit like saying you think the rapture is coming, but you’re still investing in your 401k…
Edit: sorry to respond to this comment twice. You just touched on a real pet peeve of mine, and I feel a little like I’m the only one who thinks this way, so I got excited to see your comment
That sounds like a variation on: "If you're so smart, why aren't you rich?" which rests on some very shaky (yet comforting) set of assumptions in a "just world."
Heck, just look at yesterday: Myself and several million other people wouldn't have needed to march if smart people reliably ended up in charge.
I think it's more valuable to flip the lens around, and ask: "If you're so rich, why aren't you smart?"
Fair point - meaning, you can be right (and rich) but for the wrong reasons? Like… you can place your bet based on a coin flip and get it right without actually being smart?
While it seems foolish to discount all effect from individual agency or merit, we do know that random chance is sufficient to lead to the trends we see. [0] Much like how an iceberg always has some ~10% portion above the water: The top water molecules probably aren't special snowflakes (heh) compared to the rest, we're mostly just seeing What Ice Does.
Combine that with how humans seem hardwired to dislike/ignore random chance, and it's reasonable to think we overestimate the importance of personal qualities in getting rich. Consider how basically anyone flipping a coin starts thinking of causal stories like "hot streaks" or "cold streaks" or "now I'm overdue for a different outcome", even when they already know it's 50/50.
A simple trading simulation of equally-smart equally-lucky agents still demonstrates oligarchic outcomes [0]. When you also add a redistributing effect (like taxing the rich to keep the poor alive) it generates outcomes that resemble real-world statistics for different countries.
> If you simulate this economy, a variant of the yard sale model, you will get a remarkable result: after a large number of transactions, one agent ends up as an “oligarch” holding practically all the wealth of the economy, and the other 999 end up with virtually nothing.
> It does not matter how much wealth people started with. It does not matter that all the coin flips were absolutely fair. It does not matter that the poorer agent's expected outcome was positive in each transaction, whereas that of the richer agent was negative. Any single agent in this economy could have become the oligarch—in fact, all had equal odds if they began with equal wealth.
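The quoted result is easy to reproduce. Here is a minimal sketch of the yard-sale variant, with stake fractions assumed from Boghosian's write-up (the poorer party wins 20% or loses 17% of their own wealth on a fair flip, so their expected gain per transaction is positive):

```python
import numpy as np

rng = np.random.default_rng(42)
N = 1000
wealth = np.ones(N)  # everyone starts with equal wealth

for _ in range(1_000_000):
    i, j = rng.integers(N, size=2)
    if i == j:
        continue
    p, r = (i, j) if wealth[i] <= wealth[j] else (j, i)  # poorer, richer
    # Fair coin flip; the stake is a fraction of the *poorer* party's
    # wealth, tilted in the poorer party's favor (win 20%, lose 17%).
    delta = 0.20 * wealth[p] if rng.random() < 0.5 else -0.17 * wealth[p]
    wealth[p] += delta
    wealth[r] -= delta

wealth.sort()
print(f"top 1% share: {wealth[-N // 100:].sum() / wealth.sum():.0%}")
```

Even with the odds tilted toward the poorer party, the top share keeps growing as you add transactions: each flip is multiplicative, and sqrt(1.20 × 0.83) < 1, so the poorer side trends down in log terms.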
I like to think of becoming wealthy like catching a ball at an MLB game.
First, you have to show up at a game in person. No one watching the game on TV or ignoring it altogether is catching a ball.
Next, you have a greater chance at catching a ball if you bring a glove.
Then, it also helps your chances if you've practiced catching balls.
However, all of that preparation is for naught if a ball is never hit to you.
For every person who strikes it rich, there are hundreds if not thousands of people who were just as smart, worked just as hard, and did all the same right things, but they simply didn't make it.
or you could sell a single broad-market ETF, lol. or buy a short ETF.. it hasn't been hard to selectively expose yourself to dang near any slice of equities since the ETF boom
Short ETFs are usually leveraged and make for a really good way to lose money.
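The usual failure mode is volatility drag, which a two-day toy example makes obvious (perfect daily -2x tracking assumed, no fees):

```python
# Index goes up 10% then falls back to its starting level; the -2x ETF
# does the opposite of each daily move and ends up down anyway.
index, inverse2x = 100.0, 100.0
for daily_move in (0.10, -0.10 / 1.10):  # +10%, then back to 100
    index *= 1 + daily_move
    inverse2x *= 1 - 2 * daily_move
print(round(index, 2), round(inverse2x, 2))  # ~100.0 vs ~94.55
```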
Realistically, timing is the issue. "This is a bubble" is worth ~nothing. "This is a bubble and it will pop in late December" is worth a lot if you're correct.
I feel like any naive question about investing can be answered with "markets can remain irrational longer than you can remain solvent"
The bubble is the manifestation of this concept. Things should be falling apart, yet they keep going up, for longer than it is reasonable; at some point, bearish investors lose so much money they decide it's better just to ride the wave up, growing the bubble even further, until it bursts and everybody loses.
There is a reason investors flock to gold during these times. The best move is not to play (though you don't want to hold too much cash either)
A market bubble is essentially a gambling event gone wrong. Shorting stock is widely recognized by people smarter than me as high-risk gambling, due to multiple factors. So now please tell me, why would people concerned about gambling gone wrong voluntarily engage in reverse gambling themselves? Let's imagine football and a spectator who is moderately in the know about the sport. He sees multiple people gambling large sums on a team he deems likely to lose. Why would such a person go and bet unreasonable sums on the opposite team, even if it's a likely win? It's still gambling, and still not a reasonably well-defined event.
tl;dr - it is really tiring reading these "clever" quips about "why won't you short it then?", mainly because they are neither clever nor in any way new. We heard the same thing for a decade: "why won't you short BTC then?". You are not original.
> If you predicted next week's lottery numbers, I'd be very suspicious if you didn't buy a ticket
But that is not what is happening here, is it?
If you were able to predict a lotto number that has a high probability of appearing within the next 24 months, but each ticket cost $2000 to buy, would you still be suspicious?
I find that the people of the opinion "If you think this is a bubble, why aren't you shorting it" don't really have much of a grounding in statistics, especially with regard to EV.
I also find it odd that so many people saying "Why don't you short it" have never heard "The market can remain irrational longer than you can remain solvent."
This is also why all online stock pundits are full of shit. None of them will publicly disclose their P&Ls from trading, because they make most of their money from YouTube and peddling courses.
Why, yes I can; that's why I could not but wonder at your statement. I can see it as being directly exponential in burning through cash and GPUs, and inversely exponential in terms of benefits delivered. What else do you have for us?
Even if this is true, a possible takeaway is that after the bubble bursts and the dust settles, AI's effect will be 17 times stronger than that of the Internet...
Personally, I think it will end up being much higher, but that doesn't mean I'm going to invest in it any time soon
Bubble or not, what does that really change? The economy will rise and fall one way or another; it really moves in cycles. If the bubble pops, it will be a sharper fall. Unless you own AI or tech stocks, it's probably not a big deal.
It's disingenuous because since the dotcom bubble there has been at least 2x inflation, and on top of that the tech market has expanded far beyond what it was in 1999, so of course it will be bigger. This is nothing.
It's not a bubble yet. Many companies are already getting direct value out of AI. The dotcom bust happened because there were lots of unsustainable business models. I don't see them as equal.
> Despite $30–40 billion in enterprise investment into GenAI, this report uncovers a surprising result in that 95% of organizations are getting zero return
> Many companies are already getting direct value out of AI.
Many companies were already getting direct value out of the internet during the dotcom bubble. Bubbles do not require the absence of real value being delivered by the bubble industry; they require levels of investment that anticipate more real value than can actually be delivered on a time frame that would make existing valuations across the industry sustainable.
> The dotcom bust happened because there were lots of unsustainable business models.
There are lots of unsustainable business models in the AI space, too.
If you are looking at OpenAI, Google, and Anthropic (even though they, too, may be somewhat inflated), you are making the same mistake as looking at Google (ironically) during the dotcom bubble.
Meta was profitable long before it went public and never had any significant losses, and Amazon had profitable unit economics and was investing in real things like warehouses.
But still, that is the ultimate survivorship bias. Is each new customer that Cursor gains bringing in more money than they cost Cursor?
if we learned anything over the last decade or so, it is that profitability is absolutely irrelevant. just look at Uber… value is the only thing that matters; you can be significantly unprofitable for a very, very long time
True. But it's the whole idea that if they lose a lot of money now, they will definitely be successful later. This is the thought process that lets these startups - especially a lot of the YC companies - underpay developers and give them equity that will statistically be worthless.
Market Analyst, perhaps?