I like Ed, and I do think there's tons of fishy and downright illegal behavior in the AI ecosystem, and while I want Ed to be right (a blind spot I'm trying to mitigate) I think Ed is missing two things.
1. I don't know exactly what it is, maybe the circumstances are just different, but it feels like after 2008 somehow the financial system learned not to collapse. And this isn't a good thing. Inflation, rising unemployment rates, low investment, and few companies going public: maybe these are all symptoms of an economic system that needs to be cleaned out. And this is on top of the general advice that the market can stay irrational longer than you can stay solvent.
It’s probably true that as soon as we get too comfortable and this expectation sets in, we’ll have the collapse, but that could be years off.
2. There are, loosely speaking, two ways for a tool to be successful. One is to apply the tool to a problem, which is the lens Ed looks through; the second is to apply a problem to a tool. Even if LLMs "suck and are useless," as Ed's thesis goes, you can sort of set the expectation that you'll deal with the problem space in a shitty way. Lower expectations.
Well-trained humans will always be better than LLMs at customer service. Who cares? Just lower the quality of customer service. LLMs aren't quite there in writing code? Just keep looping over the problem until they get something 98% of the way there. You can redefine the problem space in a way that makes LLMs the solution, and then undercut the competition on price.
We do this in manufacturing all the time. It’s really hard to build a machine that can assemble a whole car the way a small team of people can, but it’s relatively easy to build a machine that can put a door on a car.
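To make the "keep looping" point concrete, here's a minimal sketch of that retry loop: generate, test, feed the failure back, repeat. `ask_llm` is a hypothetical stand-in for a real model call, faked here so the loop is runnable.

```python
def ask_llm(prompt: str, attempt: int) -> str:
    # Hypothetical stand-in: a real version would call a model API.
    # We fake "improving" answers so the loop is demonstrable.
    if attempt >= 2:
        return "def add(a, b):\n    return a + b"
    return "def add(a, b):\n    return a - b"

def passes_tests(code: str) -> bool:
    # Run the candidate code and check it against a known test case.
    ns = {}
    try:
        exec(code, ns)
        return ns["add"](2, 3) == 5
    except Exception:
        return False

def loop_until_good(prompt: str, max_attempts: int = 5):
    # Keep looping over the problem until the model gets it right,
    # appending each failure to the prompt as feedback.
    for attempt in range(max_attempts):
        candidate = ask_llm(prompt, attempt)
        if passes_tests(candidate):
            return candidate
        prompt += "\nThat failed the tests; try again."
    return None  # 98% isn't 100%: sometimes you give up

print(loop_until_good("Write add(a, b).") is not None)  # True
```

The trick is the same as the car-door machine: the loop only works because the problem was narrowed to something with a cheap, automatic pass/fail check.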
“Do you need a senior engineer to review every AI-generated change?”
I dunno, I vibe code a ton. But I’m not in the product path.
Today I spent all day interacting with a _very_ popular AI-native SaaS tool's admin panel, and it was _awful_. It looks beautiful, but in the past 24 hours it deleted my entire config, the data model is unintelligible, the UI has all sorts of quirks that cause it to misfire, and trying to re-add the config it deleted was flaky. When I say the data model was unintelligible: it conflated the idea of a user and a team, and you literally couldn't tell which you were interacting with, or whether you were changing your own settings or the settings across the organization.
So I don't know if we need code craftsmanship again, where every line of code is specially placed, but vibe coding straight to production is having extremely predictable results. How did we go from a world of careful, incremental change management, where the lines of code were never the bottleneck, to full speed ahead, fuck it?
The problem is that I think it affects your culture. We're building a culture that values speed over everything. I hardly hear anyone talking about deploying safely, or change management; it's ALL speed now. And I want speed! But we used to at least casually glance toward discipline.
We're in for the golden age of cyberattacks, let me tell you.
I don't disagree that organizations are probably making that tradeoff in an uninformed way.
As a counterpoint -- it's rare I've seen a new UX issue fixed with a PR.
If you valued quality you could feature-flag some different improvements, get feedback, and refine. AI is great at this. We have CS directly submitting pull requests now... and they're not junk; 95% of the time they're correct fixes for things people actually want. And it's stuff that would usually sit on the backlog forever. The quality has gone up.
Your experience is representative I'm sure - but I do think there is a way to get this right and those that do will see a big upside.
> We're in for the golden age of cyberattacks, let me tell you.
Do they want people to have a positive view of America? Because it's kind of hard to tell. I almost thought they wanted people to have a negative view of America, to create an isolationism feedback loop. Why would you recruit Musk if your goal was to improve people's view of America? Maybe that is the goal? Sow further division?
We've stopped worrying about what unaccountable social media CEOs might be doing to manipulate the media environment, with Mark Zuckerberg as an advisor to Trump? How is legacy media oppressive but social media algorithms freedom?
Social media is a regulated space, it’s not a free market of ideas, it’s a particular viewpoint shaped by like 5 people. How can you be so skeptical of everything else but be like “stop it guys, Zuck’s got our best interests at heart”
How would mandating more transparency in social media algorithms violate your right to free speech?
What if we just made it so social media companies had to give you more control of the algorithm?
Because we're entering this era where the only freedom we're recognizing is the freedom to be miserable at some extreme end. “Freedom” is just a euphemism for neglect.
To elaborate, people often see freedom as an unqualified good, but I think in a sense the opposite of freedom isn’t slavery but purpose.
Everyone talks about the sense of meaning that came from fighting World War 2, and that was a time when freedom in the world was extremely precarious. There was a draft.
Purpose turns into slavery when you only have one purpose and people use violence to enforce it.
Getting married, having kids, buying a house, having a job. These are all constraints on a person's freedom.
People don't have the power to control their own fate, only physics does.
But we're never truly free; there's a society already built, so it's not really a choice between freedom and oppression, it's always a specific set of freedoms and a specific set of oppressions. So if you close off all the doors people can use and leave only the traps, it's not freedom to choose whatever trap you want.
But really, you look around and you see freedom? Really? Sorry for thinking bigger than that.
Maybe the opposite of what I said is true: we're so not free right now that people are holding onto whatever freedoms they can, not realizing the way they impact the bigger systems.
Maybe what we need is not just specific interventions but antitrust law. Less control by any big organization, governmental or otherwise.
Do you ever wonder if we've gone too far, though? Out of fear of relatively accountable governments manipulating the flow of information, we've put like five capitalists in charge of the flow of all information. Sure, there is a danger in giving the government more power, and that danger is not theoretical. But we're in a dangerous situation right now!
It feels like we're free-speeching ourselves into the most constrained and controlled speech environment in history.
If freedom of speech is supposed to do something like create an environment where people can present alternate ideas and speak out against the government, social media is raising the noise floor so much that it's basically impossible.
Aside from that, no one seems concerned that Zuckerberg is on an advisory panel on AI reporting directly to Trump? The fox is in the henhouse.