Vibe coding pushes errors rightward (i.e., later in the development cycle, where they're more expensive to find), but using AI to speed up typing or summarize documentation doesn't. Vibe coding will fail, but that doesn't mean using AI to code will fail. You're looking at one (admittedly stupid) use case and generalizing too hastily.
If I have an LLM fix a bug where it gets feedback from the type checker, linter, and tests in real time, no errors were pushed rightward.
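That loop is simple to sketch. Here's a minimal, hypothetical version in Python: the fix step and check suite are passed in as callables (in practice the fix would come from an LLM and the checks would shell out to your type checker, linter, and test runner; all names here are made up for illustration):

```python
def fix_until_green(apply_fix, run_checks, max_attempts=5):
    """Repeatedly apply a proposed fix, then rerun the check suite.

    apply_fix:  callable that applies the next candidate fix
                (e.g. an LLM-generated patch).
    run_checks: callable returning a list of failures from the
                type checker, linter, and tests ([] means green).
    Returns the attempt number on which everything passed.
    """
    failures = []
    for attempt in range(1, max_attempts + 1):
        apply_fix()
        failures = run_checks()
        if not failures:
            return attempt  # all checks green: nothing pushed rightward
    raise RuntimeError(f"still failing after {max_attempts} attempts: {failures}")
```

The point of the structure is that nothing ships until the checks pass, so the feedback stays on the left side of the cycle. What the loop can't check for you is the tech-debt part below.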
It’s not a free lunch though. I still have to refactor afterwards or else I’ll be adding tech debt. To do that, I need to have an accurate mental model of the problem. I think this is where most people will go wrong. Most people have a mindset of “if it compiles and works, it ships.” This will lead to a tangled mess.
Basically, if people treat AI as a silver bullet for dealing with complexity, they’re going to have a bad time. There still is no silver bullet.