I worry people are lacking context about how SaaS products are purchased if they think LLMs and "vibe coding" are going to replace them. It's almost never about the feature set. Often it's capex vs opex budgeting (i.e., it's easier to get approval for a monthly cost than an upfront capital cost), but the biggest one is liability.
Companies buy these contracts for support and to have a throat to choke if things go wrong. It doesn't matter how much you pay your AI vendor, if you use their product to "vibe code" a SaaS replacement and it fails in some way and you lose a bunch of money/time/customers/reputation/whatever, then that's on you.
This is as much a political consideration as a financial one. If you're a C-suite and you let your staff make something (LLM generated or not) and it gets compromised then you're the one who signed off on the risky project and it's your ass on the line. If you buy a big established SaaS, do your compliance due-diligence (SOC2, ISO27001, etc.), and they get compromised then you were just following best practice. Coding agents don't change this.
The truth is that the people making the choice about what to buy or build are usually not the people using the end result. If someone down the food chain has to spend a bunch of time on "brittle hacks" to make their workflow work, the decision makers aren't going to care at all. All they want is the minimum possible that meets whatever the requirement is and isn't going to come back to bite them later.
SaaS isn't about software, it's about shifting blame.
There's little to no evidence that companies are actually doing layoffs to focus on "AI-enabled" work.
All there is are layoffs because of interest rates and concerns about the economic outlook. Companies are using "AI" as a fig-leaf justification, and people are apparently falling for it.
> Reviews are billed on token usage and generally average $15–25, scaling with PR size and complexity.
You've got to be completely insane to use AI coding tools at this point.
This is the subsidised cost to get users to use it, it could trivially end up ten times this amount. Plus, you've got the ultimate perverse incentive where the company that is selling you the model time to create the PRs is also selling you the review of the same PR.
The bet is that compute gets cheap enough before the crunch that it won't matter. You should model it at 10x - but you also need to factor in NPV and opportunity cost. Even if pricing spikes later, the value extracted at today's rates might still put you ahead overall.
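To make the NPV point concrete, here's a hypothetical back-of-envelope model. All numbers are illustrative assumptions (a made-up per-review value, today's ~$20 subsidised price, a 1% monthly discount rate), not real pricing: the point is just that benefit banked early at subsidised rates is discounted only lightly, and you can stop paying if the price ever spikes 10x.

```python
def npv(cashflows, rate):
    """Net present value of per-period cashflows at a per-period discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

review_value = 60.0      # assumed value delivered per automated review
subsidised_cost = 20.0   # today's ~$15-25 per review
rate = 0.01              # ~1% monthly discount rate

# Adopt now, then stop if/when pricing spikes to 10x: the subsidised-period
# benefit is already captured, so later price increases don't claw it back.
adopt_then_drop = [review_value - subsidised_cost] * 12 + [0.0] * 12
print(f"NPV over 24 months: {npv(adopt_then_drop, rate):.2f}")
```

Under these assumptions the adopt-now NPV stays positive even assuming you abandon the tool entirely once the subsidy ends, which is the opportunity-cost argument in a nutshell.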
The relevant comparison for most enterprise isn't whether $15/PR is subsidised - it's whether it beats the alternative. For most shops that's cheap offshore labour plus the principal engineer time spent reviewing it, managing it, and fixing what got merged anyway. Most enterprise code is trivial CRUD - if the LLM generates it and reviews it to an equivalent standard, you're already ahead.
> At work, all that matters is that value is delivered to the business. Code needs to be maintainable so that new requirements can be met. Code follows design patterns, when appropriate, because they are known solutions to common problems, and thus are easy to talk about with others. Code has type systems and static analysis so that programmers make fewer mistakes.
This is a narrow view of software engineering. Thinking that your role is "code that works" is hardly better than thinking you're a "(human) resource that produces code". Your job is to provide value. You do that by building knowledge, not only of the system you're developing but of the problem space you're exploring, the customers you're serving, the innovations you can do that your competitors can't.
It's like saying that a soccer player's purpose is "to kick a ball" and therefore a machine that launches balls faster and further than any human will replace all soccer players, and soon all professional teams will be made up of robots.
I think your view is sentimental. For businesses, the code usually IS the value, and devs ARE human resources that produce code. It sounds cynical, but it's basically how most orgs operate. From the company's POV, employees function as cogs in a larger system whose purpose is to generate value, given that businesses are structured to optimize outcomes, i.e. profit. If tech appears that can produce the same output more cheaply or efficiently, companies will, as we've seen so far, most definitely explore replacing people with it. I mean, take a look at corporate posture around LLMs.

I do get the point you're making about knowledge, domain understanding, and solving real problems, because those things clearly matter in practice. But from the company's POV, they matter only because they help produce better code/systems, which are still the concrete artifact that embodies the business logic and operations: a symbolic model of the business itself, encoded in software. So the framing of devs as human resources that produce code, and code as the primary value, correctly describes how many businesses see the relationship. And I don't really see the equivalence between SWE-ing in a business context and sports.
> From the company’s POV employees function as cogs in a larger system whose purpose is to generate value considering that businesses are structured to optimize outcomes i.e. Profit. If tech appears that can produce the same output more cheaply or efficiently, companies will most definitely as we've seen so far explore replacing people with it.
Businesses wish this were the case, and many will even say it or start to believe it. But it doesn't bear out in practice.
Think about it this way: engineers are expensive, so a company is going to want to have as few of them as possible doing as much work as possible. Long before LLMs came along, there were many rounds of "replace expensive engineers" fads.
Visual programming was going to destroy the industry, where any idiot could drag and drop a few boxes and put together software. Turns out that didn't work out and now visual programming is all but dead. Then we had consultants and software consultancies. Why keep engineers on staff and have to deal with benefits and HR functions when you can hire consultants for just long enough to get the job done and end their contracts. Then we had offshoring. Why hire expensive developers in markets like California when you can hire far cheaper engineers abroad in a country with lower wages and laxer employment law. (It's not a quality thing either, many of these engineers are unquestionably excellent.)
Or, think about what happens when software companies get acquired. It's almost unheard of for the acquiring company to lay off all of the engineering staff from the acquired company right away; if anything it's the opposite, with vesting incentives to convince engineers to stay.
If all that mattered was the code and the systems, and people were cogs that produced code that businesses wanted to optimise, then none of these actions make sense. You'd see companies offshore and use whichever consultancy does "good enough" work as cheaply as possible. You'd see engineers from acquisitions laid off immediately, replaced with cheaper staff as fast as possible.
There are businesses that operate like this; it happens all the time. But all of the most successful and profitable tech companies in the world don't do this. Why?
>If all that mattered was the code and the systems, and people were cogs that produced code that businesses wanted to optimise, then none of these actions make sense.
No, no... Of course the code isn't all that matters. My framing was about how organizations model the work SWEs do economically.
>Visual programming was going to destroy the industry, where any idiot could drag and drop a few boxes and put together software. Turns out that didn't work out and now visual programming is all but dead. Then we had consultants and software consultancies. Why keep engineers on staff and have to deal with benefits and HR functions when you can hire consultants for just long enough to get the job done and end their contracts. Then we had offshoring. Why hire expensive developers in markets like California when you can hire far cheaper engineers abroad in a country with lower wages and laxer employment law. (It's not a quality thing either, many of these engineers are unquestionably excellent.)
It seems like we're agreeing along the same tangent. With this argument, you're admitting that businesses do see SWEs as cogs in a wheel and seasonally try to replace them... The seasonality of "make the engineer replaceable" fads really does point to businesses trying to simplify what devs actually do, since most of what they measure is working-code output, because it's a tangible artifact (this is what the OP meant by implying you're a working-code producer at work). Knowledge, judgment, architectural intuition, and domain understanding are harder to quantify, so they disappear from the model even though they ARE the real constraint. So for the record, I do agree with you that code isn't everything, but I maintain that SWEs are modelled on working code produced, even in the more successful companies that invest in domain knowledge and long-term system understanding.
Metrics, performance reviews, sprint velocity, delivery timelines: all orbit around observable artifacts because those are what management systems can actually track objectively and equitably. It's a handy abstraction, just like looking only at the ins/outs of a logic gate as opposed to looking at the implementation and wiring. Of course a NOT gate would get upset over being called a "bit flipper"; that's not all that physically exists, but from our POV it doesn't exactly matter. This applies to human labor too, even if it's a leaky abstraction.
> you're admitting that businesses do see SWE as cogs in a wheel and seasonally try to replace them...
Not quite. I agree that companies will try to do this, but every company that has tried to treat engineering staff as replaceable units of person-hours has failed.
> Metrics, performance reviews, sprint velocity, delivery timelines, all orbit around observable artifacts because those are what management systems can actually track objectively and equitably. It's a handy abstraction just like looking only at the ins/outs of a logic gate as opposed to looking at the implementation and wiring.
Yes, and these metrics are, usually, worthless.
It's not that companies and managers will not try to replace engineers with AI. I'm sure they will. I'm sure many will be laid off because "AI does it cheaper now".
My point is that companies that have gone down this route in the past have failed, and AI is no different. Companies that lean strongly into AI as a workforce replacement will fail too.
lol but you have to first "view" something as replaceable before you try to replace it, no? So companies DO see SWEs as cogs, and try but fail to actually make them replaceable, yes?
It's not even as simple as "views as replaceable". It's pure economics. It's someone looking at a spreadsheet going "We spent a lot of money on SWE salaries, our financial results look better if we fire some of them. Is there a cheaper option?"
From that perspective, yes, some management view SWEs as replaceable. My argument is that all attempts to actually implement that have failed to date, and the most financially successful companies are staffed by upper management who know that removing much of the SWE staff would doom the company in the medium term.
It's a move of either desperation ("we'll go bankrupt if we don't do this"), or short-sightedness ("if I cut 40% of headcount, our P&L will be better, which will result in better quarterly results, which is likely to increase share price, which gives me a bigger performance bonus. Who cares what happens after that."), or a lack of experience in managing software companies and watching this play out before.
AI, even if it lives up to the hype, is no different.
Listen, if you truly want help, you've made the first step by realising what's wrong, but you won't get help here.
This community is obsessively pro-AI. Asking here is the equivalent of asking the guy who has sat at the slot machine next to you for the past three hours if he thinks you have a gambling problem. Of course he's going to say "no" or try to justify it, to do otherwise would be to admit to himself that he has a problem.
I don't have advice for you, other than to look up what gambling, drug, or alcohol addicts do. The path to recovery for all addiction is long and painful, but it can be done. Good luck.
> AI is actually better getting those built as long as you clean it up afterwards
I've never seen a quick PoC get cleaned up. Not once.
I'm sure it happens sometimes, but it's very rare in the industry. The reality is that a PoC usually becomes "good enough" and gets moved into production with only the most perfunctory of cleanup.
One trick for avoiding this is to use artifacts in the PoC that no self-respecting developer would ever allow in production. I use html tables in PoCs because front-end devs hate them - with old-school properties like cellpadding that I know will get replaced.
I also name everything DEMO__ so at least they'll have to go through the exercise of renaming it. Although I've had cases where they don't even do that lol. But at least then you know who's totally worthless.
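The two tricks above can be sketched together. This is a hypothetical illustration (the function name and attribute values are mine, not from the post): demo-only identifiers carry a DEMO__ prefix, and the markup uses long-deprecated presentational attributes like cellpadding that no front-end dev would let ship, so the scaffolding is easy to grep for and embarrassing to leave in.

```python
def DEMO__render_user_table(users):
    """Render users as an old-school HTML table, flagged as demo scaffolding."""
    rows = "".join(
        f"<tr><td>{u['name']}</td><td>{u['email']}</td></tr>" for u in users
    )
    # cellpadding/border are deprecated presentational HTML attributes --
    # their presence (plus the DEMO__ id) marks this output as PoC-only.
    return f'<table border="1" cellpadding="4" id="DEMO__users">{rows}</table>'

html = DEMO__render_user_table([{"name": "Ada", "email": "ada@example.com"}])
print(html)
```

A pre-launch `grep -r DEMO__` (or a CI check for `cellpadding`) then tells you exactly how much of the PoC was actually cleaned up.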
I've been in this game a long time and I've seen a lot, but this AI hype cycle is exhausting. Like no technology before it I've watched extremely smart and capable engineers fall into AI like it's a cult. I've had colleagues and friends I've known for years drop head first into this shit.
At first I was interested in the tech, I deep-dived into it. Understood as much as I could. I understand how an LLM works and what it can and can't do. So, I realised pretty quickly that their use is limited. I figured it would blow over in a few years, the real use cases would be weeded out, and we'd all move on to the next thing like normal.
What I didn't account for is how addictive this technology is. The moment something "feels" like a person it's ascribed magical qualities, and people fall for it. Anyone can, doesn't matter how smart you are.
For the past six months I've felt nothing beyond a deep melancholic sadness. Not that my industry is changing, it isn't. Not really. These models will not replace people, and anyone who thinks they can is either trying to sell you something or is delusional. The readjustment and the end of the hype cycle will come eventually. But, I fear many people will never be able to let it go. I'm saddened that we're going to lose a generation of brilliant people to fiddling with token predictors, and many of them will never recover from it.
AI will set the industry back twenty years. Not because we will be replaced, but because so many people will be dragged into psychosis and addiction or waste decades chasing the future on a lie.
And there's nothing any of us can do about it now.
No. Software is being centralized. If the snake oil AI companies are selling about the coming agentic age were true, then the end result is not "anyone can produce software" it is "anyone stupid enough to rent the ability to run their business from an AI vendor can produce software".
> Coding agents are here to stay, and you’re a fool if you don’t use them.
Why would they be here to stay? The crux of the author's argument is that using them is detrimental in the long term. The correct response to that is not a lukewarm response of "maybe do some coding now and again", it is "don't use tools that make you worse".
> LLMs are clearly a massive productivity boost for software developers, and the value of humans manually translating intent into lines of code is rapidly depreciating.
This take is so divorced from reality it's hard to take any of this seriously. The evidence continues to show that LLMs for coding only make you feel more productive, while destroying productivity and eroding your ability to learn.
Re productivity: the METR study is seriously flawed overall, and:
1. if you disaggregate the highly aggregated data, it shows that the slowdown was highly dependent on task type: tasks that required using documentation, or novel tasks, were possibly sped up, whereas tasks the developers were very experienced with were slowed down, which actually matched the developers' own reports
2. developers were asked to estimate time beforehand per-task, but estimate whether they were sped up or slowed down only once, afterwards, so you're not really measuring the same thing
3. There were no rules about which AI to use, how to use it, or how much to use it, so it's hard to draw a clear conclusion
4. Most participants didn't have much experience with the AI tools they used (just prompting chatbots), and the one that did had a big productivity boost
5. It isn't an RCT.
See [1] for all.
The Anthropic study used a task far too short (30 minutes) to really measure productivity. Furthermore, the AI users were using chatbots and spent the vast majority of their time manually retyping AI outputs; if you ignore that time, AI users were 25% faster[2]. So it was not a good study for judging productivity, and the way people quote it is deeply misleading.
Re learning: the Anthropic study shows that how you use AI massively changes whether you learn and how well you learn; some of the best scoring subjects in that study were ones who had the AI do the work for them, but then explain it afterward[3].
> We observed that participants who had access to the AI assistant were more likely to introduce security vulnerabilities for the majority of programming tasks, yet were also more likely to rate their insecure answers as secure compared to those in our control group.
That's the conclusion you get when you sit on the boards of 20 companies where all the CEOs are telling you the same thing, but you don't understand that you are all just selling the same golden shovel to each other. Obviously this can also be backed by their own experience: 100% of code is written by AI, because the last time they actually wrote code was in 2010.