Hacker News | hard24's comments

"No one cares about handcrafted artisanal code as long as it meets both functional and non functional requirements"

Speak for yourself. I don't hire people like you.


And guess what? You probably don’t pay as much as I make now either…

Even in late 2023, with the shit show of the current market, I had no trouble getting multiple offers within three weeks just by reaching out to my network and to companies looking for people with my skill set.


I field a small team of experts who are paid upwards of a million GBP in cold, hard cash in London. Not stock. Cash.

You sound like a bozo, I can sniff it through my screen.


This sounds like a place I want to work at.


Indeed. My view as a CEO is: if you are still reviewing the code yourself, what use is it that you can produce a bunch of text at a faster rate?

I'd prefer people wrote good quality code and checked it as they went along... whilst allowing room for other stuff they didn't think of to come to the front. The production process of using LLMs is entirely different, in its current state I don't see the net benefit.

E.g. if you have a very crystalised vision of what you want, why would I want an engineer to use an LLM to write it, when the LLM can't do both raw production and review? Could this change? Sure. But there's no benefit for me personally to shift toward working that way now - I'd rather it came into existence first before I expose myself to incremental risk that affects business operations. I want a comprehensive solution.


You should lay off your engineering team and do it all in Lovable amigo.


Where are you CEO?


At a shitty company. The problem is - you cannot ship a large amount of code quickly in a perfect way. Positioning the problem as "what's the point of generating all this code so fast if I still need a warm body at the end making sure it's OK?" is hilarious.

Don't do that. Just ship it. Yes, good tests, linting, etc will help but if you really believe you don't need humans in the loop at all, at least for the time being, you are fucked.

But go ahead, buy the hype. Your agent swarm can build an operating system in 15 minutes and everything will just work. Cool.


Edit- I disagree with you, didn’t realize you weren’t Op


Also wow you gutted your original comment


Also, when you are writing code yourself, you are implicitly checking it whilst retaining, in the back of your mind, some form of the entire system as a whole.

People seem to gloss over this... As a CEO, if people didn't function like this I'd be awake at night sweating.


That’s the reverse-centaur issue I see: humans are not great at repetitive, nuanced, similar-seeming tasks, so putting the onus on humans to retroactively approve high volumes of critical code has them managing a critical failure mode at their weakest and worst. Automated reviews should be enhancing known good-faith code; manual review of high volumes of superficially sound but subversive code is begging for issues over time.

Which results in the software engineering issue I’m not seeing addressed by the hype: bugs cost tens to hundreds of times their coding cost to resolve if they require internal or external communication to address. Even if everyone has been 10x’ed, the math still strongly favours not making mistakes in the first place.

An LLM workflow that yields 10x an engineer but psychopathically lies and sabotages client facing processes/resources once a quarter is likely a NNPP (net negative producing programmer), once opportunity and volatility costs are factored in.


> Even if everyone has been 10x’ed, the math still strongly favours not making mistakes in the first place

The math depends on the importance of the software. A mistake in a typical CRUD enterprise app with 100 users has zero impact on anything. You will fix it when you have time; the important thing is that the app was delivered in a week a year ago and has been solving some problem ever since. It has already made an enormous profit if you compare it with today’s (yesterday’s?) manual development that would take half a year and cost millions.

A mistake in nuclear reactor control code would be a totally different thing. Whatever time savings you made on coding are irrelevant if they allowed a critical bug to slip through.

Between the two extremes you thus have a whole spectrum of tasks that either benefit or lose from coding with LLMs. And there are more axes than this low-to-high failure cost that affect the math. For example, even a non-important but large app will likely soon degrade into an unmanageable state if developed with too little human intervention, and you will be forced to start from scratch, losing a lot of time.


I have found AI extremely good at finding all those really hard bugs, though. AI is a greater force multiplier on a complex bug than on greenfield code.


Sort of. I work on a system too large for anyone to know the whole thing. Often people who don't know each other do something that breaks the other's work (often just because of the sheer number of different people - most individuals go years between such incidents).


No I’m keeping up with the system as a whole because I’m always working at a system level when I’m using AI instead of worrying about the “how”


No you’re not. The “how” is your job to understand, and if you don’t you’ll end up like the devs in the article.

We as an industry have been able to offload a lot of “how” via deterministic systems built by humans with expert understanding. LLMs give you the illusion of this.


No in my case the “how” is

1. I spoke to sales to find out about the customer

2. I read every line of the contract (SOW)

3. I did the initial requirements gathering over a couple of days with the client - or maybe up to 3 weeks

4. I designed every single bit of AWS architecture and code

5. I did the design review with the client

6. I led the customer acceptance testing

> We as an industry have been able to offload a lot of “how” via deterministic systems built by humans with expert understanding. LLMs

I assure you the mid-level developers or, god forbid, foreign contractors were not “experts” with 30 years of coding experience and, at the time, 8 years of pre-LLM AWS experience. It’s been well over a decade - ironically, since before LLMs - since my responsibility extended only to code I wrote with my own two hands.


Yes, and trusting an LLM here is not a good idea. You know it will make important mistakes.

I’m not saying trusting cheap devs is a good idea either. I do think cheap devs are actually at risk here.


I am not “trusting” either - I’m validating that they meet the functional and non-functional requirements, just as with an LLM. I have never blindly trusted any developer when my neck was the one on the line in front of my CTO/director or a customer.

I didn’t blindly trust the Salesforce consultants either. I also didn’t verify every line of oSql (not a typo) they wrote.


Actually, it's SOQL. I did Salesforce crap for many years.


This is incredibly circular lol...


"So will it turn out that actually writing code was never the time sink in the first place?"

Of course it wasn't! Do you think people can envision the right objects to produce all the time? Yeah... we have a lot of Steve Jobses walking around lol.

As you say, there's 'other stuff' that happens naturally during the production process that add value.


I think as long as having to review code stays around, the 'artistry' of writing code isn't going away.

Think about it - how do you increase the speed at which someone can review code? Well, first it must be attractive to look at - the more attractive it is, the faster you review, understand, and move through the review. Now this won't be the case everywhere - e.g. in outsourced regions the conditions will force people to operate a certain way.

I'm not a SWE by trade; I just try to look at things from the pragmatic standpoint of how orgs actually make incremental progress faster.


The better looking the code, the less effort people will put into reviewing it, precisely because of the ease of reading it - the assumption being that what is beautiful is good. Just as the beautiful facade of a building can hide a cheap structure behind it, the same is true of code. Beauty itself is not a good signal of goodness; in excess it is in effect a rhetorical device that misleads, drawing one's eyes towards itself and away from what lies beneath.

A beautiful building is only as good as the correctness of its foundation, framework, materials, and construction. Those qualities can only be assessed by those with enough expertise to understand their importance. Beauty in its proper place is the output of the intersection between a craftsman and an engineer. Beauty is optional, but it makes life more worth living. The same is true for code - attractive code is optional, but it makes being a SWE more rewarding.


My prediction is that a Concorde-like incident is going to shatter trust and make people rethink their expectations of what LLMs are presently capable of.

Essentially something big has to happen that affects the revenue/trust of a large provider of goods, stemming from LLM-use.

They won't go away entirely. But this idea that they can displace engineers at a high rate will.


Assuming you mean this crash [0], it reads to me more like a confluence of bad events than a big fundamental design flaw in the Therac-25 mold.

I feel the current proliferation of LLMs is going to resemble the asbestos problem: a cheap miracle material, overused all over the place, with slow gradual regret and chronic harms/costs. Although I suppose the "undocumented nasty surprise" aspect would depend on the adoption of local LLMs. If it's a monthly subscription to cloud stuff, people are far less likely to lose track of where the systems are and what they're doing.

[0] https://en.wikipedia.org/wiki/Air_France_Flight_4590


Like bombing a building full of little kids? Oops too late...

