
Yes, both the article and GP are making that exact point about it mattering from a customer's perspective.

I mostly agree with this. Part of the confusion with the discourse around AI is the fact that "software engineering" can refer to tons of different things. A Next.js app is pretty different from a Kubernetes operator, which is pretty different from a compiler, etc.

I've worked on a project that went over the complexity cliff before LLM coding even existed. It can get pretty hairy when you already have well-established customers with long-term use cases that absolutely cannot be broken, but those use cases are supported by a Gordian Knot of tech debt that practically cannot be improved without breaking something. It's not about a single bug that an LLM (or human) might introduce. It's about a complete breakdown in velocity and/or reliability, but the product is very mature and still makes money, so abandoning it and starting over is not considered realistic. Eager uptake of tech debt helped fuel the product's rise to popularity, but ultimately turned it into a dead end. It's a tough balancing act. I think a lot of LLM-generated platforms will eventually fall into this trap, but it will take many years.


That's not how bonds work

The graph should really use a log scale. At this point, a 50% drop in value would look tiny on the current linear scale.

Not if the tax system is well designed

Car safety is a bad counterexample because the risk is often externalized, i.e., your car can easily hurt a total stranger, whereas the consequences of your choice in laptop are strictly personal. And as GP stated, regulating this sort of thing would definitely force a particular trade-off on everyone. A lot of people would be pissed to have MacBooks with worse "build quality" even if they were more repairable. Having a choice is better.

I disagree. The lack of repairability has external costs not borne by the purchaser or the manufacturer -- more toxic trash unnecessarily added to the environment.

Forcing a particular trade-off on everyone is entirely the point. It's the point of car safety; it's also the point of minimum warranties, electrical emissions regulations, safety standards, etc.


Does this also mean only using "standard" parts? Or does the manufacturer have to over-produce the parts for, let's say, 7 years, and then warehouse and ship them, probably multiple times? Or keep a low-rate production line running for 7 years? What happens to the parts that don't get used? Are they scrapped?

That "what if" cost is going to be built into the cost of the laptop. Repairability doesn't always keep the cost low. The purchaser will definitely have to foot the cost otherwise it isn't sustainable.


> Does this also mean only using "standard" parts? Or does the manufacturer have to over-produce the parts for, let's say, 7 years, and then warehouse and ship them, probably multiple times? Or keep a low-rate production line running for 7 years? What happens to the parts that don't get used? Are they scrapped?

None of that is relevant in this context: the parts are available, but the laptop is designed and built such that the keyboard alone cannot be replaced.[1]

[1] Not sure if this is possible on that specific laptop, but with a steady hand, a tiny drill, and maybe a magnifying glass, you can perhaps drill out the rivets, replace the keyboard, and then either re-rivet it or tap a very tiny thread into the laptop and use screws.


The laptop is definitely designed in a way that makes the keyboard extremely hard to replace. It took me like 5 hours across 2 days. The rivets are not even the worst part: I used a tiny drill and carefully glued in the replacement keyboard using phone screen glue (B7000) between the keys. (The glue needs to go both on the frame and on the keyboard, as there is a gap that needs to be bridged.) Since there are screws along 3 of its edges, I deemed it good enough. Drilling and tapping, or riveting, would have been extra painful.

What makes the repair more complicated: 1) You need to take out basically everything to get to the keyboard. There are many different screws; luckily iFixit has a disassembly guide with their sizes. Still, it was a bit painful to reassemble. 2) One of the things you need to take out, or at least lift, is the glued-in battery - this took a lot of careful prying with a thin plastic sheet and dousing it in IPA. 3) The backlight is glued onto the case in an extremely fragile way, so it needs to be replaced with the keyboard or it will probably look uneven after the repair. (I reused the old one as I don't mind it, but still, it could just have been glued to the keyboard itself and it would be easier to repair.)


> Does this also mean only using "standard" parts? Or does the manufacturer have to over-produce the parts for, let's say, 7 years

Why not? I don't understand how it's legal for manufacturers to produce absolute trash that can't be repaired and will just end up in a landfill. I think 7 years is far from enough, but because computers evolve quickly, maybe 15 years is OK. For the rest of electro-mechanical goods, 50 years should be the baseline.

If a car or fridge from 50 years ago is still working with proper maintenance, that should be the minimum to be expected from products released today.


Repairability definitely doesn't keep costs low. If it were cheaper and easier, it wouldn't have to be regulated. As for supply chain management, companies that get that equation right are going to benefit. Which is exactly how it should be.

We define the rules of the game and companies that can best implement those rules will succeed. That is capitalism.


It won’t self-resolve, because consumers don’t fully factor in every detail while buying, and they often don’t get such granular choice anyway.

It’s easier and more profitable for companies to make a product that catastrophically fails right around when the new model is out. So that’s what they do. Until now, when the EU is reeling them back into line.


It's much more effective and economically efficient to deal with externalized pollution costs with deposits to incentivize proper disposal.

A ton of normal users will simply never bother to repair their own laptops no matter how easy it is, but you don't even have to recycle your own bottles and cans to see the effectiveness of bottle deposits. Someone will usually come and recycle them for you in any big city.


> It's much more effective and economically efficient to deal with externalized pollution costs with deposits to incentivize proper disposal.

Or to just mandate devices that don't need to be disposed of so often.

> A ton of normal users will simply never bother to repair their own laptops no matter how easy it is

Doesn't matter, because simplicity contributes directly to price, and when you can get your existing device fixed for less than the cost of a new one, you likely will do it.


I already pay a deposit and "recycle" all my electronics. And some recycled electronics are already repaired and repurposed. If that was easier, more electronics would get a second chance at life.

Right now, if you have two broken MacBook Neos, one with a broken motherboard and the other with a broken screen, you can make one working MacBook Neo without even needing to solder anything, in just the time it takes to disassemble both and reassemble one (which has been demonstrated in minutes).


> A lot of people would be pissed to have MacBooks with worse "build quality" even if they were more repairable.

It is not a given that being repairable results in worse build quality.


It is a given: if someone could have made a superior product in the last 15 years, i.e., a more repairable laptop with higher build quality, they would have.

> It is a given: if someone could have made a superior product in the last 15 years, i.e., a more repairable laptop with higher build quality, they would have.

Most of the PC competitors of the last 15 years have struggled to even come close to achieving similar build quality.

I'm not sure who this mythical competitor could be, who is supposed to not only match unibody aluminium MacBook build quality, but also solve repairability, and come in with a final product that is cheaper?


It kind of sounds like you are saying it is impossible to improve on the current state of the world.

That if it was possible to improve things, someone would have already done it. And they haven’t, so it must not be possible.

That feels a bit extreme… Maybe I’m misunderstanding?


No, it is certainly possible to come up with an innovation that allows progress.

But the tone I get from discussions about repairability and performance is that it would be trivial to make the device, if only businesses wanted to.

However, given the fact that it hasn’t happened yet from a variety of alternative manufacturers, the probability seems very low that the ideal device is possible with current technology at a price that is viable.

Basically, it is a competitive market (or was), and what won out was what was possible. Barring some leap in technology, it is unrealistic to assume we can do better without suffering tradeoffs.


A lot of the recent car safety features are cameras and ADAS, which make things safer for pedestrians. The problem is they make the car so expensive that no one can afford to buy or repair it. There need to be some standards to drive down the cost.

Do you have a source for the cameras and ADAS driving up the cost of the cars dramatically?

The €14k Dacia Sandero ships with camera-assisted emergency braking and lane assist. By the time you get up to a €24k MG 4, you get full level 2 driving. These don't seem like very high price thresholds.


https://www.kbb.com/car-news/whats-making-car-repair-so-expe...

Cars have much longer lifecycles than the repurposed consumer technology in the ADAS. A camera module is cheap, but a camera module for this particular make/model/year is outrageously expensive, if not unobtainium, 10 years later. There's the famous F-150 story where the tail light housing with blind spot monitoring cost $5k.


>> your car can easily hurt a total stranger whereas the consequences of your choice in laptop are strictly personal.

You know that safety for pedestrians is also a very tightly regulated car safety category, right? Obviously, there's not much that can be done if you get hit by a car going 70mph, but the fact that most people should survive a 30mph impact with a modern car is mostly thanks to regulations requiring crumple zones specifically designed to protect pedestrians in a collision. And yeah, there are huge trade-offs - I imagine people would generally prefer a car that doesn't need incredibly expensive repairs after a minor collision because everything at the front just crumpled, but then it would be guaranteed to cut off the legs of any person hit - it's a trade-off.


Not in the US. Specific pedestrian safety features are not included in cars sold there due to lack of regulation. NHTSA was planning an FMVSS rule modelled after ECE R127, then the administration changed and there has been no progress since...

Lack of regulation resulting in worse outcomes is also a data point for regulation being able to solve problems.

Well yes, which is why most American cars are not approved for sale over here.

It would be trivial to limit a car’s speed in residential and urban areas based on GPS, and that would dramatically decrease risk to people outside of cars.

Or mandate in-car cameras that record the driver to a black box, to determine whether the driver’s negligence caused harm to others. Also a cheap implementation that would immediately make drivers more attentive.


>> It would be trivial to limit a car’s speed in residential and urban areas based on GPS, and that would dramatically decrease risk to people outside of cars.

I only partially agree. As in - yes, I agree in principle, but I don't agree it would be trivial.

My sister had an insurance policy with a black box, where everything she did in the car was recorded. And on her drive to work, she would always get a threatening email saying "we've recorded you going 70mph in a 20mph zone; if this continues we will cancel your policy". We had to ring them up and demand the GPS trace, and guess what - at one point she was driving on a motorway that passes above a 20mph road, but the system probably just did "what is the speed limit at X/Y coordinates" and was getting 20mph for the nearest road. We had to do this several times while she had the policy.

My own Volvo XC60 frequently tells me I'm going over the speed limit as it thinks the road I'm on has a 50mph limit when in fact it's 70, and in another place it thinks it's 30 when in fact it's also 70.

Not to mention that the speeds entered in Google Maps are often just wrong and take forever to update. And it's funny when people like Harry Metcalfe say that every new car he tests insists that his own private drive has a 20mph limit when obviously there is none. Imagine if you couldn't turn that off!

So yeah, very easy to implement (and it's a great idea!), but in practice it's one of those "looks easy on paper, but in reality it's super hard to do reliably" things.


I read the headline and my first thought was: seriously, that's it? Surely this is one of the least concerning things about the administration.

Generating big chunks of code is rarely what I want from an agent. They really shine for stuff like combing through logs or scanning dozens of source files to explain a test failure. Which benchmark covers that? I want the debugging benchmark that tests mastery of build systems, CLIs, etc.

I agree. Also good for small changes that need to be applied consistently across an entire codebase.

I recently refactored our whole app from hard deletes to soft deletes. There are obviously various ways to skin this particular cat, but the way I chose needed all our deletions updated and also needed queries updating to exclude soft deleted rows, except in specific circumstances (e.g., admins restoring accidentally deleted data).

Of course, this is not hard to do manually, but it is a bloody chore and tends toward error-prone. But the agent made short work of it, for which I was very grateful.
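
To give a feel for the shape of the change, here's a minimal Rails-style sketch - model and column names are invented for illustration, not our actual schema:

    # Hypothetical model for illustration.
    class Document < ApplicationRecord
      # Explicit scopes rather than a default_scope, so each
      # call site opts in (or out) deliberately.
      scope :kept,    -> { where(deleted_at: nil) }
      scope :deleted, -> { where.not(deleted_at: nil) }

      # Replaces destroy at the call sites we updated.
      def soft_delete!
        update!(deleted_at: Time.current)
      end

      def restore!
        update!(deleted_at: nil)
      end
    end

    # Typical call-site updates:
    Document.kept.where(owner: user)     # normal queries now exclude soft-deleted rows
    Document.deleted.find(id).restore!   # e.g., admin restoring accidentally deleted data

The mechanical part - finding every deletion and every query that needed the new scope - is exactly what the agent chewed through.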


Do you not end up breaking half the value of referential integrity doing it that way (e.g. you had to update all the queries but now you have a sharp edge in that all future queries need to remember to be soft delete aware. Not a blocker for sure, just a sharp edge).

You know your system better than me, a random commenter on a website, for sure :-D Your comment just shocked me out of my daze enough for my brain to say "but I always move the record to another table rather than soft delete", and I felt compelled to give an unsolicited and likely wrong opinion.


Yeah, I did consider moving records to shadow tables, but - because of the nature of our data - it requires moving a lot of child records as well, so it's quite a lot of additional churn in WAL, and the same for restore. And this approach has its own challenges with referential integrity.

More than that, though: lots of queries for reporting and the like suddenly need to use JOINs. Same for admin use cases where we want them to be able to see archived and live data in a unified view. The conclusion I came to is that it doesn't really eliminate complexity for us: it just moves it elsewhere.

Totally valid approach though. I'd also considered different views for live versus archived (or live+archived) data. Again, it solves some issues, but moves complexity elsewhere.

The other key point: it's a Ruby on Rails system so the moment you start doing funky stuff with separate tables or views, whilst it is doable, you lose a lot of the benefits of Active Record and end up having to do a lot more manual lifting. So, again, this sort of played against the alternatives.

As I say, not to diss other approaches: in a different situation I might have chosen one of them.

My conclusion - not for the first time - is that soft delete obviously adds some level of irreducible complexity to an application or system versus hard delete no matter how you do it. Whether or not that extra complexity is worth it very much depends on the application and your user/customer base.

For some people, just the ability to restore deleted rows from backup would be enough - and in other cases it's been enough for me - but that is always a bit of a faff so not a great fit if you're optimising for minimal support overhead and rapid turnaround of any issues that do arise.


Thanks for taking the time to write such a high quality reply; this is something I've wondered about for a long time and I appreciate the thought and detail you've shared here. :)

No worries - I'm glad it's helpful. Like anything, it's incredibly context specific, and you're always weighing up trade offs that may or may not turn out to be valid over the long term based on the best information you have right now.

I move the record to another _index_, generally.

It depends whether you reliably control all the DB client code, of course.


This. Make sure the 'active' flag (or deleted_at timestamp) is part of most indexes and you're probably going to see very small impacts on reads.

It then turns into a slowly-growing problem if you never ever clean up the soft-deleted records, but just being able to gain auditability nearly immediately is usually well worth kicking the can down the road.
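
On Postgres you can go a step further with a partial index, so soft-deleted rows never enter the index at all. A sketch in Rails migration terms (table and column names are hypothetical):

    class AddKeptDocumentsIndex < ActiveRecord::Migration[7.0]
      def change
        # Partial index: only live rows are indexed, so the index
        # stays small even as soft-deleted rows accumulate, and
        # queries filtering on deleted_at IS NULL can use it.
        add_index :documents, :owner_id, where: "deleted_at IS NULL"
      end
    end

That also softens the slowly-growing problem a bit: the dead rows still take disk, but they stop weighing on your hot read paths.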


It must be something incredibly simple that you're making out to be more complicated than it actually is; I've never seen an LLM do these things well.

This is what gives me the warm fuzzies about the HN community: people jumping to wild conclusions about your domain and systems based on a 4 sentence comment. /s

The thing you'll start to notice is this happens A LOT on every subject.

HN tends to think of itself as smarter than the average for every topic. But it turns out there is a lot of bad and factually wrong information in every thread.


Yeah, I know, and I know I've been guilty of it myself at times. It's a trap that's too easy to fall into.

Something about aughts dev culture as well: I remember it being really common back then. Everybody had to appear smart by second-guessing everything everyone else was doing. Exhausting.


You probably want to look at SWE-bench Pro or Terminal-Bench 2. They cover these longer-horizon tasks that need more than just writing a bit of code in one file. And SWE-bench Pro in particular is not yet saturated like many other common benchmarks. Normal SWE-bench and LCB are not really useful anymore because they are already being gamed hard so developers can quote high numbers in a repo README or press release.

Build systems are tested by CompileBench (Quesma's benchmark).

Disclaimer: I'm the founder.


Generating big chunks of code is all I do, all day.

I don't write code by hand any more, neither at work, nor for side projects.

I work mostly in Rust and TypeScript at a developer tools company.


[flagged]


I have never read a snide comment on this site that I've been more repulsed by.

I think because it's so specifically sharpened to stab at the software developer, my compatriot, one of the primary populations here, rather than just being an overall shitty human insult -- and timed to do so when the person opens up in an honest dialogue about what they're doing.

But good news: every large software house I've talked to in the past two years is touching AI. As tragic as that is for a multitude of good reasons surrounding the workforce/copyright/IP/human-laziness/loss-of-skill/etc., that means imric is going to be outside of software, by their own rules, in totality, in just a few short years!

Happy days!


Man, I've been waiting for this turnaround for years now! This site would have lauded whatever he wrote for the past 3 years, and it's always been so disheartening to see it accepted just because it was anti-AI. Seeing the site get with the times is wonderful.

[flagged]


You only hurt yourself with that attitude. AI might take your job.

> You only hurt yourself with that attitude.

Funny, others seem more hurt by it.

> AI might take your job.

I'm not the one "grieving the loss of his career". :)


We have the quietest on-call rotation of any company I've ever worked at.

We have a high standard for code review, static verification, and tests.

The fact that the code isn't hand-rolled artisanal code, and is generated by AI now, has so far turned out to have no impact on product quality or bugs reported.


Ah, that's great, sounds like the ideal working environment.

So, which company is it again?


What company is it, and what tools are you working with?

Tbf, as long as you really know what you're doing and have the sense to avoid falling into a spaghetti code trap, generating bigger chunks of code absolutely works and should be done. The pitfall happens when

(a) the dev has no idea what the agent is doing, or (b) the dev gives overly broad instructions.

If you give it specific enough tasks (not to the point where it's writing singular functions) but a general class description, you're on a good track.


Why? Because writing code is the only measure of quality when producing tools? What about unit and integration tests, UX research, and performance tests?

I agree that for many applications the code written by an LLM can be good enough, as proven by the many commercial applications that contain even worse code.

However, anyone who uses an LLM must remain aware of the limitations of this method.

There are many features of a program that cannot be tested exhaustively and which must be guaranteed by its design. When you do not understand very well the structure of a program it may be difficult to decide what must be tested.

With performance, the confidence in what an LLM produces is even lower, because you are unlikely to know whether you have really reached hardware-limited performance. Obtaining better performance than a previously existing program does not prove anything, because most existing programs likely perform far below what is possible.

In many cases you just want a performance good enough, not the best attainable, so you can be content with your LLM-generated program. But you must not fool yourself by believing that this is really the best that can be done.


Oh yes! I now let agents build my environments via kubectl/helm and let them debug issues.

It's amazing! Saves hours of work!

I create the basic Helm config, settings, etc., and when there is a conflict or something isn't working, I let an agent fix it!


Create it!

I've been speaking French since pre-school (albeit in North America mostly) and to me é always sounds more like the English short i (as in "tip"). I'm becoming increasingly convinced that everybody on Earth but me is wrong about it.

Do you happen to be from the western US or Canada? They tend to lower the /ɪ/ monophthong (i of tip, pit, sit, etc.) there, making it sound pretty close to /e/ (French é, German eh). It's one of those things that, combined with regionalisms and other accent features, give away where you grew up :) I noticed a lot of Londoners do this too, though this is just my experience.

Nope, Northeast. And my French teachers spoke with a Parisian accent.


They're extremely close! /ɪ/ literally sits next to /e/ on the vowel chart https://en.wikipedia.org/wiki/Vowel_diagram#/media/File%3AIP...

It makes a lot of sense. I would also try to get my customers to do work for me if I were confident they would never churn.

I will do the work for them (typically paid for by my employer) iff I can expect them to fix it.

Blackbox debugging is a PITA, which is part of why I prefer open source, but it is what it is... If something is broken, and I can get it fixed by putting in the time to write a good report, etc., and they fix the thing, then I'll do it.

But if they don't fix the stuff, I have no shortage of things to fix myself.


