You do not write Rust, yet you make blanket, absurd claims about Rust being overcomplicated or not solving your problems. The comparison of Rust to PHP is also ridiculous for so many reasons that it may not be worth discussing; it seems like bad faith from the start.
You're missing a very fundamental point here, and it's something I usually find with long-time C programmers. C is a great but old language, with decades of accumulated context, tooling, and library knowledge that make getting into writing C substantially more difficult than, say, Rust. We are still humans, and Rust isn't only trying to improve on C's memory handling from a technical point of view, but from a programmer's one. Your comment about solving memory bugs is predicated on perfect human usage of C, i.e. never writing the bug in the first place, which is precisely one of the many problems Rust is trying to solve.
Is this another AI article? What it says about Rust has been said over and over again, and it brings nothing new to the table. These pieces also always seem to be written from a place of ignorance. If you're writing "high level Rust", the cost of clone or Arc or whatever is negligible. If you're writing an HTTP service, your clone will be so fast it makes literally zero difference in the scope of your IO-bound service.
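To make that concrete, here's a minimal sketch (the handler and state names are made up, not from any real service) of the kind of clone being argued about:

```rust
// Hypothetical IO-bound request handler: the kind of clone the articles fret over.
#[derive(Clone)]
struct AppState {
    service_name: String,
}

fn handle_request(state: &AppState) -> String {
    // Cloning a small String costs nanoseconds; the network round trip the
    // handler exists to serve costs milliseconds. The clone is noise.
    let name = state.service_name.clone();
    format!("hello from {}", name)
}

fn main() {
    let state = AppState { service_name: "api".to_string() };
    println!("{}", handle_request(&state));
}
```

The clone sidesteps a borrow-checker fight entirely, and for an IO-bound service the profiler will never notice it.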
Another observation is developer experience. Again, have you written _any_ Rust? I would argue that the reason Rust is such a joy to use is that the compile-time errors are amazing, the way docs are handled is amazing, and so on. I know the ecosystem for something like TypeScript is worlds better, but have you ever tried to _really_ understand what shenanigans are going on behind the scenes in some random hook from, say, React? The docs are dismal and sparse and make assumptions about what they think you should know. Rust? You go to the generated docs, read the overview of what they think you should know, and if that isn't enough, you can click on "source" to dive deeper. What more could I possibly want?
Perhaps I'm just triggered. The discussion around Rust always seems fad- or hype-driven and almost always has very little to do with how it actually is to use. If I have to read about compile times again, I'm going to scream. Of all the languages I've used, it is one of the best when a mature project needs new features or refactors. This is mentioned in some articles, but mostly we hear about "velocity" from some one-dimensional perspective, usually someone from the web space, where it's arguable whether the person should even bother with Rust in the first place.
Apologies for the rant, but at this point Rust as a buzzword for article engagement has become tiring, because it seems clear to me that these people aren't _actually_ interested in Rust at all. Who gives a shit what someone on Reddit thinks of your use of clone??
What triggered me was the proposed Arc<dyn Trait> as a quick fix. I was high-level Rusting along the learning curve, as the article describes, until I stumbled upon dyn-compatible types and object safety.
It is too easy to trap yourself by sprinkling Sized bounds and derive(Hash) on top of your neat little type hierarchy, only to realize that the tight abstraction is tainted and you can't use dyn anymore. As a follow-up trigger: this was when I learned derive macros :)
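For anyone who hasn't hit this wall yet, here's a minimal sketch (the trait and types are made up for illustration) of a trait that stays dyn-compatible, with a note on exactly what would taint it:

```rust
use std::sync::Arc;

// A dyn-compatible (object-safe) trait: no `Sized` supertrait,
// no generic methods, nothing returning `Self` by value.
trait Shape {
    fn area(&self) -> f64;
}

struct Circle { r: f64 }
struct Square { s: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 { std::f64::consts::PI * self.r * self.r }
}
impl Shape for Square {
    fn area(&self) -> f64 { self.s * self.s }
}

// The "quick fix" pattern: shared, heterogeneous trait objects.
fn total_area(shapes: &[Arc<dyn Shape>]) -> f64 {
    shapes.iter().map(|s| s.area()).sum()
}

fn main() {
    let shapes: Vec<Arc<dyn Shape>> = vec![
        Arc::new(Circle { r: 1.0 }),
        Arc::new(Square { s: 2.0 }),
    ];
    println!("total area: {:.2}", total_area(&shapes));

    // Declaring `trait Shape: Sized`, or adding a generic method like
    // `fn scaled<T: Into<f64>>(&self, k: T) -> f64`, would make the trait
    // no longer dyn-compatible, and every `dyn Shape` above would stop
    // compiling -- the "tainted abstraction" trap described here.
}
```

The frustrating part is that the trait compiles fine with those additions; the error only surfaces at the first `dyn Shape` use site, often far from the trait definition.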
5% of total users is a substantial number of consumers, and some would argue a non-trivial amount of market share to ignore when making a product.
It also goes without saying that the more adoption we see, the better these alternatives will get, as consumers and businesses come to view Linux as worth the investment.
Exactly, 5% isn't much, but it's enough to compel developers to make sure their game runs well through Proton, which is all that's necessary these days. Ports aren't really worth it, especially if they aren't going to be well maintained in the long run (cough, Valve, cough).
Even worse, the AI will supply a mediocre version of the source tailored to someone else's case, while the project gets nothing in return, ultimately choking the open source effort. The article touches on this briefly.
All I post anymore is anti-AI sentiment because it feels like we're in a cycle of blind trust. A lot of FOSS seems cautious about LLMs for a plethora of reasons (quality and ethics among them), but we're a long way from the tools that are supposedly going to replace us being locally runnable. So, until then, we're conceding our agency to Anthropic and whoever else.
Meanwhile, war is breaking out and disrupting already stressed supply chains and manufacturing (Taiwan, for instance, relies heavily on natural gas). Many manufacturers are starting to ditch production of consumer hardware, the very hardware folks ITT want to run their local models on. The vast majority of datacenters haven't been built yet, and those that are being built are missing their targets, with aging GPUs still in boxes and no infrastructure to power them on, all while floating hundreds of billions in debt.
Surely I can't be the only one who sees the issues here? Each topic is hours of "what ifs" and a massive gamble to see if any of it will come together in a way that will be good for anyone who visits HN.
I see a lot less thinking as a result of using LLMs as they are today and I don't see the providers building tools to promote a better way to use them. They are still way too sycophantic.
Yeah, it is wild seeing with my own eyes how bad these tools are in a lot of cases. We do have some vibe coders on our team, but they are basically banned from my current project because they completely ruin the design and nuke throughput. HN would have me believe I'm a Luddite who shouldn't be writing code, however. I truly do not understand how to reconcile this experience, and often it's too complicated a topic to explain to someone who isn't an engineer. AI is the ultimate Dunning-Kruger machine. You cannot fix what you do not know, because you do not know that you did not know.
As you say, I think things are just going to fall apart and we're just going to have to learn the hard way.
No, these tools really are great in a lot of cases. But they still don't have general intelligence or true understanding of anything, so if people use them wrong and rely on the output because it looks good rather than because they verified it, then that's on the people using them.
I mean, that's fine, but then it seems like people at large are not using them "right". I think you'll find that since these tools are convenient and produce a lot of code in terms of lines, verifying goes out the window. Due diligence was hard even before these tools existed.
Oh, I certainly do find it tempting to get lazy with these tools, but I've learned there are side projects where vibe coding is fine, and important codebases that can be improved with LLMs, but not if you just let agents loose on them.
I feel like a crazy person, especially when I read HN. Half or more of the comments on this thread are saying how the game is over for even writing code. Then at my job, I see people break things at a rate I can't personally keep up with. Worse, I hear more and more colleagues talk about mandated AI tooling usage and massive regression rates. My company isn't there yet, but I feel it is around the corner.