I keep reading folks saying OpenClaw has completely changed their life while posting a picture of 58 mac minis on their desk.
But every single use case I've read so far could be done with a pretty affordable SaaS product, Zapier, Automator (app on a mac that's existed for over a decade), or something simple you could make yourself.
It also feels like people are automating things that don't really need to be automated at all (do you really need to be reminded to make coffee?)
I fully realize this is probably me being a curmudgeon, however, I have yet to see someone make an actual, practical use case for it. (I would genuinely like to know one, I just haven't seen it)
So purely from a hacker perspective, I'm amused at the whining.
Like, a corporation had a weakness you could exploit to get free/cheap thing. Fair game.
Then someone shares the exploit with a bunch of script kiddies, they exploit it to the Nth degree, and the company immediately notices and shuts everyone down.
Like, my dudes, what did you think was going to happen?
You treasure these little tricks, use them cautiously, and only share them sparingly. They can last for years if you carefully fly under the radar, before they're fixed by accident when another system is changed. THEN you share tales of your exploits for fame and internet points.
And instead, you integrate your exploit into hip new thing, share it at scale, write blog posts and short form video content about it, basically launch a DDoS against the service you're exploiting, and then are shocked when the exploit gets patched and whine about your free thing getting taken away?
Managers will be starting to ask for claws in the development flow, claws for automation, etc. Another flashy trend everyone will have to endure because an influencer is hyping the tech. It happened in 2024/2025. Every manager demanding use of "vibe coding", because they bought the lie that is what everyone is doing and is the best thing since sliced bread and whatnot. Karpathy comes up with a new shit to hype, and everyone will jump on the bandwagon. It's exhausting. It's like when there was a new frontend framework every single month and everyone just following the trend. Backbone is good enough. Then Vue. Then react. Then angular. Then svelte. Then SolidJs. Then Astro. Probably now everyone and their mothers will try to come with another abstraction layer on top of llms, then on top of agents, then on top of claws. Like I said, it's exhausting and the ROI of jumping every single fucking trend is becoming really hard to see.
I'm happy for the guy, but am I jealous as well? Well yes, and that's perfectly human.
We have someone who vibe coded software with major security vulnerabilities. This is reported by many folks
We also have someone who vibecoded without reading any of the code. This is self admitted by this person.
We don't know how much of the github stars are bought. We don't know how many twitter followings/tweets are bought.
Then after a bunch of podcasts and interviews, this person gets hired by a big tech company. Would you hire someone who never read any if the code that they've developed? Well, this is what happened here.
In this timeline, I'm not sure I find anything inspiring here. It's telling me that I should rather focus on getting viral/lucky to get a shot at "success". Maybe I should network better to get "successful". I shouldn't be focusing on writing good code or good enough agents. I shouldn't write secure software, instead I should write softwares that can go viral instead. Are companies hiring for vitality or merit these days? What is even happening here?
So am I jealous, yes because this timeline makes no sense as a software engineer. But am I happy for the guy, yeah I also want to make lots of money someday.
There's an odd trend with these sorts of posts where the author claims to have had some transformative change in their workflow brought upon by LLM coding tools, but also seemingly has nothing to show for it. To me, using the most recent ChatGPT Codex (5.3 on "Extra High" reasoning), it's incredibly obvious that while these tools are surprisingly good at doing repetitive or locally-scoped tasks, they immediately fall apart when faced with the types of things that are actually difficult in software development and require non-trivial amounts of guidance and hand-holding to get things right. This can still be useful, but is a far cry from what seems to be the online discourse right now.
As a real world example, I was told to evaluate Claude Code and ChatGPT codex at my current job since my boss had heard about them and wanted to know what it would mean for our operations. Our main environment is a C# and Typescript monorepo with 2 products being developed, and even with a pretty extensive test suite and a nearly 100 line "AGENTS.md" file, all models I tried basically fail or try to shortcut nearly every task I give it, even when using "plan mode" to give it time to come up with a plan before starting. To be fair, I was able to get it to work pretty well after giving it extremely detailed instructions and monitoring the "thinking" output and stopping it when I see something wrong there to correct it, but at that point I felt silly for spending all that effort just driving the bot instead of doing it myself.
It almost feels like this is some "open secret" which we're all pretending isn't the case too, since if it were really as good as a lot of people are saying there should be a massive increase in the number of high quality projects/products being developed. I don't mean to sound dismissive, but I really do feel like I'm going crazy here.
You need to take every comment about AI and mentally put a little bracketed note beside each one noting technical competence.
AI is basically an software development eternal september: it is by definition allowing a bunch of people who are not competent enough to build software without AI to build it. This is, in many ways, a good thing!
The bad thing is that there are a lot of comments and hype that superficially sound like they are coming from your experienced peers being turned to the light, but are actually from people who are not historically your peers, who are now coming into your spaces with enthusiasm for how they got here.
Like on the topic of this article[0], it would be deranged for Apple (or any company with a registered entity that could be sued) to ship an OpenClaw equivalent. It is, and forever will be[1] a massive footgun that you would not want to be legally responsible for people using safely. Apple especially: a company who proudly cares about your privacy and data safety? Anyone with the kind of technical knowledge you'd expect around HN would know that them moving first on this would be bonkers.
But here we are :-)
[0] OP's article is written by someone who wrote code for a few years nearly 20 years ago.
Just because a bunch of people tell you the practice of performing the art form of producing software via handwriting code is over doesn't mean it's over. This form of hyperbole is intended to overwhelm your reason, get you to forget your own expertise, and trick you into engaging with the topic in a fearful manner (literally FOMO). Don't fall for this cheap stunt.
Disclosure, I've not run a website since my health issues began, however, Cloudflare has an AI firewall, Cloudflare is super cheap (also: unsure if the AI firewall is on the free tier, however I would be surprised if it is not). Ignoring the recent drama about a couple incidents they've had (because this would not matter for a personal blog), why not use this instead?
Just curious. Hoping to be able to work on a website again someday, if I ever regain my health/stamina/etc back.
I treat apple ID and google ID like throwaway accounts. I would never trust anything valuable to either. The problem is that it is very hard for "usual people" to do that.
I will also never have an electronic ID. We (Switzerland) were dumb enough to vote yes for it but we are giving away our freedoms eventually.
We need regulations to ensure vendor cannot lock in users and cannot threaten them. Everything should work like if you have your own domain and use email. If your provider go nuts, move your hosting and change your MX and point your local copy to it.
This should not be reserved to some nerd like me, it should be an universal right.
It is already late, but it can be reversed. We need for more sotires like this one to errupt, so people understand.
For me vibecoding has a similar feeling to a big bag of Doritos. It's really fun at first to slap down 10k lines of code in an afternoon knowing this is just an indulgence. I think AI is actually really useful for getting a quick view of some library or feature. Also, you can learn a lot if you approach it the right way. However, every time I do any amount of vibecoding eventually it just transitions into pure lethargy mode; (apparently lethargia is not a word, by the way). Once you eat half a bag of Doritos, are you really not going to eat the second half... do you really want to eat the second half? I don't feel like I'm benefitting as a human just being a QA tester for the AI, constantly shouting that X thing didn't work and Y thing needs to be changed slightly. I think pure vibecode AI use has a difficult to understand efficiency curve, where it's obviously very efficient in the beginning, but over time hard things start to compound such that if you didn't actually form a good understanding of the project, you won't be able to make progress after a while. At that point you ate the whole bag of Doritos, you feel like shit, and you can't get off the couch.
This article is not about vibe coding per se, it's about not having strong boundaries between you as the developer, and your client. You should not be allowing the client to dictate how you work, much less them having the permissions to merge in code. This was true before AI too, where clients might say, do X this way, and you should simply say no, because they are paying for your expertise*. It's like hiring a plumber then trying to tell them how to fix the toilet.
*as an aside, this reminds me of the classic joke where the client asks for the price list for a developer's services:
I do it: $500
I do it, but you watch: $750
I do it, and you help: $1,000
You do it yourself: $5,000
You start it, and you want me to finish it: $10,000
Don't worry that much about 'AI' specifically. LLMs are an impressive piece of technology, but at the end of the day they're just language predictors - and bad ones a lot of the time. They can reassemble and remix what's already been written but with no understanding of it.
It can be an accelerator - it gets extremely common boiler-plate text work out of the way. But it can't replace any job that requires a functioning brain, since LLMs do not have one - nor ever will.
But in the end it doesn't matter. Companies do whatever they can to slash their labor requirements, pay people less, dodge regulations, etc. If not 'AI' it'll just be something else.
It becomes obsolete in literally weeks, and it also doesn't work 80% of the time. Like why write a mcp server for custom tasks when I don't know if the llm is going to reliably call it.
My rule for AI has been steadfast for months (years?) now. I write (myself, not AI because then I spend more time guiding the AI instead of thinking about the problem) documentation for myself (templates, checklist, etc.). I give ai a chance to one-shot it in seconds, if it can't, I am either review my documentation or I just do it manually.
It took me so long to realise this is what's important in enterprise. Uptime isn't important, being able to blame someone else is what's important.
If you're down for 5 minutes a year because one of your employees broke something, that's your fault, and the blame passes down through the CTO.
If you're down for 5 hours a year but this affected other companies too, it's not your fault
From AWS to Crowdstrike - system resilience and uptime isn't the goal. Risk mitigation isn't the goal. Affordability isn't the goal.
When the CEO's buddies all suffer at the same time as he does, it's just an "act of god" and nothing can be done, it's such a complex outcome that even the amazing boffins at aws/google/microsoft/cloudflare/etc can't cope.
If the CEO is down at a different time than the CEO's buddies then it's that Dave/Charlie/Bertie/Alice can't cope and it's the CTO's fault for not outsourcing it.
As someone who likes to see things working, it pisses me off no end, but it's the way of the world, and likely has been whenever the owner and CTO are separate.
I'm actually in the middle of a complete redesign of the AI layer, but there is a POC video linked from the GitHub README that demonstrates the interaction I'm going for using an earlier version. The POS is a very bare-bones system where the "kernel," as it were, is implemented in Rust. There's an MCP atop that to allow the AI and UI layers to drive the POS. Stores may be implemented as extensions that plug into the POS kernel, and that's where language, currency, item databases, and such are defined. The AI cashier knows what items are for sale, how to modify items (in a restaurant context), how to translate from other languages, how to interpret what the customer actually wants, and seamlessly lead the customer through a transaction.
The current code is quite ugly and full of a lot of unfortunate hacks, but it was a good education. The new design puts the AI much more in charge, without as much code-level orchestration. I'm applying a lot of my knowledge from the retail POS and self-service checkout domains to this, as well as learning a lot about applying AI to a "legacy" software domain.
Why do we promote articles like this that have nice graphs and are well written, when they should get a grade 'F' as an actual benchmark study. The way it is presented, a casual reader would think Postgres is 2/3rds the performance of Redis. Good god. He even admits Postgres maxxed out its 2 cores, but Redis was bottlenecked by the HTTP server. We need more of an academic, not a hacker, culture for benchmarks.
My pulse today is just a mediocre rehash of prior conversations I’ve had on the platform.
I tried to ask GPT-5 pro the other day to just pick an ambitious project it wanted to work on, and I’d carry out whatever physical world tasks it needed me to, and all it did was just come up with project plans which were rehashes of my prior projects framed as its own.
I’m rapidly losing interest in all of these tools. It feels like blockchain again in a lot of weird ways. Both will stick around, but fall well short of the tulip mania VCs and tech leaders have pushed.
I’ve long contended that tech has lost any soulful vision of the future, it’s just tactical money making all the way down.
I pay for Kagi for search, my family uses Kagi.
I pay for NextDNS to block ads, all of my family's devices use NextDNS.
I pay for credits on OpenRouter and host an OpenWebUI instance, all of my family's AI is private.
I pay for the news - The Economist, the WSJ, FT, NewScientist, etc. Lies are free, the truth is behind a paywall.
The only thing money can't buy, yet, is a phone network free of robocalls.
If you search back HN history to the beginnings of AI coding in 2021 you will find people observing that AI is bad for juniors because they can't distinguish between good and bad completions. There is no surprise, it's always been this way.
Wow this is dangerous. I wonder how many people are going to turn this on without understanding the full scope of the risks it opens them up to.
It comes with plenty of warnings, but we all know how much attention people pay to those. I'm confident that the majority of people messing around with things like MCP still don't fully understand how prompt injection attacks work and why they are such a significant threat.
Hi, said person who clicked on the link here. Been wanting to post something akin to this and was going to save it for the post mortem but I wanted to address the increase in these sort of very shout-ey comments directed toward me.
> What does that even mean? That's not something that can be updated - that's kind of the point of 2FA.
I didn't sit and read and parse the whole thing. That was mistake one. I have stated elsewhere, I was stressed and in a rush, and was trying to knock things off my list.
Also, 2FA can of course be updated. npm has had some shifts in how it approaches security over the years, and having worked within that ecosystem for the better part of 10-15 years, this didn't strike me as particularly unheard of on their part. This, especially after the various acquisitions they've had.
It's no excuse, just a contributing factor.
> It would be very unusual to write like that in a formal security notification.
On the contrary, I'd say this is pretty par for the course in corpo-speak. When "kindly" is used incorrectly, that's when it's a red flag for me.
> What does "temporarily locked" mean? That's not a thing. Also creating a sense of urgency is a classic phishing technique and a red flag.
Yes, of course it is. I'm well aware of that. Again, this email reached me at the absolute worst time it could have and I made a very human error.
"Temporarily locked" surprises me that it surprises you. My account was, in fact, temporarily locked while I was trying to regain access to it. Even npm had to manually force a password reset from their end.
> Any nonstandard domain is a red flag.
When I contacted npm, support responded from githubsupport.com. When I pay my TV tax here in Germany (a governmental thing), it goes to a completely bizarre, random third party site that took me ages to vet.
There's no such thing as a "standard" domain anymore with gTLDs, and while I should have vetted this particular one, it didn't stand out as something impossible. In my head, it was their new help support site - just like github.community exists.
Again - and I guess I have to repeat this until I'm blue in the face - this is not an excuse. Just reasons that contributed to my mistake.
> NEVER EVER EVER click links in any kind of security alert email.
I'm aware. I've taught this as the typical security person at my respective companies. I've embodied it, followed it closely for years, etc. I slipped up, and I think I've been more than transparent about that fact.
I didn't ask for my packages to be downloaded 2.6 billion times per week when I wrote most of these 10 years ago or inherited them more than five ago. You can argue - rightfully - about my technical failure here of using an outdated form of 2FA. That's on me, and would have protected against this, but to say this doesn't happen to security-savvy individuals is the wrong message here (see: Troy Hunt getting phished).
Shit happens. It just happened to happen to me, and I happen to have undue control over some stuff that's found its way into most of the javascript world.
The security lessons and advice are all very sound - I'm glad people are talking about them - but the point I'm trying to make is, that I am a security aware/trained person, I am hyper-vigilant, and I am still a human that made a series of small or lazy mistakes that turned into one huge mistake.
Thank you for your input, however. I do appreciate that people continue to talk about the security of it all.
But every single use case I've read so far could be done with a pretty affordable SaaS product, Zapier, Automator (app on a mac that's existed for over a decade), or something simple you could make yourself.
It also feels like people are automating things that don't really need to be automated at all (do you really need to be reminded to make coffee?)
I fully realize this is probably me being a curmudgeon, however, I have yet to see someone make an actual, practical use case for it. (I would genuinely like to know one, I just haven't seen it)