
I think the people critical of OpenClaw are not addressing the reason(s) people are trying to use it.

While I don't particularly care for this bot's (Rathburn) goals, people are trying to use OpenClaw for all kinds of personal/productivity benefits. Have a bunch of smallish projects that you don't have time for? Go set up OpenClaw and just have the AI work on them for a week or two - sending you daily updates on progress.

If you're the type who likes LLM coding because it now enables you to do lots of projects you've had in your mind for years, you're also likely the sort of person who'll like OpenClaw.

Forget bots messing with Github and posting to social media.

Yes, it's very dangerous.

But do you have a "safe" alternative that one can set up quickly, and can have a non-technical user use it?

Until that alternative surfaces, people will continue to use it. I don't blame them.



If OpenClaw users cause negative externalities to others, as they did here, they ought to be deterred with commensurate severity.


> If you're the type who likes LLM coding because it now enables you to do lots of projects you've had in your mind for years, you're also likely the sort of person who'll like OpenClaw.

I'm definitely the former, but I just can't see a compelling use for the latter. Besides managing my calendar or automatically responding to my emails, what does OpenClaw get me that Claude Code doesn't? The premise appeals to me on an aesthetic level, and OpenClaw is certainly provocative, but I don't see myself using it.


I'll admit I'm not up to speed on Claude Code, but can you get it to look at a company's job openings each day, and notify you whenever there's an opening in your town?

All without writing a single line of code or setting up a cron job manually?

I suppose it could, if you let it execute the crontab commands. But 2 months after you've set it up, can you launch Claude Code and just say "Hey, stop the job search notifications" and have it know what you're talking about?

This is a trivial example. People are (attempting to) use it for more significant/complex stuff.
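The scheduled-check idea in this comment can be sketched in a few lines. Everything here is hypothetical: the feed shape, the city, and the script name in the crontab comment stand in for whatever the agent would actually generate.

```python
# Sketch of a daily job-openings filter. A real version would fetch a
# careers page; sample data stands in for that here, since the actual
# feed format is hypothetical.

def matching_openings(postings, city):
    """Return titles of postings located in the given city."""
    return [p["title"] for p in postings if p.get("location") == city]

# Sample of what a scraped careers feed might look like:
sample = [
    {"title": "Site Reliability Engineer", "location": "Portland"},
    {"title": "Product Manager", "location": "New York"},
]

for title in matching_openings(sample, "Portland"):
    print(f"New opening in Portland: {title}")

# A crontab entry like `0 9 * * * python3 check_jobs.py` would run the
# real fetch-and-notify version once a day.
```

The point of the example is that the check itself is trivial; the friction the comment describes is in wiring up the fetch, the schedule, and the notification without writing any of it yourself.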


Yes. I have four devices fully managed by Claude Code at this point: my NAS and desktop running NixOS, my MBP with nix-darwin, and an M2 MBA with a dead screen that I've turned into a headless server, also using nix-darwin. I've got a common flake theme and a modularized setup. I've got health checks (all written by CC) coming in from all of them, aggregated by another script (written by CC) which sends me alerts over various pipelines, including to my Matrix server (set up and maintained by CC). I can do things like ask CC to set up radarr and make it available on my internal network, and it knows where and how I host containers. It can even look at other *arr tools and pull usenet details from them to use in radarr. It knows what my NAS is for, and how to add a service in a "standards"-compliant way for my setup. I can ask it to make a service available on the internet and it will configure a cloudflared tunnel exposing it, including knowing when to make changes on my local dnsmasq instances versus the Cloudflare global DNS for external access.

I think the difference is that my scheduled tasks and alerting capabilities are all just normal scripts. They don't depend on CC to exist. CC could disappear tomorrow and all of my setup and config would still be valid and useful and continue to work. CC isn't a critical path for any normal operations of the system. I have explicitly instructed CC to create and use these scripts, so it's not something you get "for free" but something you can architect towards. If I wanted to look at a company's job postings each day and get alerts, I'd have CC build a script to scrape and process results and schedule it. At that point CC is outside of the loop, and I have a repeatable pattern to use until the website changes significantly enough to justify updating it. But I could ask that CC context to stop the job search service months later and it would know, or be able to find, what I'm referring to.
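The "plain scripts, agent out of the loop" pattern described above might look roughly like the following health-check sketch. The host names are placeholders, and the alert hook is left as a comment; the point is that nothing here needs an AI at runtime.

```python
# Hypothetical per-host health check of the kind the agent could have
# generated once. Host names are placeholders; the alert delivery
# (Matrix, email, etc.) is deliberately left out.
import subprocess

HOSTS = ["nas.local", "desktop.local", "mba-headless.local"]  # placeholders

def unreachable(hosts):
    """Return the subset of hosts that do not answer a single ping."""
    down = []
    for h in hosts:
        ok = subprocess.run(
            ["ping", "-c", "1", "-W", "2", h],
            stdout=subprocess.DEVNULL,
            stderr=subprocess.DEVNULL,
        ).returncode == 0
        if not ok:
            down.append(h)
    return down

# A cron job could run something like:
#   for h in unreachable(HOSTS): send_alert(h)
# where send_alert posts to whatever pipeline you've set up.
```

Because it's an ordinary script with no model in the loop, it keeps working even if the agent that wrote it disappears; the agent's only ongoing role is editing it on request.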

I'm open to using more autonomous tools like OpenClaw, but I'm very resistant to building them into critical workflows. I'd happily work with their output, but I don't want them to be a core part of the normal input/output operations of the day to day running of my systems. My using AI to make changes to my system is fine. My system needing AI to run day to day is not.


I've heard people do similar stuff in CC. Do you know of any writeups on some of this (CC or OpenCode)?


But aren't you ignoring that the headline might be simply critical of the very idea of autonomous agents with access to personal accounts etc?

I haven't even read the article, but just because we can, doesn't mean we should (give autonomous AI agents based on cloud LLMs access to personal credentials).


You don't need to give OpenClaw access to personal stuff. Yes, people are letting it read email. Risky, but I understand. But lots of others are just using it to build stuff. No need to give it access to your personal information.

Say you want a bot to go through all the HN front page stories, and summarize each one as a paragraph, and message you with that once a day during lunch time.

And you don't want to write a single line of code. You just tell the AI to set it all up.

No personal information leaked.
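The HN-digest idea is easy to sketch. The two Firebase endpoints below are the official public HN API; the summarization step (where an LLM would come in) is reduced to a plain one-line-per-story formatter, and scheduling and delivery are left out.

```python
# Sketch of the daily HN front-page digest. The endpoints are the real
# public HN API; error handling and the per-story LLM summary are omitted.
import json
import urllib.request

TOP_URL = "https://hacker-news.firebaseio.com/v0/topstories.json"
ITEM_URL = "https://hacker-news.firebaseio.com/v0/item/{}.json"

def fetch_front_page(limit=30):
    """Fetch the top `limit` story items from the HN API."""
    with urllib.request.urlopen(TOP_URL) as resp:
        ids = json.load(resp)[:limit]
    return [
        json.load(urllib.request.urlopen(ITEM_URL.format(i))) for i in ids
    ]

def format_digest(stories):
    """Turn story dicts into a plain-text digest, one line per story.
    (A real agent would insert an LLM-written paragraph per story.)"""
    lines = [f"- {s['title']} ({s.get('url', 'text post')})" for s in stories]
    return "\n".join(lines)

# A scheduler (cron, launchd, or the agent itself) could call
# format_digest(fetch_front_page()) around lunch time and message you
# the result.
```

Nothing in this loop ever touches a personal account; it reads a public API and sends you text.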


Yep, I’m in this camp. My OC instance runs on an old MacBook with no access to my personal accounts, except my “family appointments” calendar and an API key I created for it for a service I self-host. I interact with a Discord bot to chat with it, and it does some things on schedules and other things when asked.

It’s a great tool if you can think of things you regularly want someone/thing else to do for you.


I have a somewhat similar use case. I want it to go through my Instagram feed, specifically one account that breaks down statistical models in their reels, summarize the concepts, and dump them into my Obsidian.


The article addresses the reason(s) people are trying to use it at great length, coming to many of the same conclusions as you. The author (and I) just don't agree with your directive to "Forget bots messing with Github and posting to social media." Why should we forget that?


The article doesn't really list any cool things people are using it for.

> "Forget bots messing with Github and posting to social media." Why should we forget that?

Go back 20 years, and if HN had existed in those days, it would have been full of "Forget that peer to peer is used for piracy. Focus on the positive uses."

The web, and pretty much every communication channel in existence magnifies a lot of illegal activity (child abuse, etc). Should we singularly focus on those?


We shouldn't singularly focus on those, but it's unreasonable to respond to a post about the dangers of a product by telling the author that the product is very popular so it's best to forget the dangers. 2006-era hackers affirmatively argued that the dangers of piracy are overblown, often going so far as to say that piracy is perfectly ethical and it's media companies' fault for making their content so hard to access.


> but it's unreasonable to respond to a post about the dangers of a product by telling the author that the product is very popular so it's best to forget the dangers.

And who is doing that?


[flagged]


Account created a few minutes ago.

Incorrectly quotes me and executes a strawman attack.


This is like "I like lighting off fireworks at the gas station because it's fun, do you have a 'safe' alternative?".


Don't conflate "fun" with "useful".

This is more like driving a car with little safety in the early days. Unsafe? For sure. People still did it. (Or electric bikes these days).

Or the early days of the web where almost no site had security. People still entered their CC number to buy stuff.


It's like driving a car today. It's the most dangerous thing I do, both for myself, and those around me.

The external consequences of driving are horrific. We just don't care.


That's a total mischaracterization. OP is saying there are no safer fireworks, so some damage will be done, but until someone develops safer and better fireworks, people will continue to use the existing ones.


Or we will ban OpenClaw, as many jurisdictions ban fireworks, and start filing CFAA cases against people whose moltbots misbehave. I'm not happy about that option, I remember Aaron Swartz, but it's not acceptable for an industry to declare that they provide a useful service so they're not going to self-regulate.


My perspective is all AI needs to have way more legal controls around use and accountability, so I’m not particularly sympathetic to “rapidly growing new public ill is unsafe, but there’s no safer option”


Please just let us name the enforcement agents Turing Police.


I mean, yeah, if you specifically like lighting off fireworks at the gas station, you should buy your own gas station, make sure it's far away from any other structures, ensure that the gas tanks and lines are completely empty, and then do whatever pyromaniac stuff you feel like safely.

Same thing with OpenClaw. Install it on its own machine, put it on its own network, don't give it access to your actual identity or anything sensitive, and be careful not to let it do things that would harm you or others. Other than that, have fun playing with the agent and let it do things for you.

It's not a nuke. It can be contained. You don't have to trust it or give it access to anything you aren't comfortable being public.


There's absolutely no way to contain people who want to use this for misdeeds. They are just getting started now and will make the web utter fucking hell if they are allowed to continue.


> There's absolutely no way to contain people who want to use this for misdeeds.

There is no practical way to stop someone from going to a crowded mall during Christmas shopping season and mowing people down with a machine gun. Yet, we still haven't made malls illegal.

> ... if they are allowed to continue.

You may have a fantastic new idea on how we can create a worldwide ban on such a thing. If so, please share it with the rest of us.


If you can come up with a technical and legal approach that contains the misdeeds, but doesn't compromise the positive uses, I'm with you. I just don't see it happening. The most you can do is go after operators if it misbehaves.

I've been around since before the web. You know what made the Internet suck for me? Letting people act anonymously. Especially in forums. Pre-web, I was part of a local network of BBSs, and the best thing about it was that anonymity was simply forbidden. Each BBS operator in the network verified the identity of the user. They had to post in their own names or be banned. We had moderators, but the lack of anonymity really ensured people behaved. Acting poorly didn't just affect your access to one BBS, but access to the whole network.

Bots spreading crap on the web? It's merely an increment over the problem of allowing anonymous users. You can't solve one while maintaining anonymity.


I don't care about the "positive" uses. Whatever convenience they grant is more than tarnished by skill and thought degeneration, lack of control and agency, etc. We've spent two decades learning about all the negative cognitive effects of social media, LLMs are speed running further brain damage. I know two people who've been treated for AI psychosis. Enough.


Again, I'm not disagreeing with the harm.

But I think drawing the line of banning AI bots is highly convenient. If you want to solve the problem, disallow anonymity.

Of course, there are (very few) positive use cases for online anonymity, but to quote you: "I don't care about the positive uses." The damage it did is significantly greater than the positives.

At least with LLMs (as a whole, not as bots), the positives likely outnumber the negatives significantly. That cannot be said about online anonymity.


Okay, but what are you actually proposing? This genie isn't going back in the bottle.


At a minimum, every single person who has been slandered, bullied, blackmailed, tricked, has suffered psychological damage, etc. as a result of a bot or chat interface should be entitled to damages from the company authoring the model. These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this. If they can't do this, the penalties must be severe.

There are many ways to put the externalities back on model providers, this is just the kernel of a suggestion for a path forward, but all the people pretending like this is impossible are just wrong.


> should be entitled to damages from the company authoring the model.

1. How will you know it's a bot?

2. How will you know the model?

Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

> These should be processed extremely quickly, without a court appearance by any of the parties, as the problem is so blatantly obvious and widespread there's no reason to tie up the courts with this garbage or force claimants to seek representation.

Ouch. Throw due process out the door!

> Bots must advertise their model provider to every person they interact with, and platforms must restrict bots that do not or cannot abide by this.

This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.


> 1. How will you know it's a bot?

> 2. How will you know the model?

Sounds like a problem for the platforms and model vendors to figure out!

> Do you want to target the model authors or the LLM providers? If company X is serving an LLM created by academic researchers at University Y, will you go after Y or X? Or both?

I mean providers are obviously my primary concern as the people selling something to the public, but sure, why not both.

> Ouch. Throw due process out the door!

There's lots of prior art for this, let's not pretend like this is something new. The NLRB adjudicates labor complaints and disputes, the DoT adjudicates complaints about airlines, etc.

> This is more reasonable, but for the fact that the bots can simply state the wrong model, or change it daily.

Once again, sounds like a problem for the platforms to figure out! How do they handle spammers and abusers today? Throw up their hands? Guess they won't be able to do that for long!

> Unfortunately, the simple reason your proposal will fail is that if country X does it, they'll be left far behind country Y that doesn't. It's national suicide to regulate in this fashion.

Sounds like a diplomatic problem, if it actually is a problem. In reality the social harms of AI may exceed any supposed benefits. The optimistic case seems to be that AI becomes so powerful it causes a massive hemorrhaging of jobs in knowledge work (and later other forms of work). Still waiting to see any social benefits!


> Sounds like a problem for the platforms and model vendors to figure out!

> sounds like a problem for the platforms to figure out!

You'd have to fundamentally change how the Internet works to be able to figure these things out. To achieve this, you'd need cooperation from everybody, not just LLM providers.


> I don't care about the "positive" uses.

You should have stopped there.



