Hacker News

Yes. Ten years ago I would say there was a consensus in the ML community that if we got really powerful AI, it should be kept isolated in controlled environments (no internet, no way to execute code) until it could be trusted and verified. Fast forward: openclaw. People don't seem to care, so why should the labs?