
This seems like the obvious outcome, considering all the hype. The more powerful the AI, the more power it has to break stuff. And there is literally ZERO way to remove that risk. So, who's going to tell your gung-ho CEO that the fancy features he wants are straight up impossible without a giant security risk?

