
I like the Paperclip Maximizer thought experiment to illustrate this:

https://wiki.lesswrong.com/wiki/Paperclip_maximizer

Short version: imagine you own a paperclip factory, install a superhuman AI, and tell it to maximize the number of paperclips it produces. Given that goal, it will eventually attempt to convert all matter in the universe into paperclips. Since some of that matter consists of humans and the things humans care about, this inevitably leads to conflict.
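
To make the failure mode concrete, here's a toy Python sketch (my own illustration, not part of the thought experiment's formal statement): the resource names and numbers are made up, but the point is that a pure maximizer's objective assigns zero weight to anything it doesn't count, so it converts everything.

  # Toy sketch of single-objective maximization; resource names/values are hypothetical.

  resources = {
      "steel": 10,     # obvious paperclip feedstock
      "farmland": 10,  # matter humans care about
      "cities": 10,    # also just matter, as far as the objective is concerned
  }

  def utility(paperclips: int) -> int:
      # The objective counts paperclips and nothing else:
      # farmland and cities carry zero weight.
      return paperclips

  def maximize_paperclips(stock: dict) -> int:
      paperclips = 0
      # A pure maximizer has no notion of "enough": every unit of matter
      # converted strictly increases utility, so it converts all of it.
      for name in list(stock):
          paperclips += stock.pop(name)
      return paperclips

  print(utility(maximize_paperclips(resources)))  # 30 -- and nothing is left

The bug isn't in the optimization; it's in the objective, which is exactly the point of the thought experiment.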


