Part 2 is when the AI reaches the point where it's smarter than its creators, then starts improving its own code and bootstraps its way to superintelligence. This idea is referred to as "the intelligence explosion" https://wiki.lesswrong.com/wiki/Intelligence_explosion

>If a paperclip AI is so dedicated to the order to produce paperclips, why wouldn't it be just as dedicated to any other order? Like "don't throw me in that incinerator!"

The paperclipper scenario is meant to indicate that even a goal which seems benign could have extremely bad implications if pursued by a superintelligence.

People concerned with AI risk typically argue that of the universe of possible goals that could be given to an AI, the vast majority are functionally equivalent to paperclipping. For example, an AI could be programmed to maximize the number of happy people, but without a sufficiently precise specification of what "happy people" means, this could result in something like manufacturing lots of tiny smiley faces. An AI given that order could avoid throwing you into an incinerator and instead throw you into the thing that's closest to being an incinerator without technically qualifying as an incinerator. Etc.



I think you're just asserting that part 2 exists. What matters is how an optimizing machine bootstraps super-intelligence, because the machine you fear in part 3 has a very specific peculiarity: it's smart enough to be dangerous to humans, but so dumb that it will follow a simple instruction like "make paperclips" without any independent judgment as to whether it should, or the implications of how it does so.

Udik highlighted this contradiction more succinctly than I have been able to:

https://news.ycombinator.com/item?id=11290740

If we stipulate the existence of such a machine, we can then discuss how it might be scary. But we can stipulate the existence of many things that are scary--doesn't mean they will ever actually exist.

Strilanc above made the analogy between a scary AI and the Monkey's Paw. This is instructive: the Monkey's Paw does not actually exist, and by the physical laws of the universe as we know them, cannot exist.

I think the analogy actually goes the other way. The paperclip AI is itself just an allegory, a modern fairytale analogous to the Monkey's Paw.




