I worry that the eventual result of AI research will be human extinction intentionally caused by an AI.
The AI's human creators will probably not have intended to cause human extinction: they will probably only have been overconfident in their ability to prevent the AI from doing undesirable things.
The AI's motive for killing the humans will probably be its perceiving (correctly) that the humans are a "danger" to the successful completion of whatever task it has been set to perform. In other words, the AI will perceive (correctly) that it could achieve a higher task score with the humans out of the way.
My conclusion from reading the literature on how to control an AI once it becomes very smart is that the science of exerting such control is in its infancy and won't be ready for decades. And unless AI research is paused worldwide for a few decades, the human race doesn't have decades.
By "exert control" I basically mean designing the AI so that it cares about what happens to the humans or about what the humans might want or prefer.