Thanks for the comment and links. I had only ever read 'science fiction' like Iain Banks and Charles Stross's Accelerando, but never any of the 'non-fiction'. The linked articles were interesting.
I understand the risk of an AI that wants the mass of the entire solar system for itself and quickly becomes a matrioshka brain. However, I question the idea that we aren't already there. I would argue that an augmented billionaire, or better yet a market economy, compares to someone in the poorest 5% of the world the way a modern human compares to a wolf pack. A billionaire can decide not just to go to space, but to build an industry out of it; the poor can't find enough food. That is a huge difference. Is there any research into quantifying these kinds of differences?
Sure, a human-level computer AI gets a 'free' speed doubling every 18 months, but so does the intelligence that surrounds an augmented human. Just look at how much more intelligence we have available to us now compared to 20 years ago.
I agree that there will be (or already is) an intelligence explosion. My point is simply that we are already in the midst of it, or that there is no single instant to point at and say 'that is when the singularity started.'
I think this is an important point to make because it changes the framing of the question from "How can we survive an abstract superintelligence explosion in the unknown future?" to "How can we survive the existing intelligence explosion?" From "How can we teach the superintelligence we are bound to create to be nice?" to "How can we convince the existing superintelligence to be nice?" and/or "What changes should we make to our social/governmental/memetic structures to survive the explosion we are experiencing?"
Also, if we view ourselves as already being in the intelligence explosion, we can look at how existing superintelligences treat less intelligent beings to see where our culture is likely to head as the explosion continues. If we don't like how superintelligences treat lesser intelligences now, then maybe we should figure out why, and how to change it.
The framing the article provides sounds about as silly as a pack of wolves discussing the tactics they will use to make sure their new human creations focus all of their energy on catching rabbits, so I tried to come at it from an angle with a hint of pragmatism and practicality.
"Sure a human level computer AI gets 'free' speed doubling every 18 months, but so does the intelligence that surrounds an augmented human. Just look at how much more intelligence we have available to us now as compared to 20 years ago."
A motivated human-level AI could get a free speed doubling a lot more quickly than every 18 months: it could acquire more resources that already exist, it could accelerate progress in hardware, and, perhaps most importantly, it might be able to improve itself into a qualitatively superhuman intelligence rather than a merely quantitative one. Sections 3 ("Underestimating the power of intelligence") and 7 ("Rates of intelligence increase") of the Yudkowsky paper I linked before, "Artificial Intelligence as a Positive and Negative Factor in Global Risk", address this well. http://singinst.org/upload/artificial-intelligence-risk.pdf
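To make the compounding point concrete, here is a toy model of my own, not something from the linked paper: it compares a fixed, exogenous 18-month doubling against a self-improver that reinvests each doubling to shorten its next one. The 10% shrink per generation is an arbitrary assumption, chosen only to show the shape of the divergence.

    # Toy model (illustrative assumption, not a prediction): fixed
    # Moore's-law-style doubling vs. recursive self-improvement that
    # shortens its own doubling period each generation.

    def exogenous_speed(months, doubling_period=18.0):
        """Speed multiplier with a fixed doubling period."""
        return 2 ** (months / doubling_period)

    def recursive_speed(months, initial_period=18.0, shrink=0.9):
        """Speed multiplier when each doubling cuts the next period by 10%."""
        speed, period, t = 1.0, initial_period, 0.0
        while t + period <= months:
            t += period
            speed *= 2           # one doubling completes
            period *= shrink     # the improved system improves itself faster
        return speed

    for m in (18, 36, 72, 120):
        print(f"{m:4d} months: fixed {exogenous_speed(m):8.1f}x, "
              f"recursive {recursive_speed(m):8.1f}x")

After ten years the recursive version is already an order of magnitude ahead (about 1024x vs. about 102x), and within the model the gap keeps widening, which is roughly the shape of the argument in the sections cited above.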
I think it's a mistake to compare an AI superintelligence to anything currently on Earth, such as market forces. An AI superintelligence could probably improve itself far faster than any market could.