We're going to get general AI the same way we got general intelligence: multi-sensory agents with agency, instincts, and guides, existing in real 3D space. I can't conceive of any other way to understand things deeply. Babies run experiments. How does an AI play with a cat? How does it ever understand the concept of a cat's mind without ever playing with one? If we want our AI to have conceptualization as we understand it, we need AI to have similar sensory inputs and a similar array of potential actions. And sure, we could copy the code from one AI to the next to get identical minds at t0, but I struggle with the ethics of that, and really I'd rather have diversity in AIs than a bunch of clones running around with the same types of thought patterns.
The problem, once I think it through, is that this line of reasoning leaves me much less sure of the nature of my own existence. Do we first let the mind of an AI develop an appreciation for humanity before letting it know that it is an AI? It seems like that would head off a lot of potential problems, since Gandhi wouldn't take the murder pill.
http://lesswrong.com/lw/2vj/gandhi_murder_pills_and_mental_i...