That's the thing - I don't agree with your premise that humans and robots/AI would be as separate as you frame it. "Vanilla" humans may not understand the AI, but the people working with the stuff would surely be vastly enhanced humans, cyborgs.
I agree that we have to create AIs that share our values. However, I don't understand how or why we would fail to. We obviously create AIs to serve us, and in order to serve us independently, without needing manual input of tasks (which would just make it an advanced computer), it needs to understand us.
I simply don't understand how the default AI would be detrimental to humans. What purpose would such an AI serve, and why would we create it?
The "default AI" is a program that we build. That's all we know. Most programs that we build do not properly represent human values. If they don't properly take those into account, then we lose things we care about. The AI that "optimizes our supply chain for paper clips" will, in the limiting case, consume humans and the environment and the earth and the sun in order to produce as many paperclips as possible and distribute them as widely as possible. A "default AI" will not care about its survival or the survival of its creators.