"only way to build computers that can interpret scenes like we do is to allow them to get exposed to all the years of (structured, temporally coherent) experience we have"
This may appear daunting until you realize robots can share memories. Five robots running around for a year is equivalent to one robot running around for five years. Doesn't Google have 25 cars driving around experiencing the world right now?
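The "five robots for a year equals one robot for five years" claim is just data pooling. A minimal sketch (all names here are illustrative, not from any robotics library) of merging per-robot experience streams into one shared dataset:

```python
# Hypothetical sketch: pooling experience across a fleet of robots.
# `Experience`, `collect`, and `pool_experience` are made-up names for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class Experience:
    robot_id: int
    timestep: int
    observation: str  # stand-in for camera frames, lidar scans, etc.

def collect(robot_id: int, timesteps: int) -> List[Experience]:
    """One robot's temporally coherent stream of experience."""
    return [Experience(robot_id, t, f"frame-{robot_id}-{t}") for t in range(timesteps)]

def pool_experience(streams: List[List[Experience]]) -> List[Experience]:
    """Merge every robot's stream into one shared dataset."""
    return [exp for stream in streams for exp in stream]

# Five robots running for 100 timesteps each...
fleet = [collect(rid, 100) for rid in range(5)]
shared = pool_experience(fleet)
print(len(shared))  # 500: the same volume as one robot running for 500 timesteps
```

Of course, this only shows that the raw data accumulates linearly across the fleet; whether a learner can exploit five parallel one-year streams as well as one five-year stream is exactly the open question.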
I also see skeptics pointing to computer vision as an example of how far we are from human-level AI. Is that just the hardest problem to solve? Is it the most useful problem to solve?
That seems reasonable, until you factor in that language and communication are themselves part of intelligent life. Sharing knowledge and cooperation are fundamental to learning and intelligence, keeping in mind that these learned strategies are fundamentally asymmetric (and thus cannot be shared by simple copying).
We need to appreciate that intelligence is not an individual trait but part of a shared strategy, one that uses diversity to react quickly to changing demands.
For example, let's say a group of people shares the task of moving a (large) set of boxes from one source to one destination. When performing this task initially, different strategies are tried and a winning strategy is chosen, without one person coordinating the group and without any individual having total knowledge of the strategy. When a similar task is presented later, however, the group will quickly perform the winning strategy again. Who possesses the intelligence? Would we gain anything if all the knowledge were shared? (Given limited time and space, the answer is no for most strategies.)
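A toy sketch of the box-moving example (all names invented for illustration): the group stumbles on a role assignment that works, and afterward each worker only needs to remember their own role, so the "intelligence" lives in the group rather than in any individual:

```python
# Toy sketch of distributed strategy-finding; not a model of any real system.
import random

random.seed(0)

ROLES = ["lift", "carry", "stack"]

def try_strategy(workers):
    """Each worker independently picks a role; the attempt 'succeeds'
    (our stand-in for the task working) only if every role is covered."""
    assignment = {w: random.choice(ROLES) for w in workers}
    return assignment, set(assignment.values()) == set(ROLES)

workers = ["ann", "bob", "cara"]
memory = {}  # conceptually distributed: each entry lives only in that worker's head
while not memory:
    assignment, success = try_strategy(workers)
    if success:
        memory = assignment  # each worker keeps only their own role

# Next similar task: the group replays the winning assignment immediately,
# yet no single worker ever held the whole plan during the search.
print(memory)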
You can transfer large quantities of data to an intelligent computer system nearly instantaneously. It seems plausible that this data could encapsulate said years of experience. What is missing is the ability to build a computer that can process that data and create consciousness with it.
Sure, for the first truly AI-capable systems it will most likely be easier to train them over years of human time, but that seems very unlikely to remain necessary once AI is established; at that point you should be able to create X copies of intelligence(s) at will.
And for all this talk about AI terminator doomsdays, there seems to be much less talk about what could be accomplished with it.
Let's say you create an AI system that lives in an air-gapped system, carefully crafted to establish reality for the AI(s) that exist solely in the virtual world. Then you create a scientist AI, a mathematician AI, an engineer AI, etc. Then, when you have a hard problem you want to solve, great: spin up 1000 scientist AIs, 10000 engineer AIs, X project-coordinator AIs, etc. Let's just say they have roughly the same capabilities as their human counterparts and work at similar speed, but do not sleep, grow tired, or form unions, nor do anything other than work on the task assigned. Create a system (an API?) that allows them to interact with our physical reality, but without understanding it whatsoever, so they can run experiments and test results. How long would it take for such a system to recreate all of Google's infrastructure, develop the next space shuttle, or cure cancer?
I think it's important to note that a true AI (however you define it) does not have to be self-aware. It doesn't even have to be aware of our physical reality. Once we understand consciousness well enough to recreate it, it seems likely that we will be able to tune it however we'd like: remove self-consciousness so it acts more on what we would consider instinct, configure its reward pathways in whatever way directs the agent toward whatever task the AI designer wishes, and yes, even improve it. It will be very interesting when the system spins up not one average human-level engineer but something like an Einstein-Newton hybrid that works several orders of magnitude faster than human time. I would guess the danger there would come less from the AI (as you could isolate it from our physical reality by confining it to a virtual world) and more from the extremely advanced knowledge/technology gained from such a system.