They didn't learn from nothing. They learned with a game of Go to play. If they'd never "seen" the game of Go, there's no way they could have learned to play it.
Data can be either static in the form of examples or dynamic in the form of an interactive game or world. Humans primarily learn through dynamic interaction with the world in our early years, then switch to learning more from static information as we enter school and the workplace.
One open question is how far you can go in terms of evolving intelligence with games and self-play or adversarial play. There's a whole subject area around this in evolutionary game theory.
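To make the evolutionary-game-theory angle concrete, here's a minimal sketch (all names and parameter values are illustrative): discrete replicator dynamics on rock-paper-scissors, a textbook example from that field. No strategy is labelled "best" up front; the population's own self-play payoffs drive the update.

```python
# Replicator dynamics on rock-paper-scissors: strategies "evolve"
# purely from playing against the current population mix.
PAYOFF = [[0, -1, 1],   # rock vs (rock, paper, scissors)
          [1, 0, -1],   # paper
          [-1, 1, 0]]   # scissors

def step(pop, shift=2.0):
    """One replicator update; `shift` keeps all fitnesses positive."""
    fitness = [sum(PAYOFF[i][j] * pop[j] for j in range(3)) + shift
               for i in range(3)]
    avg = sum(pop[i] * fitness[i] for i in range(3))
    # Each strategy's share grows in proportion to its relative fitness.
    return [pop[i] * fitness[i] / avg for i in range(3)]

pop = [0.6, 0.3, 0.1]  # initial mix of (rock, paper, scissors) players
for _ in range(50):
    pop = step(pop)
print([round(p, 3) for p in pop])  # the mix keeps cycling among the three
```

The point of the toy model: the "training signal" comes entirely from the game's interaction structure, with no external examples, which is the same property that makes self-play interesting for AI.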
That's what I mean by gathering information through dynamic interaction. It's not explicitly given the rules, but it can infer them. Interacting with an external system and sampling the result is still a way of gathering training data.
In fact this is ultimately how we've gathered almost all the information we have. If it's in our cultural knowledge store it means someone observed or experienced it. Humans are very good at learning by sampling reality and then later systematizing that knowledge and communicating it to other humans with language. It's basically what makes us "intelligent."
A brain in a vat can't learn anything beyond recombinations of what it already knows.
The fundamental limit on the growth of intelligence is the sum total of all information that can be statically input or dynamically sampled in its environment and what can be inferred from that information. Once you exhaust that you're a brain in a vat.
Humans get training data too. If a baby is left to itself during the formative years, it won't develop speech, social skills, reasoning skills, and so on, and it will be handicapped for the rest of its life, unable to recover from the neglect.
And the rest of our training data we make as we go, from interacting with the real world.
That's just recycling and reprocessing data that's already there. It's part of inference and learning but isn't new information.
At some point existing information has been fully digested. At that point you need new information. It isn't possible to extract infinite knowledge (or adaptation, a form of knowledge) from finite information.
Like I said: a brain in a vat can't learn. It can think about what it already knows, but it can't go further.
It is new information. The AI takes a road that it's only seen on a sunny day and simulates foggy conditions, snow, rain, road work, etc. The AI is creating situations that do not exist in the data. It knows snow and it knows roads, so it puts the two together, but it's still manufacturing a new scenario and learning how to respond.
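That "knows roads + knows fog, synthesizes foggy roads" move is classic data augmentation. A minimal sketch of the idea (function name, fog model, and values are all illustrative, not any particular system's API): fade a clear greyscale image toward a fog colour to manufacture a sample that was never observed.

```python
def add_fog(pixels, density=0.4, fog=255):
    """Blend each greyscale pixel (0-255) toward the fog colour.

    density=0 returns the image unchanged; density=1 is pure fog.
    """
    return [round(p * (1 - density) + fog * density) for p in pixels]

road = [10, 80, 200, 40]            # a clear-day "image" as grey levels
foggy_road = add_fog(road, 0.4)     # a synthetic sample never observed
print(foggy_road)
```

Real pipelines do the same thing with far richer transforms (rendered weather, simulated sensors), but the principle is identical: recombine known elements into training scenarios absent from the raw data.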
I agree it’s not new raw knowledge, but that’s really a philosophical point. Given the rules, an AI can in principle search every possible sequence of chess moves and identify the best counter. If a human makes the same move with less working memory, we call it intelligence. Put a brain in a vat, explain the rules of chess to it, and it can come out beating Garry Kasparov; that’s pretty unexpected. The brain in a vat built an extraordinary ability from a simple set of knowledge. Now take that simple set of knowledge and expand it to everything we know about the universe. The combinations of that knowledge are where we will see AI leaping past what we know.
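Chess is far too large to enumerate in practice, but the "perfect play from nothing but the rules" claim can be demonstrated exactly on a game small enough to search exhaustively. A sketch on tic-tac-toe (plain minimax, no chess engine implied):

```python
# Minimax on tic-tac-toe: given only the rules, exhaustive search
# recovers perfect play. Board is a list of 9 cells: 'X', 'O', or ' '.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def minimax(b, player):
    """Return (score, move) from X's view: +1 X wins, -1 O wins, 0 draw."""
    w = winner(b)
    if w:
        return (1 if w == 'X' else -1), None
    moves = [i for i, c in enumerate(b) if c == ' ']
    if not moves:
        return 0, None
    best = None
    for m in moves:
        b[m] = player                       # try the move...
        score, _ = minimax(b, 'O' if player == 'X' else 'X')
        b[m] = ' '                          # ...then undo it
        if best is None or (player == 'X') == (score > best[0]) and score != best[0]:
            best = (score, m)
    return best

score, move = minimax([' '] * 9, 'X')
print(score)  # prints 0: perfect play from both sides is a draw
```

The search never sees a single example game; every bit of its "skill" is inferred from the rules, which is exactly the brain-in-a-vat-beats-Kasparov scenario in miniature.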
An AI given mathematical axioms is already finding proofs that have long evaded mathematicians.