It's an illusion. For anything it "knows," you can persuade it to claim exactly the opposite. It just happened to land on the correct thing first because it saw it more often in its input data. Despite appearing 100% confident about everything, it actually has 0% confidence in anything it says, although it insists on some things a bit longer than on others.
> For anything it "knows," you can persuade it to claim exactly the opposite.
Which is actually a novel capability and arises because the network does reinforcement learning over its own context window. It's a strength, not a weakness. Humans can do the same thing. ("Assume that X...")
> It just happened to land on the correct thing first because it saw it more often in its input data.
Isn't that just a description of learning?
It's true that the network has no idea what is "true." But it's not like we do either; all we do is learn from correlations. We're just better at it.
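Both claims in the thread — that the model echoes whatever continuation it saw most often, and that a prefix in the context ("Assume that X...") can flip its answer — can be sketched with a toy count-based model. This is a made-up corpus and a crude n-gram predictor, nothing resembling a real LLM, just the correlation-learning idea in miniature:

```python
from collections import Counter, defaultdict

# Hypothetical corpus for illustration: "blue" dominates, but one
# sentence carries an "assume" prefix with the opposite claim.
corpus = ("the sky is blue . the sky is blue . the sky is blue . "
          "assume the sky is green .").split()

MAX_N = 4
ngram_counts = defaultdict(Counter)
for i in range(len(corpus) - 1):
    # Record what word follows every context of length 1..MAX_N.
    for n in range(1, MAX_N + 1):
        if i + 1 - n >= 0:
            ctx = tuple(corpus[i + 1 - n : i + 1])
            ngram_counts[ctx][corpus[i + 1]] += 1

def continue_from(context):
    """Greedy continuation: the longest matching context suffix wins,
    and within it, the most frequently seen next word wins."""
    words = tuple(context.split())
    for n in range(min(len(words), MAX_N), 0, -1):
        suffix = words[-n:]
        if suffix in ngram_counts:
            return ngram_counts[suffix].most_common(1)[0][0]
    return None

print(continue_from("the sky is"))         # "blue": seen 3x vs 1x
print(continue_from("assume the sky is"))  # "green": the prefix flips it
```

With no prefix, frequency in the corpus decides the answer; with the "assume" prefix, the longer matching context overrides it — no notion of truth involved either way.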