Sounds a lot like my toddler, so "maybe" would be my answer.
I personally don't think human consciousness or thought is very "special" or "unique". Probably a lot of animals and complex systems have the same properties that lead to whatever consciousness is. But we humans think we're "super special", so most of us won't accept that we're really close to creating consciousness, if we haven't already.
Humans are pattern-matchers, indeed, pattern-overmatchers, so it's not unreasonable that you felt that way, but this does not reflect anything like a "thought" from the AI model.
Yeah, it is hard to overcome our anthropomorphic bias. We all grew up with Disney animal toons that behave like humans, and we have no problem identifying with the animal hero. So we're biased toward believing something has "thoughts".
However, that bias alone is not enough to simply dismiss the possibility that ChatGPT has "thoughts".
We need to understand first what it means to have "thoughts".
It is my opinion that ChatGPT does not have consciousness. It seems to lack self-understanding, which does not come from simply parroting utterances found in the data used for the machine learning.
But could it have some inkling of a "thought" anyway? I am not sure, but I don't think so. Yet.
I don't think this is how it works, or how any of this works. It just "inferred" the changes from the previous input. It does the same for coding: if it gets things wrong and you feed it data from the docs, or tell it what it did wrong, it gets better at answering related questions.
My gut says there's something interesting in this interaction that's worth trying to understand.
The key here is that he got ChatGPT to ask him questions. I was like, wait, how does that happen? And his solution was ingenious. He asked it to play a game and taught it the rules. I think it's more than a passing coincidence here that human children also rely on play for learning (they're hardwired for it!).
Is it thinking? Probably not. But it sure does look like self-directed reasoning, which is a huge step in the right direction.
It's not AGI, but maybe it points us toward a cognitive subsystem that we need to investigate/map/theorize/design/implement/iterate: one that can automate the decision process the author was prompting the AI to follow.
Essentially, make it so the AI doesn't need a human to follow that line of reasoning, in a general sense. Find a way to let it ask its own questions, and seek answers. Probably a later phase involves the AI asking humans for help along the way when its reasoning gets stumped.
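As a toy illustration of that loop (entirely my own sketch; the prompts, the model choice, and the "DONE" stopping convention are invented, and it assumes the OpenAI Python SDK):

```python
# Toy self-questioning loop: the model poses its own question, answers
# it, and decides when to stop. The prompts and the "DONE" convention
# are invented for this sketch; it assumes the OpenAI Python SDK and
# an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def chat(messages):
    # One round-trip to the model; returns the reply text.
    resp = client.chat.completions.create(model="gpt-3.5-turbo",
                                          messages=messages)
    return resp.choices[0].message.content

messages = [
    {"role": "system", "content":
        "Investigate the topic by posing yourself one question at a "
        "time, answering it, then posing the next. Say DONE when you "
        "have nothing left to ask."},
    {"role": "user", "content":
        "Topic: which yes/no questions narrow down an animal fastest."},
]

for _ in range(5):  # hard cap so the loop always terminates
    reply = chat(messages)
    print(reply, "\n---")
    if "DONE" in reply:  # the model decided it is out of questions
        break
    messages.append({"role": "assistant", "content": reply})
    messages.append({"role": "user", "content": "Continue."})
```

A human (or a search tool) could be swapped in at the "Continue." step for the phase where the AI asks for help when it's stuck.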
Indeed, ChatGPT did not form a thought, as it can't think or understand what a single word means. This technology will never produce intelligence, whose base requirements include conceptual understanding and an awareness built by modeling space-time relationships.
>I want you to play a game of 20 questions with me. You will ask me questions, one at a time, to try and determine which animal I am thinking of. You will use my answers to inform your next question. You can guess the animal I am thinking of at any time, but you must make a guess after your twentieth question.
It appeared to break at question 7, but it might be experiencing high usage at the moment.
That is common at the moment. Their servers are overloaded due to the wild popularity of ChatGPT, and often the session will just "break", erasing any context or learning it had acquired in your session. You must then start over.
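If the web UI keeps dropping your session, one workaround is to run the same prompt through the API and keep the transcript yourself, so a broken call doesn't erase the game. A rough sketch (the model choice and loop structure are my own assumptions; it uses the OpenAI Python SDK):

```python
# Minimal 20-questions driver over the API: the full transcript lives
# on our side, so a failed request can be retried without losing the
# game state. Assumes the OpenAI Python SDK and an OPENAI_API_KEY
# environment variable; the model choice is arbitrary.
from openai import OpenAI

client = OpenAI()

PROMPT = ("I want you to play a game of 20 questions with me. You will "
          "ask me questions, one at a time, to try and determine which "
          "animal I am thinking of. You will use my answers to inform "
          "your next question. You can guess the animal I am thinking "
          "of at any time, but you must make a guess after your "
          "twentieth question.")

messages = [{"role": "user", "content": PROMPT}]

for turn in range(1, 21):
    resp = client.chat.completions.create(model="gpt-3.5-turbo",
                                          messages=messages)
    question = resp.choices[0].message.content
    print(f"Q{turn}: {question}")
    messages.append({"role": "assistant", "content": question})
    # Type "yes", "no", etc.; the answer goes back into the transcript.
    messages.append({"role": "user", "content": input("Your answer: ")})
```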