Yeah, I am writing word by word, but I am not predicting the next word. I thought about what I wanted to say and am now generating the text to communicate that response; I didn't think by trying to predict what I myself would write to this question.
Your brain is undergoing some process and outputting the next word, which follows some reasonable statistical distribution. You're not consciously thinking "hmm, what word do I put so it's not just random gibberish," but as a whole you're doing the same thing.
From my point of view as someone reading the comment, I can't tell whether it was written by an LLM or not, so I can't use that to conclude whether you're intelligent.
"Your brain is undergoing some process and outputting the next word which has some reasonable statistical distribution. You're not consciously thinking about "hmm what word do I put so it's not just random gibberish" but as a whole you're doing the same thing.
From my point of view as someone reading the comment I can't tell if it's written by an LLM or not, so I can't use that to conclude if you're intelligent or not."
There is no scientific evidence that LLMs are a close approximation to the human brain in any literal sense. It is uncouth to critique people on the basis of what appears to be nothing more than an analogy.
A smart entity being able to emulate a dumber entity does not in any way show that the dumber entity is also smart.