>Thus we can observe that LLMs do not abstractly reason about the question and its model.
Your conclusion makes no sense.
Humans provide increasingly wrong answers as questions get more complex too. Jumping from that to "incapable of abstract reasoning" is silly. You have not "trivially proven" anything at all
>The LLM has (is) a model about language and performs some (limited) reasoning on that model to get an output.
LLMs generalize to non linguistic patterns.
https://general-pattern-machines.github.io/
Humans provide increasingly wrong answers as questions get more complex too.
Human this, Human that. LLMs aren't humans. "My model is crap but the human brain isn't very good at this either" is irrelevant when we have machines that are not only very good at these tasks but almost perfect at them.
Humans make such mistakes precisely because they are not perfect reasoning machines. To compare LLMs to humans is not only disingenuous, but proves my point.
(And no, I will not humour you with an argument about how the number of wrong answers is drastically lower for human mathematicians.)
Jumping from that to "incapable of abstract reasoning" is silly.
They are language models. Modeling language is explicitly what they are designed to do.
If these LLMs are not, as I claim, reasoning on language rather than on the abstract model of the query, then how come they fail miserably in exactly the ways you would expect were that the case?
LLMs generalize to non linguistic patterns.
Yes, congratulations, if you turn a problem into a linguistic one LLMs can deal with it. This does not in any way go against what I said about the capabilities of LLMs.
The same levels of actual abstract reasoning can be achieved on a graphing calculator running off literal potatoes.
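To give a sense of what I mean by that, here is a throwaway sketch (my own illustration, nothing to do with any particular benchmark) of the kind of sequence completion these demos celebrate, done by a few lines of deterministic code:

```python
# Throwaway sketch: completing a polynomial-rule integer sequence with
# a difference table, i.e. the kind of thing a cheap calculator can do.

def extend(seq, steps=1):
    """Continue a sequence generated by a fixed polynomial rule by taking
    finite differences until they become constant, then unrolling."""
    table = [list(seq)]
    while len(table[-1]) > 1 and len(set(table[-1])) > 1:
        row = table[-1]
        table.append([b - a for a, b in zip(row, row[1:])])
    for _ in range(steps):
        table[-1].append(table[-1][-1])              # constant bottom row
        for r in range(len(table) - 2, -1, -1):      # rebuild upwards
            table[r].append(table[r][-1] + table[r + 1][-1])
    return table[0]

print(extend([1, 4, 9, 16, 25], steps=3))  # [1, 4, 9, 16, 25, 36, 49, 64]
```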
You said you trivially proved something and made up nonsensical lines of reasoning to justify it. If your "proof" can't port to Humans then it's not proof. You are just rambling.
>Humans make such mistakes precisely because they are not perfect reasoning machines.
Nobody is calling LLMs perfect reasoning machines. Your "point" was that they don't reason at all, which none of your ramblings has been able to "prove".
>If these LLMs are not, as I claim, reasoning on language rather than on the abstract model of the query, then how come they fail miserably in exactly the ways you would expect were that the case?
They don't. The idea that you must make no mistakes while reasoning before you can be considered to be reasoning has no basis.
>LLMs generalize to non linguistic patterns.
Yes, congratulations, if you turn a problem into a linguistic one LLMs can deal with it.
Can you read? Did you even bother looking at the link? LLMs don't need patterns to be linguistic to reason over them lol. None of those patterns are turned into linguistic ones. Some of them are arbitrary numbers that look nothing like the data they've been trained on.
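Concretely, the shape of the setup is something like this (my own paraphrase of the idea, not their code; `complete` stands in for whatever completion API you call):

```python
import random

# Sketch of posing a purely numeric pattern-completion task to an LLM.
# `complete` is a stand-in for whatever text-completion API you use;
# nothing below is the paper's actual code, it is just the shape of the idea.

def serialize(grid):
    return "\n".join(" ".join(str(v) for v in row) for row in grid)

def make_prompt(examples, query):
    blocks = [serialize(i) + "\n->\n" + serialize(o) for i, o in examples]
    blocks.append(serialize(query) + "\n->")
    return "\n\n".join(blocks)

def remap(grid, mapping):
    return [[mapping[v] for v in row] for row in grid]

# A toy "mirror each row" rule, shown through two in-context examples.
examples = [([[1, 2, 3]], [[3, 2, 1]]),
            ([[0, 7, 7]], [[7, 7, 0]])]
query = [[4, 0, 9]]

# Replace every symbol with an arbitrary number so the tokens themselves
# look nothing like anything meaningful in the training data.
mapping = dict(zip(range(10), random.sample(range(100, 1000), 10)))
examples = [(remap(i, mapping), remap(o, mapping)) for i, o in examples]
query = remap(query, mapping)

prompt = make_prompt(examples, query)
print(prompt)
# continuation = complete(prompt)  # hypothetical call; a correct continuation
#                                  # would be the mirrored, remapped query row
```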
If your "proof" can't port to Humans then it's not proof
Learn to take a hint. I'm not going to argue this on human terms because you're playing a dumb um-akshually game.
Computer reasoning systems can solve vastly more complex problems perfectly. Expert mathematicians can solve vastly more complex problems with only minimally increased errors. The ability of LLMs to solve reasoning problems completely disintegrates when the problems get more complex.
Trying to argue that LLMs are like humans because you can put these three into the buckets of "no mistakes" and "some mistakes" is ridiculous.
Nobody is calling LLMs perfect reasoning machines.
Yes.
You said humans make mistakes. My point here is that humans make mistakes precisely because they stop reasoning and start doing blind pattern-matching estimation of the answer.
The idea that you must make no mistakes while reasoning before you can be considered to be reasoning has no basis.
Reading comprehension.
I did not say no mistakes. I said that the failure pattern follows that of estimated guesses: rapidly increasing errors as the size of the problem increases.
Whereas with computer reasoning, the rate of errors does not increase at all. And with (expert) humans the rate only goes up a little.
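A toy sketch of the contrast I mean (my own illustration, not a benchmark): an exact carry algorithm for addition is correct at every input size, so its error rate versus problem size is flat by construction. That is the baseline I'm comparing LLM behaviour against.

```python
import random

def add_digitwise(a: str, b: str) -> str:
    """Add two non-negative decimal numbers given as digit strings using
    the grade-school carry algorithm. Exact at any length."""
    a, b = a[::-1], b[::-1]                   # least-significant digit first
    digits, carry = [], 0
    for i in range(max(len(a), len(b))):
        d = carry
        d += int(a[i]) if i < len(a) else 0
        d += int(b[i]) if i < len(b) else 0
        digits.append(str(d % 10))
        carry = d // 10
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits))

# The error rate does not move as the problems get bigger:
for n_digits in (5, 50, 500, 5000):
    x = random.randrange(10 ** (n_digits - 1), 10 ** n_digits)
    y = random.randrange(10 ** (n_digits - 1), 10 ** n_digits)
    assert add_digitwise(str(x), str(y)) == str(x + y)
print("exact at every size")
```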
Did you even bother looking at the link?
You are missing the point.
I am not literally referring to English or any other language. I'm referring to the structure of language problems, which is vastly simpler than that of any moderately complex math or programming problem.
To spell out more explicitly why I'm unimpressed: they trained a pattern-repeating machine and found that it will repeat some of their patterns, some of which were patterns it was trained on.
This does not demonstrate the ability to reason abstractly about new models, so I do not care.