> If it can "understand" basic high-school math, what else can it "understand?" What exactly are the limits of what a transformer can "understand" without resorting to web search or tool use?
It is basically a grammar machine: it mostly understands things that can be encoded as a grammar. That is an extremely inefficient way to represent math, but it can do it, and it gives you a really simple way to figure out what it can and can't do.
Knowing this, LLMs never really surprised me. You can encode a ton of stuff as grammars, but that is still never going to be enough, given how inefficient grammars are at lots of things. On the other hand, when you have a grammar many billions of bytes in size, you can do quite a lot with it.
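To put a rough number on "inefficient", here is a toy back-of-the-envelope sketch (my own illustration, with made-up scope, not anything measured) of what the naive encoding costs if every addition fact has to be its own grammar rule:

    # Naive encoding: one memorized production rule per addition fact "a + b = ".
    for digits in range(1, 7):
        operands = 10 ** digits   # numbers with up to `digits` digits
        rules = operands ** 2     # one rule per (a, b) pair
        print(f"up to {digits}-digit addition: {rules:,} rules")

Six digits of addition already costs a trillion rules, while the carrying procedure itself fits in a few lines. That gap is the inefficiency.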
Let's stick with the Chinese Room specifically for a moment.
1) The operator doesn't know math, but the Chinese books in the room presumably include math lessons.
2) The operator's instruction manual does not include anything about math, only instructions for translation using English and Chinese vocabulary and grammar.
3) Someone walks up and hands the operator the word problem in question, written in Chinese.
Does the operator succeed in returning the Chinese characters corresponding to the equation's roots? Remember, he doesn't even know he's working on a math problem, much less how to solve it himself.
As humans, you and I were capable of reading high-school math textbooks by the time we reached the third or fourth grade. Just being able to read the books, though, would not have taught us how to attack math problems that were well beyond our skill level at the time.
So much for grammar. How can a math problem be solved by someone who understands neither math nor the language the question is written in? Searle's proposal only addresses the latter: language can indeed be translated symbolically. Wow, yeah, thanks for that insight. Meanwhile, to arrive at the right answers, an understanding of the math must exist somewhere... but where?
My position is that no, the operator of the Room could not have arrived at the answer to the question that the LLM (more or less) succeeded in solving.
> Meanwhile, to arrive at the right answers, an understanding of the math must exist somewhere... but where?
In the grammar. You can have grammar rules like '"1 + 1 =" must be followed by "2"', and so on. Then add a lot of dependency rules: in "He did X", the "He" depends on something in a previous sentence; in the same way, "1 plus 1" translates to "1 + 1", and "add 1 to 1" is also "1 + 1". Put enough of those together and you have a machine that can do very complex things.
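As a concrete miniature of that idea, here is a minimal sketch in Python. Everything in it is hypothetical and hand-written by me; a real model learns billions of soft rules rather than a dozen hard-coded ones:

    import re

    # "Dependency" rewrite rules: map surface phrasings onto one canonical
    # form, the way "1 plus 1" and "add 1 to 1" both become "1 + 1".
    REWRITES = [
        (re.compile(r"(\d+)\s+plus\s+(\d+)"), r"\1 + \2"),
        (re.compile(r"add\s+(\d+)\s+to\s+(\d+)"), r"\2 + \1"),
    ]

    # Production rules: a left-hand side that "must be followed by" its answer.
    PRODUCTIONS = {
        "1 + 1": "2",
        "2 + 3": "5",
    }

    def grammar_machine(question: str) -> str | None:
        # Normalize the surface form, then look it up; no rule, no answer.
        for pattern, replacement in REWRITES:
            question = pattern.sub(replacement, question)
        return PRODUCTIONS.get(question.strip())

    print(grammar_machine("1 plus 1"))    # 2
    print(grammar_machine("add 3 to 2"))  # 5 (rewritten to "2 + 3")

Note that nothing in it computes: the "solving" is entirely lookup over normalized forms.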
Then you take such a grammar machine and train it on all the text humans have ever written. It learns a lot of such grammar structures, and can thus parse and solve some basic math problems, since their solutions are part of the grammar it learned.
Such a machine is still unable to solve anything outside the grammar it has learned. But it is still very useful: pose a question in a way that is easy to parse, and that leans on grammar dependencies you know it can handle, and it will almost always output the right response.
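Sticking with the toy sketch above, that failure mode is visible directly: outside its rules the machine does not degrade gracefully, it simply has nothing to say.

    print(grammar_machine("4 plus 4"))  # None: "4 + 4" was never a learned rule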