Searle is one of the most die-hard materialist philosophers of mind around; it's his materialism that leads him to make his argument. Computers and human brains are both made out of atoms, but that doesn't mean they're not qualitatively different; by that logic I'd be no different from a tree. There are qualitative differences between computers and human brains. Our cognition is biochemical and horribly slow, so just by virtue of speed we are not working like LLMs. We're not doing tensor math in our heads, and we don't have access to terabytes of unaltered, digital data.
It's because our bandwidth and monkey brains are so slow that we're forced to operate at the level of semantics. We can't make inferences from near-infinite amounts of data, the same way we can't play chess like Stockfish or do math like a calculator. The dualism is precisely in the opposite view, that computation is somehow "substrate independent". Searle argues we can have AI that has understanding the way we do, just that it's going to look more like an organic brain as a result.
The important insight from LLMs is that they're not like us at all, but that doesn't make them less effective or intelligent. We do have plenty of understanding; we need it because we rely on a particular kind of reasoning, but artificial systems don't need to converge on that.
Computation is a purely physical phenomenon, so no, saying that sentience is computation that is substrate-independent is not dualism - it's hardline materialism. Dualism is saying that sentience cannot be entirely reduced down to physical phenomena.
The Chinese Room gag asserts that, although the room behaves in every way like it is an intelligent Chinese speaker, we can see inside the room and determine that there is nothing there that intelligently understands Chinese.
Searle seems to see a distinction between the "substrates" (which would mean an LLM cannot be intelligent: it's running on regular computer hardware, and there's nothing to be found there which "understands"). But unless someone can point out exactly what part of the substrate is intelligent, I am going to continue to suggest that his substrate distinction is exactly identical to having or not having a soul.
I, as a materialist, assert that if you dug through my noggin down to the level of subatomic particles, you would never find anything that is intelligent or that understands. (Quantum mechanics does not help here, by the way; it just replaces a rather naive determinism with randomness. See Bell's inequality.) There is no magic going on there. You and your tree are both doing nothing but chemistry and physics (you share something like 50% of your genes, by the way). And that means that a computer could, in principle, behave as intelligently as I do. That's "substrate independence".
Now, whether a given system does do so or not is another question entirely. :-)
LLMs nevertheless contain rudimentary theories, don't they? Like the Othello example, which demonstrated an emergent spatial model of the board.
LLMs are fast like calculators, but it seems the optimization process that produces the parameter weights still yields a kind of "fuzzy semantics", and the Othello emergence is just one example.
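For what it's worth, here is a rough sketch of what "demonstrating an emergent spatial model" means in practice: you train a tiny linear probe to read the board state out of the model's hidden activations. Everything below (the dimensions, the random stand-in data) is a placeholder, not the actual Othello-GPT setup; in the real experiments the activations come from a transformer trained only on move sequences, and the labels are the true board states.

  # Minimal sketch of the linear-probe idea behind the Othello result.
  # Hidden states here are random stand-ins for transformer activations.
  import torch
  import torch.nn as nn

  HIDDEN_DIM = 512          # assumed width of the model's residual stream
  BOARD_CELLS = 64          # 8x8 Othello board
  STATES_PER_CELL = 3       # empty / mine / theirs

  probe = nn.Linear(HIDDEN_DIM, BOARD_CELLS * STATES_PER_CELL)
  opt = torch.optim.Adam(probe.parameters(), lr=1e-3)
  loss_fn = nn.CrossEntropyLoss()

  # Stand-in data: "activations" and ground-truth boards for 1024 positions.
  hidden = torch.randn(1024, HIDDEN_DIM)                          # would be model activations
  board = torch.randint(0, STATES_PER_CELL, (1024, BOARD_CELLS))  # would be true board states

  for _ in range(100):
      opt.zero_grad()
      logits = probe(hidden).view(-1, BOARD_CELLS, STATES_PER_CELL)
      loss = loss_fn(logits.permute(0, 2, 1), board)   # per-cell classification loss
      loss.backward()
      opt.step()

  # If a probe this simple recovers the board far better than chance, the
  # activations linearly encode a "world model" the system was never given.

On random stand-in data the probe learns nothing, of course; the point of the real result is that on actual activations it does.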