
"But he human does not understand that 很好 represents an assertion that he is feeling good"

This is an argument about depth and nuance. A speaker can know:

a) The response fits (observe people say it)

b) Why the response fits, superficially (很 means "very" and 好 means "good")

c) The subtext of the response, both superficially and academically (Chinese people don't actually talk like this in most contexts, it's like saying "how do you do?". The response "very good" is a direct translation of English social norms and is also inappropriate for native Chinese culture. The subtext strongly indicates a non-native speaker with a poor colloquial grasp of the language. Understanding the radicals, etymology and cultural history of each character, related nuance: should the response be a play on 好's radicals of mother/child? etc etc)

The depth of (c) is nigh unlimited. People with an exceptionally strong ability in this area are called poets.

It is possible to simulate all of these things. LLMs are surprisingly good at tone and subtext, and are ever improving in these predictive areas.

Importantly: the translating human need not agree with, or embody, the meaning or subtext of the translation. I say "I'm fine" when I'm not fine literally all the time. It's extremely common for humans alone to say things they don't agree with, and to express things that they don't fully understand. For a great example of this, consider psychoanalysis: an entire field of practice dedicated in large part to helping people understand what they really mean when they say things (Why did you say you're fine when you're not fine? Let's talk about your choices ...). It is extremely common for human beings to go through the motions of communication without being truly aware of what exactly they're communicating, and why. In fact, no one has a complete grasp of category (c).

Particular disabilities can throw these kinds of limited awareness and mimicry in humans into extremely sharp relief.

"And the idea that understanding of how you're feeling - the sentiment conveyed to the interlocutor in Chinese - is synonymous with knowing which bookshelf to find continuations where 很好 has been invoked is far too ludicrous to need addressing."

I don't agree. It's not ludicrous, and as LLMs show, it's merely a matter of having a bookshelf of sufficient size and complexity. That's the entire point!
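To make the "bookshelf" concrete, here's a minimal sketch of the room as pure table lookup. The entries are made up for illustration; the point is that the operator applying the table needs no idea what any symbol means:

```python
# A toy "rulebook": incoming symbols map to a stored continuation.
# The entries below are illustrative, not from any real corpus.
RULEBOOK = {
    "你好": "你好",      # greeting -> greeting
    "你好吗": "很好",    # "how are you?" -> "very good"
    "你吃了吗": "吃了",  # "have you eaten?" -> "eaten"
}

def room_reply(symbols: str) -> str:
    """Return the rulebook's continuation, or a shrug if nothing matches.

    Note there is no model of feelings, politeness, or meaning anywhere
    in this function -- only symbol matching against stored entries.
    """
    return RULEBOOK.get(symbols, "……")

print(room_reply("你好吗"))  # -> 很好
```

The argument is then about scale: whether a vastly larger and more complex version of this table is different in kind, or only in degree.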

Furthermore, this kind of pattern matching is probably how the majority of uneducated people actually communicate. The majority of human beings are reactive; it's our natural state. Mindful, thoughtful communication is a product of intensive training and education, and even then a significant portion of human communication is relatively thoughtless.

It is a fallacy to assume otherwise.

It is also a fallacy to assume that human brains are a single reasoning entity, when it's well established that this is not how brains operate. Freud introduced the rider and horse model for cognition a century ago, and more recent discoveries underscore that the brain cannot be reasonably viewed as a single cohesive thought producing entity. Humans act and react for all sorts of reasons.

Finally, it is a fallacy to assume that humans aren't often parroting language that they've seen others use without understanding what it means. This is extremely common, for example people who learn phrases or definitions incorrectly because humans learn language largely by inference. Sometimes we infer incorrectly and for all "intensive purposes" this is the same dynamic -- if you'll pardon the exemplary pun.

In a discussion about the nature of cognition and understanding as it applies to tools, it makes no sense whatsoever to introduce a hybrid human/tool scenario and then fail to address the possibility that the combined system of a human and their tool has an understanding, even if the small part of the brain dealing with what we call consciousness doesn't incorporate all of that information directly.

"[1]and ironically, I also don't speak Chinese " Ironically I do speak Chinese, although at a fairly basic level (HSK2-3 or so). I've studied fairly casually for about three years. Almost no one says 你好 in real life, though appropriate greetings can be region specific. You might instead to a friend say 你吃了吗?



There's no doubt that people pattern match and sometimes say they're fine reflexively.

But the point is that the human in the Room can never do anything else or convey his true feelings, because it doesn't know the correspondence between 好 and a sensation or a sequence of events or a desire to appear polite, merely the correspondence between 好 and the probability of using or not using other tokens later in the conversation (and he has to look that bit up). He is able to discern nothing in your conversation typology below (a), and he doesn't actually know (a), he's simply capable of following non-Chinese instructions to look up a continuation that matches (a). The appearance to an external observer of having some grasp of (b) and (c) is essentially irrelevant to his thought processes, even though he actually has thought processes and the cards with the embedded knowledge of Chinese don't have thought processes.
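That notion — knowing only "the correspondence between 好 and the probability of using or not using other tokens later" — can be illustrated with a toy bigram table. The mini-corpus below is invented for the example; the counts encode co-occurrence only, and nothing in them links 好 to a sensation, an event, or a desire to be polite:

```python
from collections import Counter, defaultdict

# A handful of made-up token sequences standing in for the cards' corpus.
corpus = [
    ["你", "好", "吗"],
    ["我", "很", "好"],
    ["很", "好"],
    ["你", "好"],
]

# Count how often each token follows each other token.
bigrams = defaultdict(Counter)
for sentence in corpus:
    for prev, nxt in zip(sentence, sentence[1:]):
        bigrams[prev][nxt] += 1

def most_likely_next(token: str) -> str:
    """Pick the continuation seen most often after `token` in the corpus."""
    return bigrams[token].most_common(1)[0][0]

print(most_likely_next("很"))  # -> 好
```

Whether such correspondence statistics can ever amount to understanding is exactly what the two sides of this thread dispute; the sketch only shows what the man in the room has access to.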

And, no it is still abso-fucking-lutely ludicrous to conclude that just because humans sometimes parrot, they aren't capable of doing anything else[1]. If humans don't always blindly pattern match conversation without any interaction with their actual thought processes, then clearly their ability to understand "how are you" and "good" is not synonymous with the "understanding" of a person holding up 好 because a book suggested he hold that symbol up. Combining the person and the book as a "union" changes nothing, because the actor still has no ability to communicate his actual thoughts in Chinese, and the book's suggested outputs to pattern match Chinese conversation still remain invariant with respect to the actor's thoughts.

An actual Chinese speaker could choose to pick the exact same words in conversation as the person in the room, though they would tend to know (b) and some of (c) when making those word choices. But they could also choose to communicate other things, intentionally.

[1]That's the basic fallacy the "synonymous" argument rests on, though I'd also disagree with your assertions about education level. Frankly it's the opposite: ask a young child how they are and they think about whether their emotional state is happy or sad or angry or waaaaaaahh, use whatever facility with language they have to convey it, and often spontaneously emit their thoughts. A salesperson who's well versed in small talk and positivity will reflexively, for the 33rd time today, give an assertive "fantastic, and how are yyyyou?" without regard to his actual mood, and ask questions structured around previous interactions (though a tad more strategically than an LLM...).


"But the point is that the human in the Room can never do anything else"

I disagree. I think the point is that the union of the human and the library can in fact do all of those things.

The fact that the human in isolation can't is as irrelevant as pointing out that a book in isolation (without the human) can't either. It's a fundamental mistake in reasoning about the problem.

"And, no it is still abso-fucking-lutely ludicrous to conclude that just because humans sometimes parrot, they aren't capable of doing anything else"

Why?

What evidence do you have that humans aren't the sum of their inputs?

What evidence do you have that "understanding" isn't synonymous with "being able to produce a sufficient response?"

I think this is a much deeper point than you realize. It is possible that the very nature of consciousness centers around this dynamic; that evolution has produced systems which are able to determine the next appropriate response to their environment.

Seriously, think about it.


> I disagree. I think the point is that the union of the human and the library can in fact do all of those things.

No, the "union of the human and the library" can communicate only the set of responses a programmer, who is not part of the room, made a prior decision to make available. (The human can also choose to refuse to participate, or hold up random symbols but this fails to communicate anything). If the person following instructions on which mystery symbols to select ends up convincing an external observer they are conversing with an excitable 23 year old lady from Shanghai, that's because the programmer provided continuations including those personal characteristics, not because the union of a bored middle aged non-Chinese bloke and lots and lots of paper understands itself to be an excitable 23 year old lady from Shanghai.

Seriously, this is madness. If I follow instructions to open a URL which points to a Hitler speech, it means I understood how to open links, not that the union of me and YouTube understands the imperative of invading Poland!

> The fact that the human in isolation can't is as irrelevant as pointing out that a book in isolation (without the human) can't either. It's a fundamental mistake in reasoning about the problem.

Do you take this approach to other questions of understanding? If somebody passes a non-Turing test by diligently copying the answer sheet, do you insist that the exam result accurately represents the understanding of the union of the copyist and the answer sheet, and people questioning whether the copyist understood what they were writing are quibbling over irrelevances?

The reasoning is very simple: if a human can convincingly simulate understanding simply by retrieving answers from storage media, it stands to reason that a running program can do so too, perhaps with even less reason to guess what real-world phenomena the symbols refer to. It's an illustrative example of how patterns can be matched without cognisance of the implications of those patterns.

Inventing a new kind of theoretical abstraction, the "union of person and storage media", and insisting that understanding can be shared between a piece of paper and a person who can't read the words on it, seems like a pretty unconvincing way to reject that claim. But hey, maybe the union of me and the words you wrote thinks differently?!

> I think this is a much deeper point than you realize. It is possible that the very nature of consciousness centers around this dynamic; that evolution has produced systems which are able to determine the next appropriate response to their environment.

It's entirely possible, probable even, that the very nature of consciousness centres on the ability to respond to an environment. But a biological organism's environment consists of interacting with the physical world via multiple senses, a whole bunch of chemical impulses called emotions, and millions of years of evolving to survive in that environment, as well as an extremely lossy tokenised abstract representation of some of those inputs used for communication purposes. Irrespective of whether a machine can "understand" in some meaningful sense, it stretches credulity to assert that the "understanding" of a computer program whose inputs consist solely of lossy tokens is anything like, let alone "synonymous" with, the understanding of the more complex organism that navigates lots of other stuff.



