
> The Chinese Room argument is a great thought experiment for understanding why the computational model is an inadequate explanation of consciousness and qualia.

To be as accurate as possible with respect to the primary source [0], the Chinese room thought experiment was devised as a refutation of "strong AI," or the position that

    the appropriately programmed computer really is a mind, in the
    sense that computers given the right programs can be literally
    said to understand and have other cognitive states.
Searle's position?

    Rather, whatever purely formal principles you put into the
    computer, they will not be sufficient for understanding, since
    a human will be able to follow the formal principles without
    understanding anything. [...] I will argue that in the literal
    sense the programmed computer understands what the car and the
    adding machine understand, namely, exactly nothing.
[0] https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...


Nilsson's complaint that Searle is conflating the running program with the underlying system/interpreter that runs it seems accurate.
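
A toy sketch of the distinction Nilsson is pointing at (my own illustration, not from either paper; the rule table, names, and symbols are made up): the "program" is just a formal symbol-to-symbol mapping, while the interpreter is the separate layer that executes it, which is the role Searle himself plays inside the room.

    # Illustration only (not from Searle or Nilsson); rules and symbols are invented.
    # The "program" is a purely formal mapping over symbols, with no semantics attached.
    RULE_BOOK = {
        "你好": "你好，很高兴认识你",
        "你是谁": "我是一个房间",
    }

    def room_program(symbol: str) -> str:
        """The program: formal symbol manipulation, nothing more."""
        return RULE_BOOK.get(symbol, "请再说一遍")

    def interpreter(program, symbol: str) -> str:
        """The system that executes the program (Searle in the room, or a CPU).
        Nilsson's point: what this layer does or doesn't understand is a
        separate question from what holds of the program it runs."""
        return program(symbol)

    print(interpreter(room_program, "你好"))

On this framing, Searle in the room occupies the interpreter slot, so showing that he understands nothing speaks to that layer rather than to the program he is executing.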



