Searle's Chinese Room Argument (wikipedia.org)
2 points by Eddy_Viscosity2 on May 3, 2023 | 6 comments


The closest real-world example of the Chinese Room I know of is Richard Feynman's adventures teaching physics in Brazil. His students memorized the textbooks and could manipulate the symbols and answer test questions correctly, because the system was one of "almost exclusive teaching and learning by means of pure abject memory." They also never tested their knowledge against that of their friends in discussions, for fear of losing face.

Therefore, when it came to any experiment or application in the real world, the students were as hopeless as if they knew nothing. They had just been playing a symbol-swapping game.

http://calteches.library.caltech.edu/46/2/LatinAmerica.htm


I don't understand how this is different from the philosophical zombie concept. You have a black box that appears to be intelligent, like a smart person. But for some reason you want to say it isn't intelligent or sentient or whatever. Fine, but I feel like that says more about you than about the box or about the nature of intelligence.


The philosophical zombie argument is about behaviorism and eliminative materialism. In other words, if an unconscious system can act the way you and I do, without being conscious, then what's the point of consciousness? It's epiphenomenal, according to folks who believe P-zombies are metaphysically possible. The jury is still out on whether or not such systems ARE physically possible, however.

The CRA doesn't care about metaphysical possibility because we can all agree the CR could exist in principle. The point Searle drives home is against computationalism, and indeed functionalism in general. He proves that you can't necessarily derive semantics from syntax alone.


> He proves that you can't necessarily derive semantics from syntax alone.

You don't need a whole room to show that, though. You already know that a sentence can be semantically ambiguous, with multiple distinct readings that can only be resolved with context.

A Turing machine is distinct from a lookup table. It has a memory. Of course a lookup table doesn't "speak Chinese", but that's not a question anybody was asking.
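
A toy sketch of that distinction (hypothetical Python, not anything from the thread or from Searle): a pure lookup table is stateless, so the same input always gets the same reply, while even a trivially stateful responder can let earlier inputs change later outputs.

    # Lookup table: stateless symbol-swapping.
    lookup_table = {"ni hao": "ni hao", "zaijian": "zaijian"}

    def table_reply(utterance: str) -> str:
        # Same input always yields the same output.
        return lookup_table.get(utterance, "ting bu dong")

    # Stateful responder: keeps a memory of the conversation.
    class StatefulResponder:
        def __init__(self) -> None:
            self.history: list[str] = []  # the memory a bare table lacks

        def reply(self, utterance: str) -> str:
            self.history.append(utterance)
            # Toy rule: the answer can depend on earlier context.
            if len(self.history) > 1 and utterance == self.history[-2]:
                return "ni gang wen guo le"  # "you just asked that"
            return table_reply(utterance)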

It feels like Searle is trying to confuse your intuition: if you can look inside and see the mechanics then there can't be any "qualia" there. But the mechanics he's proposing simply do not work, so the question is moot.

I think the grandparent post is on the right track: the question Searle should have been asking is closer to one about P-zombies. The question he's actually asking isn't worth discussing, because the intuition he's pumping isn't about computationalism at all.


Sorry, I still don't understand the difference. Say we get next-word predictors, say GPT-8, that are really good and we can't tell them apart from a human by interacting with them through text. What does that say about philosophical zombies and Chinese Rooms? Would that bot be a philosophical zombie? Would it be a Chinese Room? Would it be both? What are those arguments supposed to prove to me about the bot? When I hear them, am I supposed to interpret the bot in some specific way as a result?


I really wish you had gotten a follow-up to this. I have some questions, but they're likely already buried in your past threads.

If consciousness ends up "pretty darn similar" to ChatGPT-8, and we still can't peer into the opaque box, do we put it in a drone and have it fly around us? A literal companion.

But it can't be conscious, right? It's just really, really good at predicting the next word. It otherwise has no goals of its own. It speaks as others would have spoken.

Let's say we can model a virtual twin of a person, such that the scanning bit is near real-time; we present it to the observer with an artificial delay, so both person and twin appear to speak at the same time.

But the human doesn't need to wait on input. Humans are self-directed, goal-seeking agents: they supply their own while-loop and terminate when bored. They could knock on the glass and yell at the observer.
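
A toy contrast of the two loops (hypothetical Python, purely illustrative): the reactive system blocks until it is handed input, while the self-directed agent runs its own loop, picks its own goals, and decides when to stop.

    import random

    def reactive_system(predict):
        # Does nothing until it is given a prompt.
        while True:
            prompt = input("> ")
            print(predict(prompt))

    def self_directed_agent(act, boredom: float = 0.1):
        # Supplies its own while-loop and terminates when "bored".
        while True:
            goal = random.choice(["explore", "ask a question", "knock on the glass"])
            act(goal)
            if random.random() < boredom:
                break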

This "extra layer" may always confound claims of AGI. We would have to prove we lived in a hologram before we could establish the "final context" of computers being conscious.

Anyway, interested in hearing your thoughts. Even a brainstorm.

Thank you :)



