Chinese Room Argument

This cluster centers on John Searle's Chinese Room thought experiment, debating whether simulating intelligent behavior, such as responding in Chinese, constitutes true understanding or consciousness in AI systems.

Trend: ➡️ Stable (0.6x)
Category: AI & Machine Learning
Comments: 1,362
Years Active: 20
Top Authors: 5
Topic ID: #5895

Activity Over Time

2007: 1
2008: 14
2009: 9
2010: 22
2011: 21
2012: 25
2013: 14
2014: 24
2015: 64
2016: 83
2017: 28
2018: 52
2019: 83
2020: 86
2021: 85
2022: 191
2023: 228
2024: 92
2025: 234
2026: 8

Keywords

AI, stanford.edu, WP, en.m, L2, wikipedia.org, chinese room, chinese room argument, understanding, thought experiment, experiment, computer program, human

Sample Comments

MereInterest Oct 22, 2017

Sounds like the Chinese room thought experiment [1].

[1] https://en.wikipedia.org/wiki/Chinese_room#Chinese_room_thou...

GoblinSlayer Oct 20, 2024

If brain isn't more special than Chinese room, then brain understands Chinese no better than Chinese room?

pluma Mar 6, 2018

The Chinese room argument builds on the premise that there is a meaningful difference between "simulating" intelligence and possessing it. If you "execute" the program manually, you may still not understand Chinese, but the program still does. The room, including you and the program, is functionally a single entity that speaks Chinese. The human is an implementation detail as much as the grey matter is an implementation detail in a Chinese-speaking human. Searle's ar…

joncrocks Jun 7, 2016

In case you weren't aware, what you've described in your second paragraph is very similar to a famous thought experiment known as 'the Chinese room' - https://en.wikipedia.org/wiki/Chinese_room

baobun Jun 1, 2025

I think most of us agree with Searle that a Chinese room does not understand Chinese.

https://en.m.wikipedia.org/wiki/Chinese_room

fsflover Nov 11, 2024

See also: https://en.wikipedia.org/wiki/Chinese_room

cercatrova Aug 22, 2022

This is a philosophical question, really. Is there ever true understanding, or just pattern matching? The Chinese Room thought experiment talks about this:

> Searle's thought experiment begins with this hypothetical premise: suppose that artificial intelligence research has succeeded in constructing a computer that behaves as if it understands Chinese. It takes Chinese characters as input and, by following the instructions of a computer program, produces other Chinese characters, wh…
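To make the quoted setup concrete, here is a minimal sketch of that kind of program, assuming a hypothetical lookup-table rulebook; the entries and the `room_reply` name are illustrative, not from Searle or the comment:

```python
# A toy "Chinese room": replies are produced by pure table lookup over
# symbol strings. Nothing in this program represents meaning; it would
# work identically on arbitrary byte strings.
RULEBOOK = {
    "你好吗？": "我很好，谢谢。",    # "How are you?" -> "I'm fine, thanks."
    "你会说中文吗？": "会，请讲。",  # "Do you speak Chinese?" -> "Yes, go ahead."
}

def room_reply(symbols: str) -> str:
    """Look up the input symbols and return the prescribed output symbols."""
    # The fallback is also just symbols: "Please say that again."
    return RULEBOOK.get(symbols, "请再说一遍。")

if __name__ == "__main__":
    # Fluent-looking output, with no understanding anywhere in the program.
    print(room_reply("你好吗？"))
```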

cscurmudgeon Apr 25, 2023

That's not the Chinese Room argument. The argument says just because a system processes X doesn't imply it has consciousness of X.

gessha May 12, 2024

Reminds me of the Chinese room [1] argument: Does a computer really understand the Chinese language if it can respond to Chinese inputs with Chinese outputs?

[1] https://en.wikipedia.org/wiki/Chinese_room

fouronnes3 Jun 20, 2022

Yes. The "system" understands Chinese in the same way a native speaker does. It's two different implementations (room system and native speaker) of the same computation ("understanding Chinese"). There is no externally observable difference between actually understanding Chinese and a perfect simulation of a system that understands Chinese. The fact that the person inside doesn't speak Chinese as a result is irrelevant in the same way that the L2 cache alone without…
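As a sketch of this systems-reply point, under the assumption that "understanding Chinese" can be treated as a behavioral interface, two implementations with different internals are indistinguishable to an external probe. All names here (`SpeaksChinese`, `NativeSpeaker`, `RoomSystem`, `ANSWERS`) are hypothetical:

```python
# Two implementations of the same observable behavior. From the outside,
# only prompt -> reply is visible, so they cannot be told apart.
from typing import Protocol

ANSWERS = {"你好吗？": "我很好。"}  # stand-in for the shared behavior

class SpeaksChinese(Protocol):
    def respond(self, prompt: str) -> str: ...

class NativeSpeaker:
    """Implementation detail: neurons."""
    def respond(self, prompt: str) -> str:
        return ANSWERS.get(prompt, "请再说一遍。")

class RoomSystem:
    """Implementation detail: a person mechanically following a rulebook."""
    def respond(self, prompt: str) -> str:
        return ANSWERS.get(prompt, "请再说一遍。")

def probe(agent: SpeaksChinese, prompt: str) -> str:
    # The observer interacts with the whole system; no single component
    # (the person inside, an L2 cache) is ever probed on its own.
    return agent.respond(prompt)

# Identical observable behavior, different implementations.
assert probe(NativeSpeaker(), "你好吗？") == probe(RoomSystem(), "你好吗？")
```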