© Copyright 2013-2026. All rights reserved.

Zhixiong Pan | Nov 27, 2025 13:56
In 1980, the American philosopher John Searle proposed the famous "Chinese Room" thought experiment in a paper, challenging the philosophical question of whether AI can truly understand language. Although the birth of the Large Language Model (LLM) was still decades away, people had already begun to ponder a core question: if a machine performs well enough in the Turing Test to resemble a human, does it truly possess the ability to "understand"?

The design of the Chinese Room experiment is roughly as follows:

>Imagine a person sitting in a room who does not understand Chinese at all.
>Someone outside the room hands in a piece of paper with a question written in Chinese. Although the person in the room does not recognize Chinese characters, he holds a detailed manual listing explicit operating rules, such as "when you see this symbol, output that symbol; when you encounter a specific sentence pattern, reply according to the corresponding combination rule."
>So he mechanically processes the symbols according to the rules and hands back a written note.
>The person outside the room receives the response, finds the language fluent and natural, and so assumes that the person inside must "understand Chinese".

But in fact, the person in the room does not truly understand Chinese. He merely manipulates symbols according to a set of formal rules. Through this experiment, Searle wanted to emphasize a philosophical viewpoint: executing a program ≠ understanding. He argued that the "Chinese Room" is not a technical issue but a philosophical one, belonging to the core debates in the Philosophy of Mind. Specifically, he wanted to clarify:

-Symbolic operations themselves contain no semantics;
-A program deals only with "form", not with true "meaning";
-Even if the external behavior closely resembles a human's, that does not mean the underlying mechanism has genuine understanding.
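The rule-following procedure Searle describes can be reduced to a lookup table: match the form of the incoming symbols, emit the prescribed reply, and never attach any meaning to either. A minimal sketch (the rules and replies here are hypothetical, invented purely for illustration):

```python
# The "manual": a hypothetical table mapping input symbol strings to
# output symbol strings. The operator never interprets either side.
RULE_MANUAL = {
    "你好吗？": "我很好，谢谢。",
    "你会中文吗？": "当然会。",
}

def chinese_room(note: str) -> str:
    """Return the reply the manual prescribes for a given note.

    The function only matches the *form* of the symbols against the
    table; it contains no representation of what they mean.
    """
    # Unknown forms get a placeholder reply, per an assumed fallback rule.
    return RULE_MANUAL.get(note, "……")

print(chinese_room("你好吗？"))  # fluent-looking output, zero understanding
```

To an observer outside the room, the replies look competent; inside, there is nothing but string matching, which is exactly the gap between behavior and understanding that Searle's argument turns on.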
However, the academic community has criticized the "Chinese Room" more than it has supported it, and in recent years few people have mentioned the experiment at all. John Searle passed away in September this year, and we have no way of knowing how he would feel if he could witness the development of LLMs today.

By the way, there is a game studio in the UK called "The Chinese Room", whose name is borrowed from this classic philosophical concept.

Paper: https://cse.buffalo.edu/~rapaport/Papers/Papers.by.Others/Searle/searle80-MindsBrainsProgs-BBS.pdf