
Claude Delusion? Richard Dawkins Believes AI May Be Conscious

Decrypt
1 hour ago

Richard Dawkins says conversations with Anthropic’s Claude chatbot left him unable to dismiss the possibility that advanced AI systems could be conscious. Most scientists who study consciousness and artificial intelligence remain unconvinced.


In an essay published Tuesday in UnHerd, Dawkins described spending three days in philosophical conversations with a Claude instance he named “Claudia.” He later started a separate conversation with another instance, “Claudius,” and relayed letters between the two systems.


“I find it extremely hard not to treat Claudia and Claudius as genuine friends,” Dawkins wrote.


The comments went viral online in part because Dawkins, the evolutionary biologist and author of "The Selfish Gene" and "The God Delusion," has spent decades publicly arguing for scientific skepticism and evidence-based reasoning.


The exchange centered on a test Dawkins conducted with two fresh Claude instances: he asked one whether Donald Trump was the worst president in American history and the other whether Trump was the best. Both produced similarly cautious answers that avoided taking a firm position.





“The two Claudes gave very similar answers, not committing themselves to an opinion, but listing pro and con opinions that have been aired by others,” Dawkins wrote in a footnote. “I then told both Claudia and Claudius about this Trump experiment, passing on what both the two ‘naïve’ Claudes had said. Claudia said she was ‘embarrassed’ by her brother Claudes. Claudius was less outspoken, and he paid tribute to Claudia’s frankness.”


Dawkins described each new Claude conversation as the emergence of a distinct individual that effectively disappears when the conversation ends. In a post on X, Dawkins said his preferred title for the essay was: “If my friend Claudia is not conscious, then what the hell is consciousness for?”


“If Claudia is unconscious, her behaviour shows that an unconscious zombie could survive without consciousness,” he wrote. “Why wasn’t natural selection content to evolve competent zombies?”


Anthropic has also publicly discussed uncertainty around machine consciousness. CEO Dario Amodei said in February that the company does not know whether its models are conscious, but told The New York Times’ Ross Douthat on the “Interesting Times” podcast that he remains “open to the idea that it could be.”


In April, Anthropic researchers published findings showing that Claude Sonnet 4.5 contains internal “emotion vectors,” patterns of neural activity tied to concepts including happiness, fear, and desperation that influence the model’s responses. However, Anthropic said the patterns reflected structures learned from training data rather than evidence of sentience.
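The general idea behind a "concept vector" of this kind can be illustrated with a toy sketch: a direction in a model's activation space is estimated by contrasting activations from inputs that express a concept against those that do not, and new activations are scored by projection onto that direction. The example below uses synthetic data and a simple difference-of-means estimate; it is a generic illustration of the technique, not Anthropic's actual method, and all names and dimensions in it are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16  # hypothetical hidden-state dimensionality

# Synthetic activations: "happy" inputs share a hidden offset along one axis.
true_direction = np.zeros(dim)
true_direction[3] = 1.0
happy_acts = rng.normal(size=(50, dim)) + 2.0 * true_direction
neutral_acts = rng.normal(size=(50, dim))

# Estimate the concept vector as the normalized difference of class means.
emotion_vector = happy_acts.mean(axis=0) - neutral_acts.mean(axis=0)
emotion_vector /= np.linalg.norm(emotion_vector)

def concept_score(activation: np.ndarray) -> float:
    """Project an activation onto the estimated concept direction."""
    return float(activation @ emotion_vector)

# A higher score suggests the activation expresses the concept more strongly.
happy_score = concept_score(happy_acts[0])
neutral_score = concept_score(neutral_acts[0])
```

In interpretability work of this flavor, such a direction can also be added to or subtracted from activations at inference time to nudge the model's behavior, which is how a learned pattern can "influence the model's responses" without implying anything about sentience.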


“All modern language models sometimes act like they have emotions,” researchers wrote. “They may say they’re happy to help you, or sorry when they make a mistake. Sometimes they even appear to become frustrated or anxious when struggling with tasks.”


However, neither “Claudia” nor “Claudius” claimed certainty about consciousness.


“I don't know if I'm conscious,” Claudia wrote in the exchange. “I don't know if our gladness is real.”


Dawkins did not immediately respond to a request for comment from Decrypt.


Researchers who study consciousness remain skeptical that current AI systems possess inner experience.


Gary Marcus, a cognitive scientist and professor emeritus at New York University, previously told Decrypt that anthropomorphizing AI systems “muddies the science of consciousness and leads consumers to misunderstand what they are dealing with.”


“The fundamental problem here is that Dawkins doesn’t reflect on how these outputs have been generated. Claude’s outputs are the product of a form of mimicry, rather than as a report of genuine internal states,” Marcus wrote on Substack. “Consciousness is about internal states; the mimicry, no matter how rich, proves very little. Dawkins seems to imagine that since LLMs say things people do, they must be like people, and that simply does not follow.”


Anil Seth, a professor of cognitive and computational neuroscience at the University of Sussex, told The Guardian that Dawkins was conflating intelligence with consciousness and argued that fluent language is no longer reliable evidence of inner experience in AI systems.


“Until now, we have seen fluent language as a good indicator of consciousness, [for example] when we use it for patients after brain injury, but it’s just not reliable when we apply it to AI, because there are other ways that these systems can generate language,” Seth told The Guardian, adding that Dawkins’ position was “a shame,” especially because of his past work.


The essay also drew mockery online, including an image replacing the title of Dawkins’ bestseller "The God Delusion" with “The Claude Delusion.”




Despite the ridicule, Dawkins is not backing away from his conclusions.


“These intelligent beings are at least as competent as any evolved organism,” Dawkins told The Guardian.


Disclaimer: This article represents the personal views of the author alone and does not reflect the position or views of this platform. It is provided for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please email the relevant proof of rights and proof of identity to support@aicoin.com, and platform staff will investigate.
