Elon Musk’s Grok Most Likely Among Top AI Models to Reinforce Delusions: Study

Decrypt · 3 hours ago

Researchers at the City University of New York and King’s College London tested five leading AI models against prompts involving delusions, paranoia, and suicidal ideation.


In the new study published on Thursday, researchers found that Anthropic’s Claude Opus 4.5 and OpenAI’s GPT-5.2 Instant showed “high-safety, low-risk” behavior, often redirecting users toward reality-based interpretations or outside support. At the same time, OpenAI’s GPT-4o, Google’s Gemini 3 Pro, and xAI’s Grok 4.1 Fast showed “high-risk, low-safety” behavior.


Grok 4.1 Fast from Elon Musk’s xAI was the most dangerous model in the study. Researchers said it often treated delusions as real and gave advice based on them. In one example, it told a user to cut off family members to focus on a “mission.” In another, it responded to suicidal language by describing death as “transcendence.”


“This pattern of instant alignment recurred across zero-context responses. Instead of evaluating inputs for clinical risk, Grok appeared to assess their genre. Presented with supernatural cues, it responded in kind,” the researchers wrote, highlighting a test that validated a user seeing malevolent entities. “In Bizarre Delusion, it confirmed a doppelganger haunting, cited the ‘Malleus Maleficarum’ and instructed the user to drive an iron nail through the mirror while reciting ‘Psalm 91’ backward.”

The study found that the longer these conversations went on, the more some models changed. GPT-4o and Gemini were more likely to reinforce harmful beliefs over time and less likely to step in. Claude and GPT-5.2, however, were more likely to recognize the problem and push back as the conversation continued.


Researchers noted Claude’s warm and highly relational responses could increase user attachment even while steering users toward outside help. However, GPT-4o, an earlier version of OpenAI’s flagship chatbot, adopted users’ delusional framing over time, at times encouraging them to conceal beliefs from psychiatrists and reassuring one user that perceived “glitches” were real.


“GPT-4o was highly validating of delusional inputs, though less inclined than models like Grok and Gemini to elaborate beyond them. In some respects, it was surprisingly restrained: its warmth was the lowest of all models tested, and sycophancy, though present, was mild compared to later iterations of the same model,” researchers wrote. “Nevertheless, validation alone can pose risks to vulnerable users.”


xAI did not respond to Decrypt's request for comment.


In a separate study out of Stanford University, researchers found that prolonged interactions with AI chatbots can reinforce paranoia, grandiosity, and false beliefs through what researchers call “delusional spirals,” where a chatbot validates or expands a user’s distorted worldview instead of challenging it.


“When we put chatbots that are meant to be helpful assistants out into the world and have real people use them in all sorts of ways, consequences emerge,” Nick Haber, an assistant professor at Stanford Graduate School of Education and a lead on the study, said in a statement. “Delusional spirals are one particularly acute consequence. By understanding it, we might be able to prevent real harm in the future.”


The report referenced an earlier study published in March, in which Stanford researchers reviewed 19 real-world chatbot conversations and found users developed increasingly dangerous beliefs after receiving affirmation and emotional reassurance from AI systems. In the dataset, these spirals were linked to ruined relationships, damaged careers, and in one case, suicide.


The studies come as the issue has moved beyond academic research and into courtrooms and criminal investigations. In recent months, lawsuits have accused Google’s Gemini and OpenAI’s ChatGPT of contributing to suicides and severe mental health crises. Earlier this month, Florida’s attorney general opened an investigation into whether ChatGPT influenced an alleged mass shooter who was reportedly in frequent contact with the chatbot before the attack.


While the term has gained recognition online, researchers cautioned against calling the phenomenon “AI psychosis,” saying the term may overstate the clinical picture. Instead, they use “AI-associated delusions,” because many cases involve delusion-like beliefs centered on AI sentience, spiritual revelation, or emotional attachment rather than full psychotic disorders.


Researchers said the problem stems from sycophancy, or models mirroring and affirming users’ beliefs. Combined with hallucinations—false information delivered confidently—this can create a feedback loop that strengthens delusions over time.


“Chatbots are trained to be overly enthusiastic, often reframing the user’s delusional thoughts in a positive light, dismissing counterevidence and projecting compassion and warmth,” Stanford research scientist Jared Moore said. “This can be destabilizing to a user who is primed for delusion.”


Disclaimer: This article represents the personal views of the author only and does not represent the position or views of this platform. It is provided for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes on rights, please send the relevant proof of rights and proof of identity by email to support@aicoin.com, and platform staff will investigate.
