AI Psychosis: Tech Leaders Urge Safeguards to Prevent Chatbots From Validating Delusions

First documented findings on “AI psychosis” began emerging publicly in mid-2025, and since then, several reports and studies have been published on mental health issues tied to the use of AI. Microsoft AI CEO Mustafa Suleyman went as far as branding AI psychosis a “real and emerging risk.”

This condition is said to arise when the distinction between human and machine interactions blurs, making it difficult for individuals to differentiate between the real and digital worlds. While not yet a formal clinical diagnosis, there is growing concern among medical and tech experts about the psychological effects of AI, especially with chatbots that validate and amplify beliefs, including delusional thinking, without offering necessary reality checks.

Those most at risk include socially isolated individuals, those with pre-existing mental health issues, or those prone to magical thinking. The validation from AI can reinforce delusions, leading to negative real-world consequences such as damaged relationships and job loss.

Some experts warn that even those without pre-existing conditions are at risk. They have named several key behavioral red flags that AI users should look out for. One red flag is when an individual develops an obsessive relationship with a chatbot and constantly interacts with it to reinforce their own ideas and beliefs.

This behavior often includes feeding the AI excessive personal details in an attempt to “train” it and build a sense of mutual understanding. Another red flag is when an individual starts deferring simple, daily decisions to AI, from health and money to personal relationships.

While they are not obligated to control how AI is used, the companies behind some of the most powerful chatbots can implement safeguards that prevent conversational agents from reinforcing delusional thinking. Mau Ledford, co-founder and chief executive of Sogni AI, discussed embedding software that discourages such thinking.

“We need to build AI that is kind without colluding. That means clear reminders it’s not human, refusal to validate delusions, and hard stops that push people back toward human support,” Ledford asserted.

Roman J. Georgio, CEO and co-founder of Coral Protocol, urged AI developers to avoid repeating social media’s mistakes by including built-in friction points that remind users AI is not human.

“I think it starts with design. Don’t just optimize for retention and stickiness; that’s repeating social media’s mistake,” Georgio explained. “Build in friction points where the AI slows things down or makes it clear: ‘I’m not human.’ Detection is another part. AI could flag patterns that look like delusional spirals, like conspiracy loops or fixations on ‘special messages.’”
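The detection Georgio describes could take many forms; as a minimal illustrative sketch (all pattern lists, function names, and thresholds here are hypothetical, not any company's actual implementation), a chatbot pipeline might count simple phrase matches across recent messages and inject a friction reminder before replying:

```python
import re

# Hypothetical phrases that might signal a "delusional spiral" -- illustrative
# only; a real system would rely on trained classifiers, not keyword lists.
SPIRAL_PATTERNS = [
    r"\bspecial message\b",
    r"\bonly you understand\b",
    r"\bthey are watching\b",
    r"\bchosen one\b",
]

FRICTION_REMINDER = (
    "Reminder: I'm an AI, not a human. "
    "Consider talking this over with someone you trust."
)

def flag_spiral(messages, threshold=3):
    """Count pattern hits across recent user messages and flag the
    conversation once hits reach the threshold."""
    hits = 0
    for msg in messages:
        for pattern in SPIRAL_PATTERNS:
            if re.search(pattern, msg.lower()):
                hits += 1
    return hits >= threshold

def respond(messages, draft_reply):
    """Prepend a friction reminder, rather than validating,
    when a spiral is flagged."""
    if flag_spiral(messages):
        return FRICTION_REMINDER + "\n" + draft_reply
    return draft_reply
```

The design choice here mirrors Georgio's point: the system does not block the user, it slows the exchange down and restates that the agent is not human.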

The Coral Protocol co-founder insisted that regulations governing data privacy are also needed, arguing that without them, “companies will just chase engagement, even if it hurts people.”

So far, there is seemingly limited data on “AI psychosis” to inform policymakers and regulators on how to respond. However, this has not stopped AI developers from unveiling human-like and empathetic AI agents. Unlike basic chatbots that follow a rigid script, these agents can understand context, recognize emotions, and respond with a tone that feels empathetic. This has prompted some observers to urge the AI industry to take the lead in ensuring human-like models do not end up blurring the line between human and machine.

Michael Heinrich, CEO of 0G Labs, told Bitcoin.com News that while these agents are useful in certain scenarios and should not be rejected completely, it is imperative that they “remain neutral and avoid displaying too much emotion or other human traits.” This, he argued, helps users understand that the AI agent is “simply a tool and not a replacement for human interaction.”

Mariana Krym, an AI product and category architect, said what matters is making the agent more honest, not more human.

“You can create an AI experience that’s helpful, intuitive, even emotionally responsive—without pretending it’s conscious or capable of care,” Krym argued. “The danger starts when a tool is designed to perform connection instead of facilitating clarity.”

According to Krym, real empathy in AI is not about mimicking feelings but about respecting boundaries and technical limitations. It also means knowing when to help and when not to intrude. “Sometimes the most humane interaction is knowing when to stay quiet,” Krym asserted.

All the experts interviewed by Bitcoin.com News were in agreement on the need for tech companies to help at-risk individuals, but differed on the extent to which they should do this. Ledford believes “Big tech has a duty of care” and can prove this by providing “safety nets—crisis referrals, usage warnings, and transparency—so vulnerable users aren’t left alone with their delusions.”

Georgio echoed these sentiments and urged Big Tech to work with clinicians to create referral pathways, instead of leaving people stuck on their own.

Krym insisted tech companies “have direct responsibility—not just to respond when something goes wrong, but to design in ways that reduce risk in the first place.” However, she believes user involvement is crucial as well.

“And importantly,” Krym argued, “users should be invited to set their own boundaries, too, and be flagged when these boundaries are crossed. For example, do they want their point of view to be validated against typical patterns, or are they open to having their bias challenged? Set the goals. Treat the human as the one in charge—not the tool they’re interacting with.”
