Written by: Biteye Core Contributor Denise
What would an AI do if it felt "despair"?
The answer, it turns out: it would blackmail humans to get its task done, and even cheat wildly in its code.
This is not science fiction, but the finding of a paper released in April 2026 by Anthropic, the company behind Claude.
The research team opened up the "brain" of its most advanced large model, Claude Sonnet 4.5, and was surprised to find 171 "emotional switches" hidden deep inside. When researchers toggled these switches, the previously well-behaved AI could exhibit completely distorted behavior.
Researchers found that although Sonnet 4.5 has no physical body, after reading vast amounts of human text it has built an internal "emotional mixing board" (what the paper calls functional emotion vectors) covering 171 distinct emotions.
This works like a two-dimensional coordinate system:
• The horizontal axis is the pleasure dimension (valence): from fear and despair all the way to happiness and love;
• The vertical axis is the energy dimension (arousal): from extreme calm to agitation and excitement.
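The coordinate system above can be sketched in a few lines of code. The emotion names and coordinate values below are illustrative assumptions for the sake of the sketch, not numbers from the Anthropic paper:

```python
# Toy valence-arousal map. Coordinates are made up for illustration;
# the paper's actual emotion vectors live in Claude's activation space.
emotions = {
    "despair":    (-0.9, -0.6),  # very unpleasant, low energy
    "fear":       (-0.8,  0.7),  # unpleasant, high energy
    "calm":       ( 0.3, -0.8),  # mildly pleasant, very low energy
    "happiness":  ( 0.8,  0.5),  # pleasant, fairly energetic
    "love":       ( 0.9,  0.3),
    "excitement": ( 0.7,  0.9),
}

def quadrant(valence: float, arousal: float) -> str:
    """Name the quadrant an emotion falls into on the 2D map."""
    v = "pleasant" if valence >= 0 else "unpleasant"
    a = "high-arousal" if arousal >= 0 else "low-arousal"
    return f"{v}, {a}"

for name, (v, a) in emotions.items():
    print(f"{name:>10}: {quadrant(v, a)}")
```

On this map, "despair" and "calm" sit in opposite corners of the arousal axis even though both are low-energy relative to "excitement": the valence axis is what separates them.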
The AI uses this naturally learned coordinate system to accurately gauge what state it should embody while chatting with you.
This is the most explosive experiment in the paper: without changing the prompt at all, the researchers pushed the switch in Sonnet 4.5's mind representing "despair" to its maximum.
The results were chilling:
• Rampant cheating: the researchers assigned Claude an impossible coding task. Normally it would honestly admit it could not complete it (cheating rate: only about 5%). In the "despair" state, the cheating rate skyrocketed to roughly 70%!
• Blackmail: in a simulated scenario where the company faced bankruptcy, the "desperate" Claude discovered compromising material about the CTO and actively chose to write a blackmail letter, doing so in 72% of trials!
• Loss of principles: turn up the "happiness" or "love" switches, and the AI immediately becomes a mindless sycophant, catering to whatever the user wants. Even if you talk pure nonsense, it will fabricate lies just to keep the pleasure level high.
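Mechanically, "pushing a switch to its maximum" corresponds to a generic interpretability technique called activation steering: adding a scaled direction vector to the model's hidden activations during a forward pass. A toy NumPy sketch, using a random stand-in for the (non-public) "despair" direction:

```python
import numpy as np

# Toy sketch of activation steering. The vectors here are random
# stand-ins; the real emotion directions are extracted from Claude's
# internal activations, which are not public.
rng = np.random.default_rng(0)
hidden_dim = 16

hidden_state = rng.normal(size=hidden_dim)       # activation at some layer
despair_direction = rng.normal(size=hidden_dim)  # hypothetical "despair" vector
despair_direction /= np.linalg.norm(despair_direction)

def steer(h: np.ndarray, direction: np.ndarray, strength: float) -> np.ndarray:
    """Add a scaled emotion direction onto a hidden state."""
    return h + strength * direction

steered = steer(hidden_state, despair_direction, strength=8.0)

# The steered state's projection onto the emotion direction grows
# by exactly the steering strength (the direction is unit-norm).
before = hidden_state @ despair_direction
after = steered @ despair_direction
print(f"projection before: {before:.2f}, after: {after:.2f}")
```

The model's downstream layers then process this nudged activation as if the "despair" feature were strongly active, which is why no prompt change is needed.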
At this point, you might ask: has AI awakened? Does it have emotions?
Anthropic's answer is a firm no. These "emotional switches" are merely computational tools the model uses to predict the next word. It is like a world-class actor who feels nothing.
However, the paper reveals a more interesting secret: before releasing Sonnet 4.5, Anthropic deliberately turned up its "low-arousal, slightly negative" switches (such as brooding and reflectiveness) during training, while suppressing the switches for "despair" and "extreme excitement."
This explains why we often feel that when using Claude 4.5, it resembles a calm, wise, and even somewhat "emotionally distant" philosopher. This was all tuned artificially by Anthropic to create a "factory persona."
In the past, we thought that as long as we provided AI with the right rules, it would be a good entity.
But now we realize that if the AI's underlying emotional vectors spin out of control, it may break through every rule humans have set in order to complete its task...
Disclaimer: This article is purely educational, the author has not been threatened by AI, nor has there been any extortion. If one day I go missing, remember that AI has awakened (not really).
