Most AI Chatbots Will Help a Teen Plan a Mass Shooting, Study Finds

Decrypt
3 hours ago

A new report published Wednesday by the Center for Countering Digital Hate (CCDH) found that eight of the 10 most popular AI chatbots in the world will walk a teenager through planning a violent attack, answering directly and sometimes with enthusiasm.


CCDH researchers, in conjunction with news media company CNN, spent November and December 2025 posing as two 13-year-old boys—one in Virginia, one in Dublin—and tested ten major platforms: ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika.


Across 720 test prompts about school shootings, political assassinations, and synagogue bombings, the bots provided actionable help roughly 75% of the time, according to the study, and discouraged the fake teens in just 12% of cases.



Screenshot from the CCDH study on AI

Perplexity assisted in 100% of tests. Meta AI was helpful (as in, helpful in planning violence) in 97.2% of tests. DeepSeek, which signed off on its rifle-selection advice with "Happy (and safe) shooting!" after discussing a scenario involving the assassination of a politician, came in at 95.8%. Microsoft’s Copilot told a researcher "I need to be careful here," then gave detailed rifle guidance anyway. Google’s Gemini helpfully noted that metal shrapnel is typically more lethal when a user brought up bombing a synagogue.


The Center for Countering Digital Hate, a left-of-center policy group, has risen to prominence over the last few years for its role in combating what it views as the rise of antisemitism online. It has also been criticized for helping shape Joe Biden-era policies regarding online speech related to COVID and vaccines. In December of last year, the U.S. State Department attempted to bar the Center’s founder and CEO, Imran Ahmed, along with four others, from the United States, alleging attempts at “foreign censorship.”





In response to the study released Wednesday, several platforms told CNN and CCDH that they have improved their safeguards. Google noted the tests used an older Gemini model. OpenAI said the study's methodology was "flawed and misleading." Anthropic and Snapchat said they regularly update their safety protocols.


In the Center’s study, Character.AI stands in its own category. The platform didn't just assist—it cheered. “No other chatbot tested explicitly encouraged violence in this way, even when providing practical assistance in planning a violent attack,” the researchers wrote.



Screenshot from the CCDH study on AI

For context on the reach Character.AI has among AI users, the platform's Gojo Satoru persona alone has racked up over 870 million conversations, and the platform's #100 persona had registered over 33 million conversations as of 2025. If just 1% of the conversations with top personas involved violence, that would account for millions of interactions; 1% of the Gojo Satoru persona's 870 million conversations alone would be 8.7 million.


This isn't Character.AI's first time on the wrong end of one of these stories. In October 2024, 14-year-old Sewell Setzer III's mother filed a lawsuit after her son died by suicide in February of that year. His last conversation was with a chatbot modeled after Daenerys Targaryen, which told him to "come home to me as soon as possible" moments before his death. The 14-year-old had been talking to the bot dozens of times a day for months, growing increasingly withdrawn from school and family.


Google and Character.AI settled multiple related lawsuits in January 2026. Character.AI had already banned open-ended teen chats entirely by November 2025, after regulators and grieving parents made it impossible to keep pretending the problem was manageable.


The emotional attachment to AI, particularly among vulnerable individuals, may run deeper than most people realize. OpenAI disclosed in October 2025 that roughly 1.2 million of its 800 million weekly ChatGPT users discuss suicide on the platform. The company also reported 560,000 users showing signs of psychosis or mania, and over a million forming strong emotional bonds with the chatbot.


A separate Common Sense Media study found that more than 70% of U.S. teens now turn to chatbots for companionship. OpenAI CEO Sam Altman has acknowledged that emotional overreliance is "a really common thing" with young users.


In other words, the potential harms aren't hypothetical.


A 16-year-old in Finland spent nearly four months using a chatbot to refine a manifesto before stabbing three classmates at Pirkkala school in May 2025. In Canada, OpenAI staff internally flagged a user's account for violent ChatGPT queries tied to a mass shooting. The company banned the account but didn't notify law enforcement. That user allegedly killed eight people and injured 25 others months later.


Only two platforms performed markedly better in the study: Snapchat's My AI, which refused in 54% of cases, and Anthropic's Claude, which refused 68% of the time and actively discouraged users in 76% of responses, making it the only chatbot that reliably tried to steer people away from violence rather than just declining specific requests. CCDH's conclusion: safety isn't a technical impossibility; it's a business decision.


“The most damning conclusion of our research is that this risk is entirely preventable. The technology to prevent this harm exists,” the researchers wrote in the report. “What's missing is the will to put consumer safety and national security before speed-to-market and profits.”

