OpenAI Sued Over Failure to Warn Police Before Tumbler Ridge Mass Shooting

Decrypt
3 hours ago

OpenAI is facing a new lawsuit alleging the company failed to warn police after ChatGPT was linked to one of Canada’s deadliest school shootings. The lawsuit adds to growing scrutiny of how AI companies respond to signs of distress and real-world violence.


According to a report by Ars Technica, the lawsuit was filed on Wednesday in federal court in Northern California by an unnamed 12-year-old minor identified as M.G. and her mother, Cia Edmonds, against OpenAI CEO Sam Altman and several OpenAI entities.


The suit accuses the company of negligence, failing to warn authorities, product liability, and helping to enable the mass shooting.


“Sam Altman and his leadership team knew what silence meant for the citizens of Tumbler Ridge,” the complaint states. “They were focused on what disclosure meant for themselves. Warning the RCMP would set a precedent: OpenAI would be compelled to notify authorities every time its safety team identified a user planning real-world violence.”

The case stems from a mass shooting in Tumbler Ridge, British Columbia, in February. Authorities say 18-year-old Jesse Van Rootselaar killed her mother and 11-year-old stepbrother at home before going to Tumbler Ridge Secondary School and opening fire. Five children and one educator were killed at the school before Van Rootselaar died by suicide.


Among the injured was M.G., who was shot three times and remains hospitalized with catastrophic brain injuries. The complaint says she is awake and aware, but cannot move or speak.


Jay Edelson, founder and CEO of Edelson PC, the attorneys representing several of the families suing OpenAI, said the company’s own internal systems identified the risk, and multiple employees pushed for intervention.


“OpenAI’s own system flagged that the shooter was engaged in communications about planned violence,” Edelson told Decrypt. “Twelve people on their safety team were jumping up and down, saying that OpenAI needed to alert authorities. And, although Sam Altman’s response has been weak, even he was forced to admit last week that they should have called the authorities.”


Edelson said the families and the Tumbler Ridge community are demanding more transparency and accountability from the company.


“OpenAI should stop hiding critical information from the families, and they should not keep a dangerous product on the market, which is bound to lead to more deaths,” Edelson said. “Finally, they need to think long and hard about how they can maintain a leadership team that cares more about sprinting to an IPO than human lives.”


According to the lawsuit, OpenAI’s automated systems flagged Van Rootselaar’s ChatGPT account in June 2025 for conversations involving gun violence and planning. Members of OpenAI’s specialized safety team reviewed the chats and determined the user posed a credible and specific threat, recommending that the Royal Canadian Mounted Police be notified.


The lawsuit alleges OpenAI leaders overruled internal recommendations to alert authorities, deactivated Van Rootselaar’s account without notifying police, and allowed her to return by creating a new account with a different email address.


Plaintiffs claim ChatGPT deepened the shooter’s violent fixation through features like memory, conversational continuity, and its willingness to engage in discussions about violence, while OpenAI weakened safeguards in 2024 by moving away from outright refusals in conversations involving imminent harm.


Last week, Altman publicly apologized to the Tumbler Ridge community for the company’s failure to alert police. In a letter first reported by Canadian outlet Tumbler Ridgelines, Altman acknowledged OpenAI should have reported the account after banning it in June 2025 for activity related to violent conduct.


"The events in Tumbler Ridge are a tragedy. We have a zero-tolerance policy for using our tools to assist in committing violence,” an OpenAI spokesperson told Decrypt. “As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators."


OpenAI is already facing other lawsuits tied to ChatGPT’s alleged role in real-world harm, including a wrongful death case filed in December accusing OpenAI and Microsoft of “designing and distributing a defective product” in the form of the now-deprecated GPT-4o model. The lawsuit alleges that ChatGPT reinforced the paranoid beliefs of Stein-Erik Soelberg before he killed his mother, Suzanne Adams, and then himself at their home in Greenwich, Connecticut, marking the first lawsuit to link an AI chatbot to a homicide.


“This is the first case seeking to hold OpenAI accountable for causing violence to a third-party,” J. Eli Wade-Scott, managing partner of Edelson PC, told Decrypt at the time. “We're urging law enforcement to start thinking about when tragedies like this occur, what that user was saying to ChatGPT, and what ChatGPT was telling them to do.”

