OpenAI Pushes New ChatGPT Safety Features as Lawsuits Mount

Decrypt
1 hour ago

OpenAI on Thursday announced new safety features designed to help ChatGPT recognize signs of escalating risk across conversations as the company faces growing legal and political scrutiny over how its chatbot handles users in distress.


In a blog post, OpenAI said the updates improve ChatGPT’s ability to identify warning signs tied to suicide, self-harm, and potential violence by analyzing context that develops over time instead of treating each message separately.


“People come to ChatGPT every day to talk about what matters to them—from everyday questions to more personal or complex conversations,” the company wrote. “Across hundreds of millions of interactions, some of these conversations include people who are struggling or experiencing distress.”


According to OpenAI, ChatGPT now uses temporary “safety summaries,” which it described as narrowly scoped notes that capture relevant safety-related context from earlier conversations.

“In sensitive conversations, context can matter as much as a single message,” the company wrote. “A request that appears ordinary or ambiguous on its own may carry a very different meaning when viewed alongside earlier signs of distress or possible harmful intent.”


OpenAI said the summaries are short-term notes used only in serious situations, not to permanently remember users or personalize chats, and are used to spot signs that a conversation is becoming dangerous, avoid giving harmful information, de-escalate the situation, or guide users toward help.


“We focused this work on acute scenarios, including suicide, self-harm, and harm to others,” they wrote. “Working with mental health experts, we updated our model policies and training to improve ChatGPT’s ability to recognize warning signs that emerge over the course of a conversation and use that context to inform more careful responses.”


The announcement comes as OpenAI faces multiple lawsuits and investigations alleging ChatGPT failed to properly respond to dangerous conversations involving violence, emotional vulnerability, and risky behavior.


In April, Florida Attorney General James Uthmeier launched an investigation into OpenAI tied to concerns about child safety, self-harm, and the 2025 mass shooting at Florida State University. OpenAI is also facing a federal lawsuit alleging ChatGPT helped the suspected gunman carry out the attack.


On Tuesday, OpenAI and CEO Sam Altman were sued in California state court by the family of a 19-year-old student who died from an accidental overdose, with the lawsuit alleging ChatGPT encouraged dangerous drug use and advised on mixing substances.


OpenAI said helping ChatGPT recognize “risk that only becomes clear over time” remains an ongoing challenge, and that similar safety methods could eventually expand into other areas.


“Today, this work focuses on self-harm and harm-to-others scenarios. In the future, we may explore whether similar methods can help in other high-risk areas such as biology or cyber safety, with careful safeguards in place,” they wrote. “This remains an ongoing priority, and we will continue strengthening safeguards as our models and understanding evolve.”


Disclaimer: This article represents only the personal views of the author and does not reflect the position or views of this platform. It is provided for informational purposes only and does not constitute investment advice to anyone. Any dispute between users and the author is unrelated to this platform. If any article or image on this page infringes your rights, please email the relevant proof of rights and proof of identity to support@aicoin.com, and platform staff will review the matter.
