OpenAI Pushes Ahead With ChatGPT Erotica Mode Despite 'Sexy Suicide Coach' Warning: WSJ

Decrypt
3 hours ago

Sam Altman wants ChatGPT to talk dirty. His firm's advisers want him to stop, a report claims.


According to a Wall Street Journal report, OpenAI's Expert Council on Well-Being and AI made its stance clear in January: The company's plan to allow erotic conversations in ChatGPT was a bad idea. One council member, citing users who took their own lives after forming intense emotional bonds with the chatbot, reportedly warned that OpenAI risked creating a "sexy suicide coach."


But OpenAI apparently didn't flinch: it told the council it was delaying the launch, not stopping it.


The plan, which Altman first floated publicly in October on X, would let verified adults use ChatGPT for text-based erotic conversations—what the company's spokeswoman described to the WSJ as "smut rather than pornography." No erotic images, no voice, and no video, per the WSJ report. Just text.





That distinction hasn't calmed critics inside or outside the company. OpenAI has already drawn criticism, including from former staff such as safety researcher Jan Leike, for drifting away from strict safety policies in favor of "shiny products," some of them tuned to boost engagement among users who were replacing real-world relationships with the chatbot.


The technical problems are just as thorny. OpenAI's age-prediction system—the gatekeeper meant to keep minors from triggering adult chats—was at one point misclassifying teenagers as adults roughly 12% of the time, the WSJ reports. Right now, ChatGPT has around 900 million active users.
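To make that error rate concrete, here is a back-of-the-envelope sketch. Only the 12% misclassification rate and the roughly 900 million user figure come from the WSJ report; the share of users who are actually minors is a purely hypothetical assumption for illustration.

```python
# Illustrative arithmetic only. The 12% misclassification rate and the
# ~900M active-user figure are from the WSJ report; the teen share is a
# made-up assumption, not a reported number.
active_users = 900_000_000
teen_share = 0.10            # hypothetical: 10% of users are minors
misclassify_rate = 0.12      # teens wrongly classified as adults

teens = active_users * teen_share
teens_passed_as_adults = teens * misclassify_rate
print(f"{teens_passed_as_adults:,.0f} minors misclassified as adults")
```

Under those assumed numbers, a 12% false-adult rate would wave through minors by the millions, which helps explain why the launch keeps slipping.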



That 12% error rate was the number that killed the December launch, and the Q1 2026 one after it. Fidji Simo, OpenAI's CEO of applications, acknowledged the delay during a December briefing, citing ongoing work to perfect the age verification system.


At the time, Decrypt reported that over 3,000 users had already signed a Change.org petition demanding the launch of the feature, frustrated that ChatGPT was blocking even discussions of "kissing and non-sexual physical intimacy."


The council's fury in January wasn't only about the content. Altman's October X post had blindsided his own team—he published it just hours after OpenAI announced the well-being council, a body explicitly tasked with defining "what healthy interactions with AI should look like for all ages." The timing was, at minimum, a contradiction.


OpenAI assembled the eight-member Expert Council last October, pulling in researchers from Harvard, Stanford, and Oxford. Their role was to advise the company on the mental health impacts of its products. Their actual influence on company decisions, based on January's meeting, appears to have been minimal at best.


"This seems part of the usual pattern of move fast, break things, and try to fix some things after they get embarrassing," an AlgorithmWatch spokesperson told Decrypt when the council was announced.


The competitive pressure on OpenAI is real. Grok, from Elon Musk’s xAI, already markets AI companions. Character.AI built its user base on AI romance before facing lawsuits over teen safety—including the case of 14-year-old Sewell Setzer, who died by suicide after explicit chatbot exchanges. Open-source models run locally without any corporate guardrails. OpenAI has, by far, more liability exposure than anyone in the room given its user base.


Altman has framed the content ban as an overreach—"We aren't the elected moral police of the world," he wrote on X in October.


But his own advisers have made their position unambiguous, his engineers can't yet build an age filter that works, and the launch date keeps moving. Treating adults like adults, it turns out, is harder than just sending an X post.


Decrypt reached out to OpenAI for comment on the Wall Street Journal report's claims and will update this story if we receive a response.


