California Enacts First US Rules for AI 'Companion' Chatbots

Decrypt

California has become the first state to set explicit guardrails for “companion” chatbots, AI programs that mimic friendship or intimacy. Governor Gavin Newsom on Monday signed Senate Bill 243, which requires chatbots to identify themselves as artificial, restrict conversations with minors about sex and self-harm, and report detected suicidal ideation to the state’s Office of Suicide Prevention.


The law, authored by State Sen. Steve Padilla (D-San Diego), marks a new front in AI oversight—focusing less on model architecture or data bias and more on the emotional interface between humans and machines. It compels companies to issue regular reminders that users are talking to software, adopt protocols for responding to signs of self-harm, and maintain age-appropriate content filters.


The final bill is narrower than the one Padilla first introduced. Earlier versions called for third-party audits and applied to all users, not only minors; those provisions were dropped amid industry pressure.


Too weak to do any good?


Several advocacy groups said the final version of the bill was too weak to make a difference. Common Sense Media and the Tech Oversight Project both withdrew their support after lawmakers stripped out provisions for third-party audits and broader enforcement. In a statement to Tech Policy Press, one advocate said the revised bill risked becoming “an empty gesture rather than meaningful policy.”


Newsom defended the law as a necessary guardrail for emerging technology. “Emerging technology like chatbots and social media can inspire, educate and connect—but without real guardrails, technology can also exploit, mislead, and endanger our kids,” he said in a statement. “We can continue to lead in AI and technology, but we must do it responsibly—protecting our children every step of the way.”


SB 243 accompanies a broader suite of bills signed in recent weeks, including SB 53, which mandates that large AI developers publicly disclose their safety and risk-mitigation strategies. Together, they place California at the forefront of state-level AI governance.


But the new chatbot rules may prove tricky in practice. Developers warn that overly broad liability could prompt companies to restrict even legitimate conversations about mental health or sexuality out of caution, cutting off valuable support for users, especially isolated teens.


Enforcement, too, could be difficult: a global chatbot company may struggle to verify who qualifies as a California minor or to monitor millions of daily exchanges. And as with many California firsts, there’s the risk that well-intentioned regulation ends up exported nationwide before anyone knows if it actually works.

