Deepfake AI ‘Undress’ Porn Sites Sued in California Court

Decrypt
10 months ago

The city of San Francisco on Thursday filed a sweeping lawsuit against 18 websites and apps that generate unauthorized deepfake nudes of unsuspecting victims.


The complaint—published with the names of the accused services redacted—targets the “proliferation of websites and apps that offer to ‘undress’ or ‘nudify’ women and girls.” It asserts that, collectively, the sites were visited more than 200 million times in the first six months of 2024.


“This investigation has taken us to the darkest corners of the internet, and I am absolutely horrified for the women and girls who have had to endure this exploitation,” said San Francisco City Attorney David Chiu in announcing the lawsuit. “Generative AI has enormous promise, but as with all new technologies, there are unintended consequences and criminals seeking to exploit the new technology.



“This is not innovation—this is sexual abuse,” Chiu added.


Although celebrities like Taylor Swift have been frequent targets of such image generation, Chiu pointed to recent cases in the news involving California middle school students.


“These images, which are virtually indistinguishable from real photographs, are used to extort, bully, threaten, and humiliate women and girls,” the city announcement said.


The rapid spread of what is known as non-consensual intimate imagery, or NCII, has prompted efforts by governments and organizations worldwide to curtail the practice.


“Victims have little to no recourse, as they face significant obstacles to remove these images once they have been disseminated,” the complaint says. “They are left with profound psychological, emotional, economic, and reputational harms, and without control and autonomy over their bodies and images.”


Even more troubling, Chiu noted, is that some sites “allow users to create child pornography.”


The use of AI to generate child sexual abuse material, or CSAM, is especially pernicious because it severely hinders efforts to identify and protect real victims. The Internet Watch Foundation, which tracks the issue, said known pedophile groups are already embracing the technology and that AI-generated CSAM could “overwhelm” the internet.


A Louisiana state law specifically banning CSAM created with AI went into effect this month.


Although major tech companies have pledged to prioritize child safety as they develop AI, such images have already found their way into AI datasets, according to researchers at Stanford University.


The lawsuit calls for the services to pay $2,500 for each violation and cease operations, and it also demands that domain name registrars, web hosts, and payment processors stop facilitating the creation of deepfakes.

