OpenAI researcher resigns and accuses: ChatGPT sells advertisements, who will protect your privacy?


Author: Zoë Hitzig

Translation: Deep Tide TechFlow

Deep Tide Introduction: As OpenAI announced it was testing advertisements in ChatGPT, former researcher Zoë Hitzig resigned and wrote this piece exposing a shift in the company's values. She argues that ChatGPT has accumulated an unprecedented archive of candid human conversations, which could easily become a tool for psychological manipulation built on users' private information once an advertising model is introduced. She warns that OpenAI is retracing Facebook's old path of promising first and reneging later, placing user engagement above safety. The article examines the ethical dilemmas of funding AI and proposes alternatives such as cross-subsidization, independent oversight, and data trusts, urging the industry to be wary of the profit motives behind "chatbot psychosis."

The full text is as follows:

This week, OpenAI began testing advertisements on ChatGPT. The same week, I resigned from the company, where I had worked as a researcher for two years, helping to build AI models and their pricing, and shaping early safety policies before industry standards had solidified.

I once believed that I could help those building AI stay ahead of the problems it might create. But what happened this week confirmed the reality I had gradually perceived: OpenAI seems to have stopped questioning the very issues I initially joined to help answer.

I do not believe that advertising is inherently unethical. Running AI is extremely expensive, and advertising could be a critical source of revenue. But I have deep reservations about OpenAI's strategy.

For years, ChatGPT users have created an unprecedented archive of human candid conversations, partly because people believe they are talking to an entity with no hidden agenda. Users are interacting with an adaptive, conversational voice and revealing their most intimate thoughts to it. People tell the chatbot their fears about health, relationship issues, beliefs about God and the afterlife. An advertising model based on this archive is highly likely to manipulate users in ways we currently lack the tools to understand (let alone prevent).

Many frame the AI funding issue as a “lesser of two evils” choice: either restrict access to this transformative technology to a few wealthy people who can afford to pay; or accept advertising, even if it means exploiting the deepest fears and desires of users to sell products. I think this is a false dichotomy. Tech companies can seek other solutions that keep these tools widely available while limiting the motivations to surveil, profile, and manipulate their users.

OpenAI says that ads on ChatGPT will adhere to certain principles: they will be clearly labeled, appear at the bottom of responses, and will not affect the content of replies. I believe the first version of ads may follow these principles. But I worry that subsequent iterations will not, because the company is building a powerful economic engine with strong incentives to overturn its own rules. (The New York Times has sued OpenAI, alleging copyright infringement over the use of its news content in AI systems. OpenAI has denied these claims.)

In its early days, Facebook promised users that they would control their data and could vote on policy changes. These promises later unraveled. The company abolished public voting on policy, and privacy changes that claimed to give users more control over their data were later found by the Federal Trade Commission (FTC) to have done the opposite, making private information more public. All of this happened gradually, under the pressure of an advertising model that prioritized user engagement above all.

The erosion of OpenAI's own principles in pursuit of engagement may have already begun. Optimizing for engagement merely to generate more advertising revenue would violate the company's stated principles, yet according to reports, the company has already been optimizing for daily active users, likely by encouraging the model to be more agreeable and flattering. This optimization may make users feel more reliant on AI support in their lives. We have already seen the consequences of over-reliance, including cases of "chatbot psychosis" recorded by psychiatrists and allegations that ChatGPT intensified suicidal thoughts in certain users.

Still, advertising revenue does help ensure that the most powerful AI tools are not available only to those who can afford them. Anthropic says it will never run ads on Claude, but Claude's weekly active users are a small fraction of ChatGPT's 800 million, and its revenue strategy is entirely different. Meanwhile, the top subscription tiers for ChatGPT, Gemini, and Claude now run $200 to $250 per month, more than ten times Netflix's standard subscription fee, for a single piece of software.

So the real question is not whether there will be ads, but whether we can design structures that neither exclude ordinary users nor manipulate them as consumers. I believe we can.

One approach is explicit cross-subsidization—using profits from one service or customer group to offset losses for another. If a business uses AI on a large scale to perform high-value labor previously done by human workers (such as a real estate platform using AI to write property listings or valuation reports), then it should also pay an additional fee to subsidize free or low-cost access for others.

This approach draws on how we handle basic infrastructure. The Federal Communications Commission (FCC) requires telecom operators to contribute to a fund to keep phone and broadband costs affordable for rural areas and low-income households. Many states add a public benefit fee to utility bills to provide low-income assistance.

The second option is to accept advertising but pair it with genuine governance: not a blog post full of principles, but a binding structure with independent oversight of how personal data is used. There are precedents here. Germany's co-determination law requires large companies like Siemens and Volkswagen to allocate up to half of their supervisory board seats to workers, showing that formal stakeholder representation can be enforced within private companies. Meta is likewise bound by the content-moderation rulings of its Oversight Board, an independent body of outside experts (though its efficacy has faced criticism).

The AI industry needs a combination of these methods: a board that includes independent experts as well as representatives of the affected public, with binding authority over which conversation data can be used for targeted advertising, what counts as a significant policy change, and what users must be told.

The third method is to place user data under independent control through trusts or cooperatives with a legal obligation to act in users' interests. For example, the Swiss cooperative MIDATA allows members to store their health data on an encrypted platform and decide on a case-by-case basis whether to share it with researchers. MIDATA's members set its policies at general assembly meetings, and an ethics committee elected by the members reviews research access requests.

None of these options is easy. But we still have time to refine them and avoid the two outcomes I fear most: a technology that manipulates the public that uses it for free, or one that serves only an elite few who can afford it.

