The controversy ignited around Feb. 28, when OpenAI confirmed an agreement with the U.S. Department of Defense to deploy advanced AI systems, including ChatGPT technology, on classified networks.
The company framed the deal as lawful and tightly controlled, but critics saw something else entirely: a consumer-facing AI platform stepping deeper into military operations at a moment when public scrutiny of artificial intelligence is already running hot.
OpenAI said the agreement includes explicit guardrails, including bans on mass domestic surveillance of U.S. persons, autonomous weapons control, and high-stakes automated decision-making systems.
It also stressed technical constraints, including cloud-only deployments and retained control over safety systems, alongside compliance with U.S. legal frameworks such as the Fourth Amendment and Department of Defense rules governing human oversight of lethal force.
Still, the optics were not exactly subtle.
Within hours of the announcement, a grassroots boycott campaign under the banner #QuitGPT began circulating across social media, urging users to cancel subscriptions, delete the app, and migrate to competitors. The backlash translated into measurable shifts in app behavior.

Screenshot from the website Quitgpt.org
According to app analytics data, U.S. ChatGPT uninstall rates jumped 295% day over day on Feb. 28, while downloads slipped 13% the next day and another 5% after that.
User sentiment took an even sharper turn in app reviews, where one-star ratings spiked 775% in a single day and continued climbing, while five-star reviews dropped by roughly half. Competitors benefited from the moment.
Anthropic’s Claude app recorded download increases between 37% and 51% during the same period, briefly overtaking ChatGPT in U.S. App Store rankings as users explored alternatives. Organizers of the boycott claimed millions of actions tied to the campaign, including cancellations and pledges, though exact figures vary depending on the source and how participation is defined.
OpenAI moved quickly to contain the fallout. Chief Executive Officer Sam Altman acknowledged shortcomings in how the deal was communicated, calling the rollout “opportunistic and sloppy,” and within days the company revised the agreement language.
The updated terms explicitly prohibited intentional domestic surveillance using AI systems and added stricter requirements for any intelligence agency involvement, including separate contractual layers. The company also announced plans to coordinate with other AI developers on shared safety frameworks, positioning the changes as a tightening rather than a retreat.
While the backlash cooled somewhat after the revisions, the episode left a mark, highlighting how quickly consumer sentiment can shift when AI crosses into sensitive territory. At the same time, OpenAI was making less visible but strategically significant moves behind the scenes.
In early March, the company reorganized its computing and infrastructure operations, splitting responsibilities into three focused groups covering data center design, commercial partnerships, and on-the-ground facility management. The restructuring reflects a broader shift in how OpenAI plans to scale its computing power.
Rather than aggressively building and owning massive data centers tied to its ambitious “Stargate” initiative, the company is leaning more heavily on leasing and partnerships with cloud providers. Microsoft’s Azure remains central to that strategy, while OpenAI has also expanded relationships with Oracle and Amazon Web Services as part of multiyear capacity agreements.
Earlier plans involving large-scale, jointly owned infrastructure projects have been scaled back or reworked, as the financial and logistical realities of building AI supercomputing capacity at scale become harder to ignore. Instead, OpenAI is focusing on controlling key elements such as custom hardware and chips, while outsourcing the physical infrastructure layer to established hyperscalers.
The two developments — one public and contentious, the other operational and pragmatic — are not directly linked, but together they sketch a company moving quickly on multiple fronts, sometimes faster than its messaging can keep up.
For OpenAI, the challenge now is less about whether it can build powerful systems and more about how it manages the consequences of deploying them in places where the stakes are anything but theoretical.
- Why did users boycott ChatGPT in the U.S.?
  Users reacted to OpenAI's agreement to deploy AI on classified military networks, raising ethical concerns about surveillance and defense use.
- Did ChatGPT usage decline after the controversy?
  Yes, uninstall rates spiked and downloads fell temporarily, while negative app reviews surged sharply.
- What changes did OpenAI make to its Pentagon deal?
  The company added explicit bans on domestic surveillance and stricter rules for intelligence agency involvement.
- Why is OpenAI shifting to cloud infrastructure partners?
  Rising costs and scale challenges are pushing the company to lease computing power instead of building massive data centers.