Author: Geek Friend
On March 7, 2026, when I saw the news of Caitlin Kalinowski's resignation, my first reaction was not shock but: "Finally, someone is speaking through action."
Kalinowski was the head of hardware and robotics engineering at OpenAI. She joined only in November 2024, and less than a year and a half later she chose to leave.
Her reasons were direct and heavy: she found it unacceptable for OpenAI to sign contracts with the U.S. Department of Defense that could lead to domestic surveillance and applications of autonomous weapons.
This is not an ordinary loss of talent. This is someone who personally helped build AI's bodies telling the world, through her resignation, that she is unwilling to take responsibility for what her creations might do.
To understand Kalinowski's departure, we must go back to what happened about a week earlier.
On February 28, Sam Altman announced that OpenAI had reached an agreement with the U.S. Department of Defense, allowing the Pentagon to use OpenAI's AI models on its classified networks. Once the news broke, public opinion exploded.
Interestingly, the reference point for this contract was OpenAI's competitor, Anthropic.
Shortly before, Anthropic had refused a similar collaboration proposed by the Pentagon, insisting on stricter ethical safeguards in the contract. In response, Defense Secretary Pete Hegseth publicly criticized Anthropic on X, calling its stance "a master class in arrogance and betrayal," in line with the Trump administration's order to cease cooperation with Anthropic.
OpenAI subsequently took over this deal.
User reactions were quite intense. On February 28, the number of ChatGPT uninstalls surged by 295% compared to the previous day, and the #QuitGPT movement rapidly swept social media, with supporters of the digital boycott exceeding 2.5 million in three days. Claude took advantage of this trend to surpass ChatGPT, becoming the most downloaded app in the U.S. and reaching the top of the Apple App Store's free app list.
Under pressure, Altman publicly admitted on March 3 that he "should not have rushed to sign this contract," stating that "it just looked opportunistic and hasty," and announced a revision of the contract wording to clearly state that "AI systems should not be intentionally used for domestic surveillance of U.S. personnel and citizens."
However, the word "intentionally" is itself a loophole. Lawyers from the Electronic Frontier Foundation pointed out that intelligence and law enforcement agencies often rely on "incidental" or "commercially purchased" data to circumvent stronger privacy protections; adding the word "intentionally" does not amount to a real restriction.
Kalinowski's resignation occurred against this backdrop.
01 What She Saw Was More Specific Than We Imagine
While most people were still debating whether "OpenAI is compromising with the government," Kalinowski was facing a far more specific and brutal question: her team was building robots.
Hardware and robotics engineering is not an abstract job of writing code and tuning parameters. It is about giving AI hands, feet, and eyes. When OpenAI's collaboration with the Department of Defense extended from "model usage" toward possible future "embodied AI military applications," the nature of Kalinowski's work changed.
Researchers in the field of autonomous weapons had long been warning of this day’s arrival.
Current U.S. Department of Defense policies do not require that autonomous weapons obtain human approval before using force. In other words, the contract signed by OpenAI technically does not prevent its models from becoming part of a system that "allows GPT to decide to kill someone".
This is not alarmism. Jessica Tillipman, who lectures on government procurement law at Georgetown University, analyzed OpenAI's revised contract and pointed out that the wording "does not give OpenAI a similar freedom to prohibit legitimate government use as Anthropic"; it merely states that the Pentagon cannot use OpenAI technology in violation of "existing laws and policies," and existing laws have significant gaps when it comes to regulating autonomous weapons.
Governance experts at Oxford University reached a similar judgment, arguing that OpenAI's agreement "is unlikely to address" the structural governance gaps left open by AI-driven domestic surveillance and autonomous weapons systems.
Kalinowski's departure is her personal response to this judgment.
02 What Is Happening Inside OpenAI
Kalinowski is not the first person to leave, and she will likely not be the last.
Data indicates that the resignation rate of OpenAI's ethics and AI safety teams has reached as high as 37%, with most citing reasons such as "inconsistent with company values" or "unable to accept AI for military use." Research scientist Aidan McLaughlin wrote in an internal post, "Personally, I believe this deal is not worth it."
The timing of this wave of resignations is worth noting: it coincides with OpenAI's rapid commercial expansion. Around the time of the defense contract controversy, the company announced an extension of its existing $38 billion contract with AWS to $100 billion over eight years; at the same time, it revised its publicly disclosed spending goals and now expects total revenue to exceed $280 billion by 2030.
Commercial acceleration on one side, a steady stream of departures from the safety team on the other: this disconnect is the most important axis along which to understand OpenAI's current situation.
A company’s values ultimately manifest in who it retains and who it does not. When those most concerned with "how this technology will be used" begin to leave in succession, it is not difficult to infer the direction in which the remaining organizational structure will slide.
Anthropic chose a different path in this game: it rejected the contract, endured the Pentagon's wrath, and gained the trust of a large number of users. During that period, Claude's downloads rose against the broader trend, proving that a principled refusal is not necessarily a losing business strategy.
But Anthropic also paid a price: it has been shut out by the government, at least for now.
This is the real dilemma: no choice is perfect.
Refusal means potentially losing influence and being excluded from rule-making. Acceptance means lending one's own technology to actions one cannot fully control.
Kalinowski's answer is a third way: to leave.
This is the most honest thing she can do.
03 The Battle for the Soul of Silicon Valley Has Just Begun
If we zoom out a bit, the significance of this incident goes far beyond one person's resignation.
The combination of AI and the military is a question the entire industry will eventually have to face. The Pentagon has budgets, needs, and the capacity for technical integration; it will not stop extending olive branches to AI companies. And AI companies, whether OpenAI with its pursuit of AGI, Anthropic with its emphasis on safety, or other players, will inevitably have to give their own answers to this question someday.
Altman's strategy is to accept commercial reality while trying to delineate a bottom line through contractual wording. But as several legal and governance experts have pointed out, that wording reads more like public relations cover than a hard technical constraint.
The more fundamental issue is that once AI models are deployed on classified networks and begin to participate in military decisions, the outside world has no way to verify whether those "guarantees" are actually being enforced.
The absence of transparency is, in itself, the greatest risk.
Kalinowski spent less than a year and a half at OpenAI, yet she chose to leave at this juncture. She did not issue a lengthy public statement or single out anyone for criticism; she simply drew her boundary through action.
In a sense, this is more powerful than any policy article.
AI hardware and robotics engineering was once one of the most exciting frontiers in Silicon Valley. When Kalinowski left, she took with her not just a résumé but a question, one she leaves for everyone still in this industry:
How far are you willing to take responsibility for what you have created?