qinbafrank | Feb 28, 2026 03:20
Anthropic has completely fallen out with the Pentagon, and OpenAI is trying to fill the gap. Let me walk through the logic from a personal perspective. Early this morning, Trump issued an order directing all US federal agencies to immediately stop using Anthropic's products, giving the Department of Defense and other heavy-use departments a six-month transition period to phase them out. Immediately afterwards, the Pentagon announced it would designate Anthropic a national security supply chain risk, effectively placing this American company on a blacklist of the kind normally reserved for high-risk foreign suppliers: any contractor, supplier, or partner doing business with the US military will be immediately prohibited from conducting any commercial activity with Anthropic.
1. Core cause
Anthropic had signed a contract worth approximately $200 million with the Pentagon, and Claude (especially the Claude Gov version) is currently the only frontier large model approved to run on classified networks, used primarily for intelligence analysis, operational planning, and similar tasks.
But Anthropic insists on writing two red lines into the contract:
1) Prohibition of large-scale surveillance of US citizens
2) Prohibition on use in fully autonomous weapon systems (i.e., kill chains with no human involvement in the final firing decision)
The Pentagon asked to replace these clauses with "any lawful use," which would effectively remove both red lines. Anthropic CEO Dario Amodei publicly stated that he "could not agree in good conscience" and would rather lose the contract than compromise. Then, one hour before the deadline, Trump preempted the standoff by announcing a complete halt to the cooperation.
2. The underlying logic of this conflict
From a personal perspective, this is not merely a disagreement between a commercial company and a government agency over contract terms; it is a white-hot manifestation of the ethical conflict over AI militarization. At its core, a private AI company is attempting to preserve an ethical floor through contractual terms (such as human oversight and privacy protection), while the government insists that "all lawful uses" take precedence over any moral constraint.
The conflict shows national security demands being prioritized over private ethical clauses, which could set a precedent and lead to the systematic removal of ethical restrictions from future AI contracts, such as the "human in the loop" requirement for lethal autonomous weapon systems. The result would be more "unrestricted" military applications of AI, raising the risk of misjudgment (for example, autonomous weapons going out of control because of unreliable models).
3. How will Anthropic proceed, and what is the deeper impact?
Most likely, Anthropic will sue the Trump administration and the Pentagon, arguing that the order violates the Defense Production Act.
Anthropic has said it will sue. If it loses, the ruling will cement the government's strong control over AI suppliers;
if it wins, the victory may strengthen the company's bargaining power and push the US Congress to enact AI military regulations (such as amendments to the Defense Production Act).
In the long run, this will affect AI alignment research: if military models are forced to be "de-moralized," that may amplify systemic risks such as unpredictable model behavior or abuse.
The conflict is essentially a collision between "technological autonomy" and "national sovereignty," and it may reshape AI governance philosophy over the long run. If ethics gives way to expediency, AI becomes a catalyst for "instrumentalized" warfare, increasing global instability (for example, AI-assisted cyber warfare or intelligence abuse).
On the contrary, if more companies follow Anthropic's example, it may give rise to an AI ecosystem that prioritizes ethics, similar to the international consensus on nuclear non-proliferation.
4. Is this just a conflict over AI military ethics? Actually, no.
According to Fortune this morning, Sam Altman briefed employees on the state of negotiations at Friday's all-hands meeting; the contract has not yet been signed. The Pentagon has accepted the security red lines proposed by OpenAI: no use in autonomous weapons, no large-scale domestic surveillance, no use in making critical decisions. These conditions are almost identical to the ones Anthropic insisted on. Yet the Pentagon lists Anthropic as a national security supply chain risk, while the same conditions from OpenAI would be gladly accepted. Why?
Reason 1: Different negotiation attitudes and flexibility
Anthropic insisted on writing the red lines into the contract as mandatory legal clauses and rejected the "all lawful uses" fallback language, believing it would open the door to future abuse. Anthropic framed this as "irreconcilable with conscience" and issued what amounted to an ultimatum, leading to the breakdown of negotiations.
OpenAI holds the same red lines, but negotiated more pragmatically and flexibly: it allowed the government to write the red lines into the contract as explicit exclusions, while insisting that OpenAI retain control over its own "security stack" (technical safeguards, policy constraints, and human oversight layers) and that the government cannot forcibly modify or bypass the model when it refuses a task. OpenAI also agreed to limit deployment to cloud environments and stay out of edge systems (such as drones, aircraft, and other hardware that could be directly used in autonomous weapons).
OpenAI's overall posture is "hold the bottom line within the framework of cooperation," rather than "the bottom line above all cooperation."
Result: the Department of Defense has made significant concessions to OpenAI's proposal, with Axios reporting that "the Pentagon has agreed to OpenAI's security rules for deployment in classified environments." Although the contract has not yet been formally signed, negotiations have entered the stage where "a potential agreement is emerging."
Reason 2, and the deeper one: politics and relationships
OpenAI and its executives (including Altman and Brockman) have donated tens of millions of dollars to Trump-aligned political action committees, making them major donors to the Republican side.
Dario Amodei, Anthropic's founder, is a Democratic donor who has publicly criticized Trump. In Trump's eyes, he represents the radical left, so Trump's order explicitly framed the move as fighting radical-left companies and defending military authority.
If OpenAI ultimately signs, it will reinforce a precedent that cooperative companies are rewarded: ethical red lines can exist, but they must be presented in a form the government can accept, while inflexible companies risk being marginalized. At the same time, the political stances displayed by a company and its founders have become a very important consideration.
5. What will happen next?
Watch how much Anthropic compromises during the six-month transition period, as well as the progress of its legal challenge. Of course, the litigation could drag on for a long time, and a resolution within six months is unlikely.
If Anthropic can learn from OpenAI's compromise and show more flexibility, it could still return to the defense system. After all, Claude Gov is currently the only frontier model officially cleared to operate on Secret-level classified networks, and suddenly unplugging it would create a capability gap. For the Department of Defense to adopt other AI models, those models would also need to be adapted to classified environments, which takes time.
This article is sponsored by @bitget_zh: "Buy US stocks on Bitget: instant entry, smooth trading."