
The Anthropic Feud: Can the Pentagon Blacklist Ethics?

  • Writer: Chloé Bissonnette
  • Mar 20
  • 4 min read

Last year, the United States Department of Defence (DoD) announced plans to integrate artificial intelligence into its military operations and called for AI companies to apply to work in collaboration with the Pentagon. Google Gemini, xAI, OpenAI, and Anthropic were amongst the strongest competitors, with Anthropic coming out on top due to its seamless integration with the U.S. military’s existing systems.


This month, the world saw advanced artificial intelligence (AI) used by the United States military in Iran. Amid a worldwide arms race to incorporate AI into national defence, the feud between Anthropic and the Pentagon has served as a reminder of the serious ethical and legal concerns that arise from incorporating AI into military arsenals.


© AFP via Getty Images

The Feud


Anthropic was founded by Dario Amodei and Daniela Amodei, two former OpenAI employees who left over ethical concerns and diverging business models. It has since branded itself as a security, accountability, and ethics-oriented AI company. On February 14, 2026, news emerged that Anthropic’s AI model, Claude, was used by the United States military in the capture of Venezuelan President Nicolás Maduro on January 3, 2026. In response to the news, Anthropic was quick to question how its model was used in the intervention, to ensure the operation abided by its foundational principles. The DoD saw this line of questioning as an overreach by Anthropic’s employees, further exacerbating tensions over the contents of Anthropic’s pending contract with the Pentagon.


Following the capture of Maduro, the company had significant ethical concerns about the contents of its agreement with the United States government. In several public statements, Anthropic clarified that it wanted its contract with the Pentagon to state that its technology could not be used for two purposes: (1) the creation or use of fully autonomous weapons and (2) the mass surveillance of Americans. For Anthropic, AI is not yet reliable enough to be used in fully autonomous weapons; the risks of putting the American military or civilians in peril are too high. The statement reads: “Without proper oversight, fully autonomous weapons cannot be relied upon to exercise the critical judgment that our highly trained, professional troops exhibit every day. They need to be deployed with proper guardrails.” As for the mass surveillance of Americans, the company explained that this use does not align with its fundamental democratic principles.


In response to the request, the U.S. government felt the company was stepping out of line by trying to dictate how AI should be incorporated into military operations, arguing that such decisions should be left to the discretion of military leadership. On February 24, 2026, Defence Secretary Pete Hegseth gave Amodei until the end of the week to agree to the contract as written, threatening to label Anthropic a Supply Chain Risk to National Security or to invoke the Defence Production Act. The Supply Chain Risk label, previously used against Huawei, forces all government contractors and agencies to cut ties with the company and can have significant repercussions for its business. Invoking the Defence Production Act, on the other hand, would identify Anthropic as necessary to national security, thereby forcing it to comply with the United States government’s expectations.


On Friday, February 27, tensions rose as lawyers struggled to reach an agreement before the 5:01 P.M. deadline. Efforts were abandoned at 3:47 P.M., when President Trump posted on Truth Social:

Truth Social | @realDonaldTrump

A few hours later, Hegseth announced on X that Anthropic would be labelled a Supply Chain Risk to National Security, a label historically designed to protect a country from foreign adversaries, not from domestic opposition.


The Response


News quickly emerged that OpenAI had taken over Anthropic’s contract. In a statement, OpenAI vowed to respect three red lines: no use of its technology in mass domestic surveillance, no use for directing autonomous weapons, and no use in automated high-stakes decision-making. Only the prohibition on mass domestic surveillance, however, is written into the agreement itself; rather than including the prohibitions on autonomous weapons in the contract, OpenAI vowed to write them into its safety stack: code deployed via the cloud and maintained by cleared OpenAI staff. Though OpenAI firmly believes that its contract is more protective, Anthropic employees have argued that writing guardrails into the stack is insufficient, as the code can be changed quickly, making the guardrails flexible and temporary.


On March 9, Anthropic announced it would take legal action against the Pentagon over the use of the “Supply Chain Risk to National Security” label, claiming that the label violates the company’s right to due process and its First Amendment right to free speech. In 2019, Huawei, a Chinese telecom manufacturer labelled a Supply Chain Risk over spying concerns, attempted a similar legal action against the U.S. government. Huawei argued that the consequences to its reputation, competitiveness, and business in the United States were significant, and that the label was applied without due process or a thorough investigation. Its lawsuit was ultimately unsuccessful.


The public Anthropic-Pentagon feud and the response from major Silicon Valley companies have generated significant attention. Google, Amazon, Apple, Microsoft, and other tech-forward companies have publicly supported Anthropic in its lawsuit against the U.S. government. OpenAI has also stated that it does not believe Anthropic should be labelled a Supply Chain Risk, and several of its employees have expressed concerns over the company’s new agreement with the Pentagon. This backlash is believed to have pushed OpenAI CEO Sam Altman to reconsider some of the terms of the company’s agreement with the U.S. government.


The rise of artificial intelligence has prompted international lawyers worldwide to lobby for the creation of a new treaty prohibiting lethal autonomous weapons systems (LAWS). They argue that protecting the Geneva Conventions, the principle of distinction, and the principle of proportionality requires guardrails on the use of AI for military purposes. With the legally fragile guardrails of private companies and national governments, and without a treaty on the usage of LAWS, accountability for war crimes becomes virtually impossible to assign. Thus, several questions arise: What will the relationship between AI and weapons look like? How can international humanitarian law be adapted for today’s reality? Will OpenAI respect its commitment to including firm guardrails within its stacks? And would Anthropic have agreed to its technology being used for lethal autonomous weapons if AI were more reliable?
