The U.S. Department of Defense is considering severing its relationship with artificial intelligence company Anthropic amid a dispute over how the military may use its AI systems, Axios reported, citing a senior U.S. administration official.
The Pentagon has been pushing four major AI developers (Anthropic, OpenAI, Google and xAI) to allow their tools to be used for “all lawful purposes,” including weapons development, intelligence gathering and battlefield operations. Anthropic has resisted agreeing to such broad terms, prompting frustration within U.S. defence circles after months of negotiations.
An Anthropic spokesperson said the company has not discussed specific military operations with the Pentagon, and that talks have focused on usage policies, including hard limits on fully autonomous weapons and mass domestic surveillance. Those restrictions are key to the company’s ethical approach to AI deployment, which it says must include safeguards.
The Pentagon has not publicly commented on the report and could not immediately be reached for comment.
The dispute comes amid reports that Anthropic’s AI model, Claude, was used by the U.S. military in a recent operation to capture former Venezuelan President Nicolás Maduro, facilitated through a partnership with analytics firm Palantir Technologies. The Wall Street Journal first reported the use, though Anthropic declined to confirm details of specific missions.
Anthropic and the Pentagon are also negotiating over deploying AI systems on classified military networks, where companies would be asked to loosen their typical usage restrictions. While other firms have shown varying degrees of flexibility, Anthropic remains one of the most resistant, according to sources familiar with the matter.
The evolving standoff reflects wider tensions between AI developers focused on ethical safeguards and U.S. defence priorities that seek maximum operational flexibility for cutting-edge technology.
