In sum – what to know:
- Hegseth warning: U.S. Secretary of War Pete Hegseth has issued an ultimatum to Anthropic CEO Dario Amodei to allow the military unfettered access to Claude.
- Amodei response: Amodei has insisted that Claude cannot be used to develop weapons that fire without human involvement or to assist in mass domestic surveillance of Americans.
- Impending deadline: Amodei has until 5:01 PM tomorrow to capitulate, or risk blacklisting and possible use of the Defense Production Act against his company.
Anthropic CEO Dario Amodei is currently locked in a high-stakes standoff with the Pentagon. The Dept. of War under Pete Hegseth has threatened to blacklist Anthropic and/or invoke the Defense Production Act to compel Amodei to do what he does not want to do: allow Claude to be used to develop weapons that fire without human involvement or to assist in mass domestic surveillance of Americans.
Friction emerged after it was reported that Claude was used in the U.S. military operation to capture Venezuelan President Nicolás Maduro, during which 83 people were purportedly killed. Anthropic’s usage policy (as of September 2025) said Claude could not be used for surveillance, the development of weapons, or inciting violence. Hegseth labels AI systems with ideological restrictions as “woke AI.” As of two days ago, Anthropic’s usage policy appears to have been updated to remove some of that wording.
Interestingly, Alan Rozenshtain’s Lawfare article yesterday seemed to address the Defense Production Act threat. One expert who spoke off the record said, “The Defense Production Act cannot compel development of a product that a firm does not make – in this case, a model with different weights that the DOD wants. But compelling changes in contract terms is another matter.”
Indeed, this conflict draws into view the conundrum facing AI leaders with government contracts: if discretion over “lawful uses” of their AI technologies rests solely with the government, that government may or may not be fully attuned to the emergent nature of the technology. In this case, Amodei stands to lose a $200 million contract, and a lot more, if he does not capitulate, as his rivals OpenAI, Google, and xAI, all of which hold government contracts, reportedly have.
The debate about whether AI leaders should allow the U.S. to use their AI models for “all lawful” purposes is complex, since what is or is not “lawful” is itself contested, making this moment a critical showdown between AI safety ethics and national security imperatives.
The Trump administration has said it does not want stringent AI regulations to stifle innovation and make it harder for the American AI industry to compete. As Trump’s Secretary of War, Hegseth has declared, “We will not employ AI models that won’t allow you to fight wars…We will judge AI models on this standard alone; factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”
Hegseth issued an ultimatum that by 5:01 PM tomorrow, Amodei must give the U.S. military open access to Anthropic’s AI model. According to Axios interviews with defense officials before the Hegseth–Amodei meeting, Anthropic said it was willing to adapt its usage policies for the Pentagon, but not to allow its model to be used for the mass surveillance of Americans or the development of weapons that fire without human involvement.
Around the same time, Amodei appeared on a podcast with Nikhil Kamath, conveying both optimism and concern about the current state of our understanding of AI models. He pointed to the promise of “interpretability,” research that traces correspondences between neurons in a neural network and specific concepts, noting that “we are starting to understand what the models do.” The operative word is “starting”: because AI models are trained in an emergent way, it will take time to truly understand their neural circuits and the ways in which they are “thinking” so that we can ensure the models behave the way they are supposed to.
Amodei expressed concern about the level of public awareness, or rather, unawareness, about the fact that “we are so close to the point of these models reaching human intelligence…and yet, there doesn’t seem to be a wider recognition in society about what’s to happen.” He referred to it as a “tsunami that is coming at us…on the horizon…and yet people are saying ‘it’s not actually a tsunami; it’s a trick of the light.’”
But recent data from Pew Research Center, Gallup-Bentley University, and several public sector surveys show that people are more concerned than not. They want AI for the public good, but they are not convinced that’s what they are getting. In the Gallup-Bentley poll, only 2% of U.S. adults “fully” trust AI’s capability to make fair and unbiased decisions, while 29% trust it “somewhat.” Six in 10 Americans distrust AI somewhat (40%) or fully (20%), although trust rises notably among AI users (46% trust it somewhat or fully).
That is a stark contrast to the government perspective, which Amodei says is “pushing to accelerate AI as fast as possible,” without a commensurate realization of the risks or action to mitigate them. In sum, Amodei said, “the technical work around controlling AI systems has gone maybe a little better than I expected, but societal awareness has gone a little worse than I expected.” But that societal awareness must be fed by the AI companies and/or the government entities tasked with protecting the public. Will this standoff push AI leaders to avoid the mistakes the tobacco and opioid industries made when it came to transparency regarding risks and possible societal harm?
Amodei wrote “Machines of Loving Grace,” which talks of a cybernetic future in which technology can free humanity from labor, but he wants government and society to apply critical thinking, which he refers to as “our last real edge” as the concentration of power in AI continues to grow.
Where is the Pentagon-Anthropic standoff now?
U.S. Secretary of War Pete Hegseth has demanded that by the 5:01 PM deadline, Amodei remove the ethical guardrails Hegseth considers to be “woke AI.” The Pentagon’s stance is that the U.S. military, not Anthropic, should decide what constitutes “lawful orders,” so that Anthropic’s tools are used “legally” under the military’s definition.
Amodei, however, has said that “the constitutional protections in our military structures depend on the idea that there are humans who would disobey illegal orders,” and that fully autonomous weapons such as drones would not be able to make such a distinction. In a lengthy essay last month, he warned of the potential for abuse of the technologies, writing that “a powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.” He has also said, “Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them.”
The friction began after news broke that Anthropic’s Claude model, through the company’s ties to Palantir, was used during last month’s military operation in Venezuela, during which 83 people were purportedly killed, according to Venezuela’s Defense Ministry. How many were soldiers and how many were civilians is not publicly known, but Amodei has resurfaced Anthropic’s mandate for “safety and transparency” in how its technology is used.
This stance seems to differ from that of OpenAI, Google, and xAI, which have reportedly agreed in principle to let their models be used for any “lawful purpose” determined by the military. At the time of publishing, RCRTech had reached out to each for comment and will report any responses that come in.
Whether this high-profile conflict between Anthropic and the U.S. Dept. of War will lead to more conversations about “sensible AI regulation” remains to be seen. In the meantime, Amodei yesterday announced a shift in the company’s “Responsible Scaling Policy,” seemingly moving away from hard safety commitments toward “nonbinding but publicly declared” goals.