What’s behind the Anthropic-Pentagon feud


Washington — The Pentagon gave Anthropic an ultimatum this week: Give the U.S. military unrestricted use of its AI technology or face a ban from all government contracts. 

At the center of the issue is a question of who controls how artificial intelligence models are used: the Pentagon or the company’s CEO.

The Pentagon’s AI contracts 

The Pentagon awarded Anthropic a $200 million contract in July to develop AI capabilities that would advance U.S. national security. 

Anthropic’s rivals, including OpenAI, Google and xAI, were also awarded $200 million contracts by the Pentagon last year. 

Anthropic is currently the only AI company to have its model deployed on the Pentagon’s classified networks, through a partnership with data analytics giant Palantir.

A senior Pentagon official told CBS News that Grok, which is owned by Elon Musk’s xAI, is on board with being used in a classified setting, and other AI companies are close. 

The Pentagon announced last month that it’s looking to accelerate its uses of AI, saying the technology could help the military “rapidly convert intelligence data” and “make our Warfighters more lethal and efficient.”

Clash over the guardrails 

The standoff between the Pentagon and Anthropic was reportedly set off by the U.S. military’s use of its technology, known as Claude, during the operation to capture former Venezuela President Nicolás Maduro in January. 

Anthropic has repeatedly asked the Pentagon to agree to certain guardrails, among them a restriction on using Claude to conduct mass surveillance of Americans, sources told CBS News. 

The company also wants to ensure the Pentagon does not use Claude to make final targeting decisions in military operations without any human involvement, one source familiar with the matter said. Claude is not immune from hallucinations and, without human judgment, is not reliable enough to avoid potentially lethal mistakes, such as unintended escalation or mission failure, the source said.

When asked for comment, a senior Pentagon official said: “This has nothing to do with mass surveillance and autonomous weapons being used. The Pentagon has only given out lawful orders.”

Pentagon officials have expressed concerns to Anthropic that the company’s guardrails could stand in the way of critical actions, such as responding to an intercontinental ballistic missile launched toward the United States.

Any company-imposed restrictions “could create a dynamic where we start using them and get used to how those models work, and when it comes that we need to use it in an urgent situation, we’re prevented from using it,” Emil Michael, the undersecretary of defense for research, said at an event in February.

Asked who is liable when AI used to strike or kill military targets makes a mistake, the military or the AI company, a defense official said legality is the Pentagon’s responsibility as the end user.

What top leaders are saying  

Anthropic CEO Dario Amodei has been vocal in expressing his concerns about the potential dangers of AI and has centered the company’s brand around safety and transparency. 

In a lengthy essay last month, Amodei warned of the potential for abuse of the technologies, writing that “a powerful AI looking across billions of conversations from millions of people could gauge public sentiment, detect pockets of disloyalty forming, and stamp them out before they grow.” 

“Democracies normally have safeguards that prevent their military and intelligence apparatus from being turned inwards against their own population, but because AI tools require so few people to operate, there is potential for them to circumvent these safeguards and the norms that support them. It is also worth noting that some of these safeguards are already gradually eroding in some democracies,” he wrote. 

Amodei has long backed what he describes as “sensible AI regulation,” including rules that would require AI companies to be transparent about the risks posed by their models and any steps taken to mitigate them.

The Trump administration, meanwhile, has favored a lighter touch, and has argued that stringent AI regulations could stifle innovation and make it harder for the American AI industry to compete. The administration has sought to block what it calls “excessive” state-level regulations. At one point last year, venture capitalist and White House AI and crypto adviser David Sacks accused Anthropic of “fear-mongering” and suggested its interest in AI regulations is self-serving.

In a January speech, Defense Secretary Pete Hegseth derided what he views as “social justice infusions that constrain and confuse our employment of this technology.” 

“We will not employ AI models that won’t allow you to fight wars,” Hegseth declared. “We will judge AI models on this standard alone: factually accurate, mission relevant, without ideological constraints that limit lawful military applications. Department of War AI will not be woke. It will work for us. We’re building war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.” 

What’s next in the Anthropic v. Pentagon saga

Hegseth gave Anthropic until Friday to agree to give the U.S. military unrestricted use of its technology or risk being blacklisted, sources familiar with the situation told CBS News. 

Pentagon officials are considering invoking the Defense Production Act to compel Anthropic to comply on national security grounds.

Or, if an agreement can’t be reached, defense officials have discussed declaring the company a “supply chain risk” to push it out of government, according to the sources. 
