A high-stakes clash between Silicon Valley and Washington is unfolding over who controls powerful artificial intelligence systems.
At the center are Anthropic CEO Dario Amodei and U.S. Defense Secretary Pete Hegseth — and the outcome could reshape how AI is used in war and surveillance.
Who controls military AI?
Over the past two weeks, tensions have escalated between Anthropic and the Pentagon over how the military can deploy advanced AI models.
Anthropic has refused to allow its models to be used for mass domestic surveillance of Americans or for fully autonomous weapons that conduct strikes without human input. Hegseth, meanwhile, argues that the Defense Department should not be constrained by a private company’s internal policies and should be free to use the technology for any “lawful use.”
At its heart, the conflict raises a fundamental question: Should AI developers retain control over how their systems are used, or does the government get the final say when national security is involved?
What is Anthropic worried about?
Anthropic says its concern is not abstract. The company argues that AI technology carries unique risks and requires unique safeguards — especially when applied to lethal systems or surveillance tools.
Autonomous weapons
The U.S. military already uses highly automated systems, some of them lethal. While decisions to use deadly force have traditionally involved human oversight, the Pentagon does not categorically ban fully autonomous weapons.
Under a 2023 Defense Department directive, AI systems can select and engage targets without human intervention — provided they meet certain standards and are approved by senior defense officials.
That possibility makes Anthropic uneasy. Military programs are often classified, meaning the public might not know if lethal decision-making becomes fully automated until after deployment. If such use falls within legal parameters, it could still qualify as “lawful use.”
Anthropic’s position is not that these applications should be permanently prohibited. Instead, the company says its current models are not yet capable enough to support such high-stakes roles safely.
A malfunctioning or less-capable AI could misidentify targets, escalate conflicts unintentionally, or make split-second lethal decisions without meaningful human oversight.
Mass surveillance
AI also has the potential to expand lawful surveillance in unprecedented ways.
While U.S. law already permits certain forms of surveillance — including collection of texts and emails under specific authorities — AI can dramatically scale those efforts. Automated pattern detection, predictive risk scoring, and continuous behavioral analysis could supercharge domestic monitoring.
Anthropic argues that its models should not be used in ways that enable large-scale surveillance of American citizens.
What does the Pentagon want?
The Pentagon’s position is straightforward: if a use is lawful, it should not be blocked by a vendor.
Sean Parnell, the department’s chief spokesperson, said the Defense Department has no interest in conducting mass domestic surveillance or deploying autonomous weapons. However, he insisted the Pentagon must retain operational control.
“We will not let any company dictate the terms regarding how we make operational decisions,” Parnell said in a public post, giving Anthropic until 5:01 p.m. ET on Friday to agree. Otherwise, the Pentagon would terminate its partnership and designate the company a “supply chain risk.”
Hegseth has also framed the issue in cultural terms. In a January speech at SpaceX and xAI offices, he criticized what he described as “woke AI,” saying the Department of War would build “war-ready weapons and systems, not chatbots for an Ivy League faculty lounge.”
Trump and the supply-chain risk label
The dispute escalated further when President Donald Trump directed federal agencies to cease use of Anthropic products, allowing a six-month phase-out period.
Shortly afterward, Hegseth formally designated Anthropic a supply-chain risk to national security. Effective immediately, no contractor, supplier, or partner doing business with the U.S. military may conduct commercial activity with the company.
The label effectively blacklists Anthropic from government work.
Industry reaction
Dario Amodei has publicly refused to back down, saying the company still wants to work with the Pentagon.
“Our strong preference is to continue to serve the Department and our warfighters — with our two requested safeguards in place,” he said.
OpenAI has backed Anthropic’s stance. CEO Sam Altman reportedly told staff that OpenAI shares similar “red lines,” rejecting domestic surveillance and autonomous offensive weapons in defense contracts.
Ilya Sutskever also voiced support, calling it significant that competitors were aligning on core principles despite past disputes.
Meanwhile, xAI — owned by Elon Musk — is reportedly preparing to become classified-ready and could step in as a replacement provider.
What happens next?
This standoff carries serious consequences.
A supply-chain risk designation could severely damage Anthropic’s business. Venture investor Sachin Seth described it as potentially “lights out” for the company.
At the same time, removing Anthropic from Defense Department contracts could create a capability gap. According to Seth, it could take six to twelve months for competitors to match Anthropic’s model performance, leaving the Pentagon relying on a second- or third-best option.
The broader implications go beyond one contract.
This fight sets a precedent for how AI companies and governments negotiate control over emerging technologies. As AI systems grow more powerful — and more embedded in defense, intelligence, and surveillance infrastructure — similar clashes are likely to intensify.
For now, the outcome will signal whether AI developers can enforce ethical guardrails on their creations — or whether national security demands override corporate red lines.