Defense Secretary Demands Full Military Access to Anthropic AI
WASHINGTON, D.C. – Defense Secretary Pete Hegseth has issued an ultimatum to Anthropic, a leading artificial intelligence company, demanding that the firm grant the U.S. military unrestricted access to its AI technology or risk losing a lucrative government contract. The demand followed a high-stakes meeting between Hegseth and Anthropic's CEO, Dario Amodei, according to sources familiar with the discussion.
Anthropic is renowned for its chatbot, Claude, and stands as the only major AI company not yet providing its technology to a new secure internal network for U.S. military applications. The company’s leadership has long voiced ethical reservations about the military’s use of artificial intelligence, particularly in sensitive areas such as autonomous weapon systems and mass surveillance capabilities.
Ethical Tensions and Pentagon Threats
According to a source who attended the meeting and spoke on condition of anonymity, Hegseth gave Amodei until the end of the week to agree to the Pentagon’s terms. Should Anthropic refuse, the Pentagon could not only terminate the contract but also designate the company as a supply chain risk or invoke the Defense Production Act, which would grant the military broader authority over the use of Anthropic’s technology.
Despite the mounting pressure, Amodei remained resolute on two critical boundaries: Anthropic would not support fully autonomous military targeting operations or domestic surveillance of U.S. citizens. These red lines reflect the broader debate over how to balance national security interests against ethical concerns in the use of AI for warfare and surveillance.
Amodei recently articulated his concerns in a published essay, emphasizing the risks of powerful AI systems being leveraged to monitor public sentiment, detect dissent, and suppress opposition before it can spread. His stance underscores the broader anxiety that AI could be weaponized for oppressive purposes if left unchecked.
Anthropic’s Unique Position in Military AI
The Pentagon has awarded defense contracts worth up to $200 million each to four AI companies: Anthropic, Google, OpenAI, and Elon Musk’s xAI. Of these, Anthropic was the first to receive approval to work on classified military networks, collaborating with partners such as Palantir; the other three currently operate only in unclassified environments.
Notably, by early 2026, Hegseth had begun to publicly champion only xAI and Google, expressing skepticism toward AI models that he believed imposed “ideological constraints.” In a speech in January, Hegseth declared, “AI will not be woke,” signaling his administration’s intent to remove what he sees as unnecessary limitations on the military’s use of advanced technology.
Musk’s Grok chatbot, despite facing global scrutiny for generating inappropriate deepfake images, was recently announced as the latest addition to the Pentagon’s GenAI.mil network. OpenAI’s ChatGPT is also being integrated for unclassified military tasks.
Anthropic’s Commitment to Responsible AI
Since its founding in 2021 by former OpenAI staff, Anthropic has positioned itself as a safety-first AI company, frequently aligning with efforts for greater oversight and third-party review of AI systems. This approach has sometimes put Anthropic at odds with both the Trump and Biden administrations, particularly around issues of AI export controls and regulatory policy.
Owen Daniels, of Georgetown University’s Center for Security and Emerging Technology, noted that, “Anthropic’s peers, including Meta, Google, and xAI, have agreed to let the Department of Defense use their AI models for all lawful purposes. This limits Anthropic’s bargaining power and risks reducing its influence within the Pentagon’s rapid adoption of AI.”
Amodei has consistently warned of the catastrophic potential of advanced AI, cautioning that the risks are growing as technology progresses. While he rejects the “doomer” label, he advocates for managing these threats in a pragmatic, realistic manner—insisting on strict boundaries for military use.
Political and Regulatory Backdrop
Anthropic’s cautious stance has clashed with Trump administration policies, particularly around chip exports to China and efforts to regulate AI at the state level. Trump’s chief AI adviser, David Sacks, has accused the company of stoking fear to influence regulation, while Anthropic has sought to present itself as bipartisan, recently adding Chris Liddell, a former Trump White House official, to its board.
Amos Toh, senior counsel at NYU’s Brennan Center for Justice, highlighted the need for congressional oversight amid the Pentagon’s rapid embrace of AI. “The law is not keeping up with how quickly the technology is evolving,” Toh commented, warning that the Department of Defense does not have a “blank check” to deploy AI without adequate safeguards, especially in areas like domestic surveillance.
Looking Ahead
The outcome of this standoff remains uncertain. As the Pentagon accelerates its integration of AI into military operations, the debate over ethics, oversight, and national security is intensifying. Whether Anthropic will hold its ground or bow to government pressure could set a precedent for how AI companies navigate the complex intersection of innovation, responsibility, and defense.
