Pentagon’s AI Standoff: Secret Email Exposed!

A dramatic new court filing has intensified the legal battle between artificial intelligence company Anthropic and the U.S. Department of Defense, revealing internal communications that appear to contradict the Pentagon’s public stance. Sworn declarations submitted to a California federal court late Friday directly challenge the DoD’s assertion that Anthropic poses an "unacceptable risk to national security," arguing that the government’s case rests on technical misunderstandings and on claims never raised during months of negotiations. The filing comes just a week after former President Trump publicly declared the relationship severed, yet a newly unearthed email suggests the two parties were "very close" to alignment on the critical issues.

These sworn statements accompanied Anthropic’s reply brief in its lawsuit against the Department of Defense, filed ahead of a crucial hearing scheduled for this Tuesday, March 24, before Judge Rita Lin in San Francisco. The dispute originated in late February when President Trump and Defense Secretary Pete Hegseth publicly announced they were cutting ties with Anthropic, citing the company’s refusal to permit unrestricted military use of its advanced AI technology.


One of the key affidavits was provided by Sarah Heck, Anthropic’s Head of Policy and a former National Security Council official. Heck, who personally attended a pivotal February 24 meeting with Defense Secretary Hegseth and Under Secretary Emil Michael, directly refutes what she describes as a "central falsehood" in the government’s filings: the claim that Anthropic demanded an approval role over military operations. "At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," her declaration asserts.


Heck further contends that the Pentagon’s concern about Anthropic potentially disabling or altering its technology mid-operation was absent from all prior negotiations, surfacing for the first time in the government’s subsequent court filings. This, she argues, denied Anthropic any opportunity to address the issue beforehand.

Perhaps the most startling revelation in Heck’s declaration is an email dated March 4. Sent by Under Secretary Michael to Anthropic CEO Dario Amodei just one day after the Pentagon formally finalized its supply-chain risk designation against the company, the email stated that the two sides were "very close" on the very issues now cited as national security threats: autonomous weapons and mass surveillance of Americans. This directly contradicts Michael’s public statements in the days that followed, when he first claimed on X that "there is no active Department of War negotiation with Anthropic" and then, a week later, told CNBC there was "no chance" of renewed talks. Heck’s testimony implicitly questions the government’s sincerity, suggesting a significant disconnect between its internal communications and its public posturing around the designation.

Complementing Heck’s policy insights, Thiyagu Ramasamy, Anthropic’s Head of Public Sector, offers a robust technical rebuttal. Ramasamy, who previously managed AI deployments for government clients at Amazon Web Services, including classified environments, addresses the Pentagon’s concern that Anthropic could interfere with military operations. He asserts that once Anthropic’s Claude models are deployed within a government-secured, "air-gapped" system managed by a third-party contractor, Anthropic loses all access. There is no remote "kill switch," no backdoor, and no mechanism for unauthorized updates. Any operational changes, he explains, would necessitate the Pentagon’s explicit approval and manual installation, rendering the notion of an "operational veto" technically impossible. Furthermore, Ramasamy clarifies that Anthropic cannot monitor user input within these secure government systems, let alone extract data.

Ramasamy also challenges the government’s claim that Anthropic’s hiring of foreign nationals constitutes a security risk. He highlights that Anthropic employees undergo stringent U.S. government security clearance vetting – the same rigorous process required for access to classified information. To his knowledge, he states, Anthropic stands as the sole AI company where cleared personnel have actually developed AI models specifically designed for classified environments.

Anthropic’s lawsuit frames the supply-chain risk designation – an unprecedented move against an American company – as government retaliation for its publicly stated positions on AI safety, a potential violation of the First Amendment. The government, in its own 40-page filing earlier this week, firmly rejected this interpretation, maintaining that Anthropic’s refusal to permit all lawful military uses of its technology was a business decision, not protected speech. They argue the designation was a legitimate national security assessment, not a punitive measure for the company’s views.

As the two sides prepare for the upcoming hearing, the newly revealed internal communications and detailed technical explanations from Anthropic’s executives are poised to add significant complexity to a dispute that has far-reaching implications for the future of AI development and its integration with national defense. The outcome of this legal battle could redefine the boundaries of corporate autonomy, national security, and the role of advanced technology in governmental operations.

