Anthropic, a prominent artificial intelligence developer, has submitted two sworn declarations to a California federal court, forcefully disputing the Pentagon’s assertion that the company poses an "unacceptable risk to national security." Filed late Friday afternoon, the declarations contend that the government’s case rests on fundamental technical misunderstandings and on allegations that were conspicuously absent during months of preceding negotiations. The filings, which accompany Anthropic’s reply brief in its lawsuit against the Department of Defense, set the stage for a critical hearing scheduled for this coming Tuesday, March 24, before Judge Rita Lin in San Francisco.

The Genesis of a High-Stakes Dispute

The highly publicized conflict between the AI developer and the U.S. defense establishment erupted in late February, when President Trump and Defense Secretary Pete Hegseth publicly announced the termination of the government’s relationship with Anthropic. The stated reason was the company’s refusal to grant unrestricted military use of its advanced AI technology, a stance rooted in its publicly articulated principles regarding responsible AI deployment. The decision immediately drew attention because of Anthropic’s previous, significant engagement with the Pentagon, including a $200 million contract announced just last summer aimed at advancing responsible AI in defense operations through its Claude models. The abrupt dissolution of this partnership, and the subsequent "supply-chain risk designation" applied to Anthropic—reportedly the first ever levied against an American company—have ignited a crucial debate over corporate ethics, national security imperatives, and the future of AI development.

Anthropic’s Counter-Arguments: Voices from Within

The two sworn declarations were submitted by key Anthropic personnel: Sarah Heck, Head of Policy, and Thiyagu Ramasamy, Head of Public Sector. Their testimonies aim to dismantle the Pentagon’s narrative by providing insider accounts and technical clarifications.

Sarah Heck’s Declaration: Exposing Contradictions and Unraised Issues

Sarah Heck, who served as a National Security Council official in the White House during the Obama administration and later worked at Stripe before joining Anthropic, plays a pivotal role in managing the company’s government relationships and policy work. Her declaration directly addresses what she labels a "central falsehood" in the government’s court filings: the claim that Anthropic sought an approval role over military operations. Heck emphatically denies this, stating, "At no time during Anthropic’s negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role." This assertion challenges the very premise of one of the Pentagon’s core concerns, suggesting a misrepresentation of Anthropic’s negotiating position.

Furthermore, Heck’s declaration highlights another critical point: the Pentagon’s concern regarding Anthropic’s potential ability to disable or alter its technology mid-operation. According to Heck, this specific apprehension was never raised during the extensive negotiation period. Instead, it surfaced for the first time in the government’s formal court filings, effectively denying Anthropic the opportunity to address or clarify this technical point during discussions. This omission, Heck implies, points to either a strategic withholding of concerns or a lack of understanding that only emerged later.

Perhaps one of the most compelling pieces of evidence presented in Heck’s declaration is an email from Under Secretary Emil Michael to Anthropic CEO Dario Amodei, dated March 4. This date is notably just one day after the Pentagon formally finalized its supply-chain risk designation against Anthropic. In the email, Michael indicated that the two sides were "very close" on the very issues the government now cites as foundational to Anthropic being a national security threat: its positions on autonomous weapons and mass surveillance of Americans.

This email, attached as an exhibit to Heck’s declaration, creates a stark contrast with Michael’s subsequent public statements. On March 5, Amodei had published a statement affirming "productive conversations" with the Pentagon. The very next day, Michael posted on X (formerly Twitter) that "there is no active Department of War negotiation with Anthropic," and a week later, he told CNBC there was "no chance" of renewed talks. Heck’s testimony implicitly questions the Pentagon’s consistency: if Anthropic’s stance on autonomous weapons and mass surveillance was indeed the critical factor making it a national security threat, why was a senior Pentagon official suggesting they were "very close" to alignment on these issues immediately after the designation was finalized? While Heck refrains from explicitly accusing the government of using the designation as a bargaining chip, the timeline she presents strongly invites that interpretation, raising questions about the true motivations behind the Pentagon’s actions.

Thiyagu Ramasamy’s Declaration: Technical Rebuttals to Security Claims

Thiyagu Ramasamy brings a crucial technical perspective to Anthropic’s defense. Before joining Anthropic in 2025, he accumulated six years of experience at Amazon Web Services, where he managed AI deployments for government customers, including those in classified environments. At Anthropic, he is credited with building the team responsible for integrating the company’s Claude models into national security and defense settings, including the aforementioned $200 million contract.

Ramasamy’s declaration directly challenges the government’s claim that Anthropic could theoretically interfere with military operations by disabling or altering its technology. He asserts that such interference is simply not technically feasible. According to Ramasamy, once Claude is deployed within a government-secured, "air-gapped" system operated by a third-party contractor, Anthropic loses all access. He explains that there is "no remote kill switch, no backdoor, and no mechanism to push unauthorized updates." Any notion of an "operational veto" by Anthropic, he suggests, is a "fiction," emphasizing that any modification to the model would necessitate the Pentagon’s explicit approval and active installation. This technical explanation directly counters the national security risk argument by demonstrating the operational independence of the deployed AI.

Furthermore, Ramasamy clarifies that Anthropic cannot even monitor or access the data government users input into the system, let alone extract it. This addresses potential concerns about data exfiltration or unauthorized access to sensitive military information.

Ramasamy also disputes the government’s claim that Anthropic’s hiring of foreign nationals inherently poses a security risk. He highlights that all Anthropic employees involved in such deployments have undergone U.S. government security clearance vetting—the same rigorous background check process required for access to classified information. He adds that, "to my knowledge," Anthropic stands as the sole AI company where personnel with such clearances have actively built the AI models specifically designed to operate in classified environments. This detail underscores Anthropic’s commitment to security protocols and directly refutes the blanket assertion of foreign national hires as a security vulnerability.

Legal Battlegrounds: First Amendment vs. National Security

Anthropic’s lawsuit posits that the supply-chain risk designation, an unprecedented move against an American company, constitutes government retaliation for its publicly stated views on AI safety. The company argues this action violates its First Amendment rights to free speech. This legal argument introduces a novel and potentially far-reaching precedent for the relationship between the government and private technology firms, particularly those operating in sensitive sectors.

The government, in a comprehensive 40-page filing earlier this week, vehemently rejected Anthropic’s framing. It argued that Anthropic’s refusal to permit all lawful military uses of its technology was a business decision, not a form of protected speech. The Department of Defense maintained that the designation was a straightforward national security determination, devoid of any punitive intent related to the company’s ethical viewpoints. This fundamental disagreement over the nature of Anthropic’s "red lines" — whether they constitute a business choice or an exercise of free speech — forms the crux of the impending legal showdown.

Broader Implications: AI Ethics, Government Procurement, and Free Speech

This high-profile dispute transcends the immediate parties, casting a long shadow over the rapidly evolving landscape of artificial intelligence, national security, and corporate responsibility.

The Evolving Role of AI in Defense: The U.S. Department of Defense has increasingly emphasized the strategic importance of AI for maintaining a technological edge. Investments in AI research and development for military applications are soaring, with global spending on AI in defense projected to reach tens of billions of dollars annually in the coming years. This push often brings the military into direct collaboration with cutting-edge private sector companies like Anthropic, which possess the specialized expertise and computational resources to develop advanced AI systems. However, these collaborations frequently encounter friction when companies adopt strong ethical stances, particularly concerning the use of AI in autonomous weapons systems or mass surveillance, areas where ethical guidelines remain fluid and hotly debated.

Ethical AI and the Dual-Use Dilemma: Anthropic’s "red lines" are not unique; they reflect a growing movement within the tech industry to establish ethical frameworks for AI development, especially for "dual-use" technologies that have both civilian and military applications. Precedents exist, such as Google’s Project Maven, where employee backlash over the company’s involvement in a Pentagon AI project led Google to decline renewing the contract and to publish its own ethical AI principles. This case highlights the persistent tension between the immense potential of AI to enhance national security capabilities and the moral obligations felt by AI developers to ensure their creations are used responsibly and ethically. The question of who defines "responsible use," and how those definitions are enforced, becomes paramount.

Government Procurement and Tech Collaboration Challenges: The dispute underscores the inherent challenges in fostering collaboration between a hierarchical government entity focused on national security and agile private tech companies often driven by innovation and, increasingly, by corporate values. The government seeks unfettered access and control over technologies deemed critical for defense, while many tech companies, particularly those founded on strong ethical principles like Anthropic, seek to embed safeguards and retain a degree of oversight over how their technology is deployed, especially in sensitive contexts. This case could establish a precedent for how future contracts are negotiated, potentially leading to more explicit clauses regarding ethical use, or conversely, making tech companies more cautious about engaging with defense agencies.

First Amendment Precedent: The legal argument regarding the First Amendment is particularly significant. If the court finds that applying a supply-chain risk designation due to a company’s publicly stated ethical positions constitutes an infringement on free speech, it could profoundly alter the dynamics between the government and the private sector. It would set a powerful precedent for companies to articulate their ethical boundaries without fear of direct governmental reprisal in the form of business restrictions. Conversely, if the court sides with the government, it could be seen as empowering the state to exert greater control over the operational choices of companies deemed critical to national security, potentially chilling free expression of corporate values in sensitive sectors. Legal experts suggest that the "business decision" versus "protected speech" distinction will be heavily scrutinized, as it delves into the intent behind Anthropic’s actions and the government’s response.

The Road Ahead: A Pivotal Hearing

The upcoming hearing before Judge Rita Lin is expected to be a pivotal moment in this unfolding legal drama. The court will likely weigh the technical arguments presented by Anthropic against the national security concerns articulated by the Pentagon. The judge’s decision will not only determine the immediate fate of Anthropic’s lawsuit but could also significantly influence the future landscape of AI development, government contracting, and the delicate balance between corporate ethics and national security imperatives in the United States. The outcome will be closely watched by the tech industry, defense strategists, and civil liberties advocates alike, as it promises to shape how America harnesses the power of artificial intelligence in the decades to come.
