Robyn N. Burrows and Merle M. DeLancey, Jr.


On February 27, 2026, President Trump posted on Truth Social directing all federal agencies to “immediately cease” use of Anthropic’s artificial intelligence (“AI”) technology. Simultaneously, Defense Secretary Pete Hegseth announced on X that he was designating the company a “supply chain risk to national security” and prohibiting federal contractors from doing any business with Anthropic. This unprecedented action against a domestic company has significant supply chain implications for government contractors. Below, we summarize what led to this development, the legal authorities pertaining to supply chain bans, and practical guidance for contractors navigating this evolving situation.
1. Background: From Contract Dispute to Presidential Directive
The conflict between Anthropic and the federal government emerged from a contract dispute over the company’s AI usage restrictions. Anthropic, which holds a $200 million Pentagon contract and was the first frontier AI company to deploy its models on classified government networks, maintained two “red lines” in its contract negotiations: it refused to allow its AI model, Claude, to be used for mass domestic surveillance of Americans or in fully autonomous weapons systems.
The Pentagon demanded that Anthropic agree to “all lawful use” of its technology without Anthropic’s proposed restrictions. Anthropic’s refusal led President Trump and Secretary Hegseth to announce their decisions against Anthropic on social media. Secretary Hegseth stated that Anthropic would be “immediately” designated a supply chain risk, prohibiting any federal contractor working with the military from “any commercial activity with Anthropic.”
Anthropic has announced it will challenge the supply chain risk designation in court, calling it “legally unsound.”
2. Potential Avenues of Implementation
The administration has yet to identify the specific legal mechanism it is relying upon to support its supply chain risk designation, making the scope and application of the Anthropic ban unclear. Below, we discuss several potential legal authorities pertaining to supply chain bans and how each might apply here:
10 U.S.C. § 3252: A supply chain risk designation is grounded in 10 U.S.C. § 3252 and implemented through DFARS Subpart 239.73, which together allow the heads of covered agencies (including the Secretary of Defense) to exclude companies from a “covered procurement” involving “covered systems” when they pose a “supply chain risk.” However, this authority was designed primarily for foreign adversaries threatening the defense supply chain. The statute defines “supply chain risk” as the risk that “an adversary” may “sabotage, maliciously introduce unwanted function, or otherwise subvert” covered systems. There is a serious question whether the dispute between the administration and Anthropic regarding contractual restrictions on the use of its AI falls within this definition. Further, Section 3252 is limited to Department of Defense (“DOD”) procurements for national security systems; it does not authorize a government-wide ban or otherwise grant authority to restrict federal contractors’ commercial relationships with Anthropic.
Federal Acquisition Supply Chain Security Act: Another potential avenue for implementing a supply chain ban is through the Federal Acquisition Supply Chain Security Act (“FASCSA”). The Federal Acquisition Security Council (“FASC”), an interagency body, must first recommend an exclusion or removal order, which is then published on SAM.gov. Such an order would prohibit contractors (via FAR 52.204-30) from providing or using Anthropic products and services in the performance of federal contracts. However, FASCSA orders do not extend to a contractor’s purely internal or commercial operations unrelated to federal contract performance and therefore could not be used to bar federal contractors from commercial dealings with Anthropic. To date, there is no indication the FASC has recommended such an order against Anthropic; the FASC has issued only one publicly reported order, involving a cybersecurity firm with alleged Russian ties.
Legislation: The administration may potentially seek legislation to ban Anthropic, following the model Congress has used for other technology restrictions. For example, Section 889 of the FY 2019 National Defense Authorization Act (“NDAA”) prohibited federal agencies and contractors from using telecommunications equipment from five named Chinese manufacturers—and notably, the Section 889 “use” prohibition extends to all of a contractor’s business operations, including internal functions and purely commercial work. More recently, the FY 2025 NDAA prohibits DOD from contracting with any company that has a contract with a lobbyist who engages in lobbying activities for Chinese military companies on the 1260H List, and the FY 2026 NDAA prohibits DOD and its contractors from using “covered AI” from certain Chinese companies. The government may seek to employ a similar legislative approach that imposes a broader ban affecting federal contractors’ commercial relationships with Anthropic.
Executive Order: The President may potentially seek to ban Anthropic via executive order, likely relying on the International Emergency Economic Powers Act or asserted inherent national security authority. Such an order could be subject to legal challenge. Multiple presidents have used executive orders in procurement contexts, including orders restricting diversity, equity, and inclusion (“DEI”) practices and imposing new requirements on defense contractors, though several have faced litigation.
3. Practical Guidance for Contractors
Given the rapidly evolving and legally uncertain nature of this situation, federal contractors should consider the following steps:
Monitor official channels for formal implementation. To date, announcements regarding an Anthropic ban have been made via social media rather than through formal regulatory or contractual mechanisms. Supply chain risk designations typically require risk assessments and other formalized procedures before taking effect. Until those required processes are completed, the precise scope and timing of any supply chain ban remain unclear.
Identify current usage of Anthropic. In advance of any formal prohibition, contractors should inventory where Anthropic products (including Claude AI) are being used across their organizations—both in connection with federal contract work and for internal or commercial purposes. Understanding this usage now will help contractors assess the scope of any potential impact and prepare contingency plans if a ban takes effect.
Prepare contingency plans, but avoid premature action. A ban against Anthropic would not apply to most contractors unless and until it is formally included in their contracts via modification. Although contractors may wish to proactively identify potential alternative AI providers and develop transition plans, they should not rush to terminate existing arrangements before any Anthropic ban is formalized.
Track any legal challenge by Anthropic. To the extent Anthropic challenges a supply chain risk designation, that litigation may clarify or limit the government’s authority.
The situation remains fluid, and we will continue to monitor developments. For now, contractors should stay informed and prepared, but avoid taking precipitous action based on social media announcements alone.
