Dario Amodei, co-founder and chief executive officer of Anthropic, at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.
Prakash Singh | Bloomberg | Getty Images
Following the Trump administration’s decision on Friday to blacklist Anthropic and designate its technology a supply chain risk, defense tech companies are telling employees to stop using Claude, and to switch to other artificial intelligence models and assistants.
“Most of our companies are actively involved in large defense contracts and so are very strict in their interpretation of the requirements,” said Alexander Harstrick, managing partner at J2 Ventures, which backs startups in the space.
Harstrick told CNBC in an email that 10 of his firm’s portfolio companies that work with the Department of Defense “have backed off of their use of Claude for defense use cases and are in active processes to replace the service with another one.”
Meanwhile, defense contractors like Lockheed Martin are expected to remove Anthropic’s technology from their supply chains, Reuters reported late Tuesday.
It’s a sudden reversal for Anthropic, which gets about 80% of its revenue from enterprise customers, CEO Dario Amodei told CNBC in January.
The company entered the DoD’s ecosystem in late 2024 through a partnership with software and services provider Palantir. Months after that agreement, Claude became the first major model deployed in the government’s classified networks through a $200 million contract with the DoD. The model’s popularity continued to soar across the business world, particularly in the area of coding assistants.
Defense Secretary Pete Hegseth declared on X that any contractor or supplier doing business with the U.S. military is barred from commercial activity with Anthropic.
The announcement came after Anthropic executives refused to comply with the government’s demands over use of its models. The company wanted assurances that its AI would not be tapped for fully autonomous weapons or mass domestic surveillance of Americans.
Anthropic’s models are still being used to support U.S. military operations in Iran, even after the announcement from the Trump administration, as CNBC previously reported.
The matter is far from resolved.

Anthropic can still appeal through the legal system, but has yet to act because nothing official has happened; so far, the dispute has played out mostly in social media posts.
Anthropic said in a blog post on Friday, citing a federal statute enacted by Congress, that Hegseth lacks the authority to restrict companies that work with Anthropic from doing business with the government.
Should the supply chain risk designation be made official, it would only apply to companies’ use of Claude as part of defense contracts and “cannot affect how contractors use Claude to serve other customers,” the company wrote.
Anthropic declined to comment beyond pointing to a blog post.
But multiple defense tech execs, who asked not to be named because of the sensitivity of the matter, said they’re preemptively moving their workforce off Claude.
One defense company executive said they told employees last week to start switching out Claude for other models, including some open-source options, a process that could take a week or two. That was in preparation for Friday’s deadline as both sides refused to budge.
The CEO of another defense tech company said this week that employees were directed on Monday to stop using Claude until they’re given further guidance. The company has to assume a ban will go into effect, the person said.
‘Abundance of caution’
Harstrick, who served as a military intelligence officer in the Army reserves and deployed to Afghanistan and Iraq in 2017, said his companies are switching “out of an abundance of caution.”
“This in no way reflected a perceived shortcoming of Claude with most commenting that the situation was lamentable as the product itself is excellent,” he wrote.
Hours after the Pentagon’s announcement on Friday, OpenAI CEO Sam Altman said in a post on X that his company had agreed to terms with the DoD on the use of its AI models. After facing a barrage of criticism over the weekend, Altman followed up with a post on Monday acknowledging his timing was “sloppy” and that the company “shouldn’t have rushed” the deal.
Altman posted an internal memo saying the company would amend the contract to include new language to clarify that “the AI system shall not be intentionally used for domestic surveillance of U.S. persons and nationals.”
President Donald Trump said in a social media post on Friday that federal agencies will have six months to phase out their use of the technology. So far, in addition to the Defense Department, officials at the Treasury Department, State Department and Health and Human Services have directed employees to move off Claude.
One venture investor in defense tech told CNBC that any serious company doing business with the federal government would avoid dependency on a single supplier, so switching off Anthropic shouldn’t pose a major problem.
Palantir Technologies CEO Alex Karp attends the 56th annual World Economic Forum (WEF) meeting in Davos, Switzerland, January 20, 2026.
Denis Balibouse | Reuters
Palantir, which counts on the government for close to 60% of its U.S. revenue, declined to comment on its plans.
Analysts at Piper Sandler wrote in a note to clients on Tuesday that Anthropic is “heavily embedded in the Military and the Intelligence community” and that moving off the company’s technology could “pose some short-term disruptions” to Palantir’s operations.
“While re-establishing AI functions with a new vendor can and will happen if needed, Anthropic was [a] trailblazer in terms of operationalizing AI models for data-sensitive environments,” wrote the analysts, who have a buy rating on Palantir’s stock. “Onboarding and negotiating replacement technology will take time and resources” that could have been “spent on growth opportunities,” they wrote.
Not everyone is acting hastily.
C3 AI Chairman and former CEO Tom Siebel counts the DoD as a customer and has a partnership with consulting firm Booz Allen Hamilton. Siebel said in an interview that he doesn’t see a “need to mitigate” Claude at this time, “until it gets litigated.”
A partner at a defense-focused venture firm said his portfolio companies have limited exposure to Claude and are mostly users of OpenAI’s technology.
Tara Chklovski, CEO of Technovation, a global tech education nonprofit, said that if the Defense Department pursues this strategy to its end and cuts off Anthropic, it could be a dangerous decision.
She said Anthropic has been the most deliberate model creator when it comes to building systems for the military, and that any alternative supplier the government uses will be less safe.
The government also has contracts with Google for use of Gemini and with Elon Musk’s xAI for Grok.
“Once the dust settles, they’ll realize that Anthropic is the only one that has this very unique set of skills in technology,” Chklovski said. “Competition is so fierce that people think going fast and without the weight of these safeguards is the only way to succeed. Anthropic is showing that’s probably not the way.”
— CNBC’s Ashley Capoot and Jordan Novet contributed to this report