OPINION PIECE
Anthropic vs The Pentagon - what happened
Anthropic had a military contract with the Pentagon. When it came out that the DoD had used their models during the Maduro raid, Anthropic asked what exactly their tools were being used for. The Pentagon didn't appreciate being asked.
Anthropic had two restrictions written into their contract — no autonomous weapons without a human in the loop, no surveillance of US citizens. The government responded by officially designating Anthropic a supply chain risk and blacklisting them from DoD-connected business.
I actually respect the move. They weren't trying to shut down defence AI. They said: here are two specific things we won't allow, everything else is yours. The government said that wasn't good enough.
Then things got strange
The same day the blacklisting was announced, OpenAI stepped in and offered the Pentagon a deal. Everyone assumed that meant no restrictions — that OpenAI had agreed to let the DoD do whatever they wanted. OpenAI then said that wasn't true, and that they'd written the same two red lines into their own contract.
So the Pentagon blacklisted Anthropic for two restrictions, then signed with a company carrying the same two restrictions. That genuinely doesn't add up, and I don't have a clean explanation for it.
The week got stranger still. Anthropic's CEO said publicly that they've been having productive conversations with the DoD and are still working toward a resolution. The DoD responded by saying there are no active negotiations with Anthropic, full stop. Those two statements can't both be true. My best guess is negotiation tactics, but I honestly can't tell you which side is playing games.
The consumer reaction was unexpected
While all of this played out, Anthropic launched a one-click migration tool letting you bring your full ChatGPT memory and conversation history straight into Claude. ChatGPT uninstalls surged 295%. Claude hit the number one spot in the App Store — from outside the top ten. A year ago, OpenAI dominated enterprise AI usage almost completely. By February 2026, Anthropic had nearly closed that gap.
The Pentagon situation seems to have accelerated their growth rather than damaged it. People saw Anthropic hold those two lines and decided that was a company they trusted with their data.
My take
The restrictions Anthropic put in their contract — no autonomous weapons without human oversight, no surveillance of US citizens — are not radical positions. Arguing those two lines constitute a supply chain risk is going to be a hard case to make in court, which is apparently where this is heading. I think they win.
But the bigger story is that we're watching the first real public fight between an AI lab and a government customer over what AI models can and can't be used for. Both sides are making statements that openly contradict each other. It's going to be a long one.
-Matt Wolfe