The Pentagon just added a third company to its AI roster
Google has landed in the Pentagon, becoming the third major AI lab to supply the United States with models for classified government use.
This deal is part of a broader push to embed AI across US military operations. But what the Department of Defense is — and is not — permitted to do with Gemini is... pretty vague.
Google's contract is notably looser than the ones OpenAI and xAI signed with the DoD earlier this year. While OpenAI maintained discretion over its safety guardrails, Google handed the reins to Uncle Sam, calling it “a responsible approach to supporting national security.”
Why does the Pentagon need three different AI vendors, you may ask? Well, adding Google to the mix provides flexibility and helps prevent any single company from controlling the military’s supply chain.
The not-so-fine print:
- The Pentagon may use Gemini for “any lawful government purpose,” which includes adjusting AI safety settings
- The deal prohibits using AI for domestic mass surveillance and autonomous weapons (at least without human oversight)
- Google has no veto power over how the DoD uses its technology
Whether these deal terms are legally enforceable is still TBD.
The Dissent
Plenty of people at Google fought against this very deal. Over 600 employees signed a letter urging CEO Sundar Pichai to reject the Pentagon’s proposal. They cited concerns that the technology would be used in “inhumane or extremely harmful ways.”
Pichai approved the contract less than one day later.
Even with contractual safeguards in place, there's no guarantee that loopholes won't be exploited, which is precisely why Anthropic refused a similar agreement. The move earned the company respect from AI ethicists and the label “supply-chain risk” from the US government.
One company's refusal became another's opportunity. History will determine who made the right call.
Anthropic held the line. Google redrew it… then handed over the pen. - TL