ANTHROPIC

One “no” just shook the AI industry

Donald Trump has ordered US federal agencies to phase out Anthropic’s technology after a dispute with the Pentagon over how its AI should be used.

The issue centres on military access to Anthropic’s model, Claude.

The Pentagon pushed for broader usage, while CEO Dario Amodei refused without clear safeguards.

Anthropic said it wanted assurances that its AI would not be used for mass surveillance or fully autonomous weapons.

It also raised concerns that proposed contract terms could override these protections.

In short:

  • The US government is phasing out Anthropic's technology after a disagreement over AI use

  • The conflict focuses on limits around surveillance and autonomous weapons

  • Tech leaders are split on whether safety or access should take priority

The pressure point

The Pentagon responded that it does not plan to use AI for illegal surveillance or fully autonomous weapons, but did not provide detailed use cases.

Officials warned Anthropic could face contract cancellation or be labelled a “supply chain risk” if it did not comply.

The situation has divided the tech industry. Some leaders criticised Anthropic’s stance, while others, including OpenAI’s Sam Altman, supported its focus on safety.

Anthropic has indicated it may step away from the contract, as the Pentagon explores alternative providers.

“We won’t misuse it” is doing a lot of heavy lifting here. - MG
