ANTHROPIC

When access slips

Anthropic is investigating claims that a small group of people accessed its Claude Mythos model without permission. 

The company said it is looking into whether the model was accessed through one of its third-party vendor systems after Bloomberg reported that users in a private forum were able to use it without going through the normal approval process.

Claude Mythos is a cybersecurity model that Anthropic has not released publicly, saying it is too powerful for open access. 

It has only been shared with a small number of tech and finance firms to help them spot and fix security flaws. 

Anthropic says it has no evidence that its own systems were breached, and there is no sign so far that malicious actors accessed the model.

Still, the report raises questions about how well powerful AI models can be controlled once outside partners are involved. 

Bloomberg said the person linked to the access already had permission to view Anthropic’s models through work with a third-party contractor. 

Experts say this appears more like a misuse of access than a traditional hack.

The bigger concern is how these tools could be misused if they slip outside the right controls.

Advanced AI models could support fraud, cyber abuse, or other harmful activity. 

But UK officials say the technology could also improve cyber defence if it is secured properly.

Key points:

  • Anthropic is investigating whether Claude Mythos was accessed through a third-party vendor system.

  • The case shows how hard it is to control powerful AI models once access is shared externally.

  • UK officials say AI could strengthen cyber defence, but only if basic security issues are fixed too.

A vendor wrinkle 

Speaking at the CyberUK conference, NCSC chief Richard Horne said frontier AI is making it easier to find and exploit existing weaknesses at scale.

But he stressed that poor cyber basics, like outdated software and weak security practices, remain the bigger issue.

Security Minister Dan Jarvis also urged AI firms to work with the government to help protect critical infrastructure. 

The UK still relies on companies like Anthropic for access to these tools, since the most advanced models are mainly built in the US and China. 

OpenAI is also developing a cyber-focused model, GPT 5.4 Cyber, showing how quickly this space is growing.

The scariest part is how boring the leak sounds. – MG
