Anthropic’s most dangerous AI model just fell into the wrong hands

April 22, 2026 Jess Weatherbed

Anthropic's Mythos AI model, a powerful cybersecurity tool that the company said could be dangerous in the wrong hands, has been accessed by a "small group of unauthorized users," Bloomberg reports. An unnamed member of the group, identified only as "a third-party contractor for Anthropic," told the publication that members of a private online forum got into Mythos through a mix of tactics, combining the contractor's access with "commonly used internet sleuthing tools."

The Claude Mythos Preview is a new general-purpose model that's capable of identifying and exploiting vulnerabilities "in every major operating system and every major web browser …

Read the full story at The Verge.
