Anthropic’s most dangerous AI model just fell into the wrong hands

The Verge / 4/22/2026


Key Points

  • Anthropic’s Mythos AI cybersecurity model has reportedly been accessed by a small group of unauthorized users, raising concerns the system could be misused.
  • A third-party contractor tied to Anthropic allegedly provided an access path that the forum members exploited to reach Mythos.
  • The unauthorized users reportedly used common “internet sleuthing” techniques in combination with contractor access to gain entry.
  • The article frames Mythos as a capable, general-purpose model for identifying and exploiting vulnerabilities across major operating systems and web browsers.
  • The incident highlights security and access-control risks around powerful AI tools even before they are widely handled by legitimate customers.
[Image: Vector illustration of the Anthropic logo.]

Anthropic's Mythos AI model, a powerful cybersecurity tool that the company has said could be dangerous in the wrong hands, has been accessed by a "small group of unauthorized users," Bloomberg reports. An unnamed member of the group, identified only as "a third-party contractor for Anthropic," told the publication that members of a private online forum got into Mythos through a mix of tactics, combining the contractor's access with "commonly used internet sleuthing tools."

The Claude Mythos Preview is a new general-purpose model that's capable of identifying and exploiting vulnerabilities "in every major operating system and every major web browser …

Read the full story at The Verge.