It's a myth that you need Mythos to find bugs: Open source models can do it just as well

The Register / 4/24/2026


Key Points

  • The article argues that bug finding does not require proprietary “Mythos”-style tooling because open-source models can perform similarly well for automated security testing.
  • OpenAI’s first security hire, Ari Herbert-Voss, believes increased automation in bug detection can strengthen security outcomes without necessarily eliminating jobs.
  • It frames automated bug finding as a shift in how security work is performed, emphasizing more efficient identification of vulnerabilities.
  • The piece highlights ongoing debate within security about which model access and tooling approaches are most effective for real-world bug detection.

OpenAI's first security hire, Ari Herbert-Voss, thinks more automated bug finding will improve security without costing jobs

Fri 24 Apr 2026 // 11:41 UTC

Black Hat Asia Open source models can find bugs as effectively as Anthropic's Mythos, according to Ari Herbert-Voss, CEO of AI-powered security startup RunSybil and OpenAI's first security hire.

Speaking at the Black Hat Asia conference in Singapore today, Herbert-Voss said Mythos excels at finding both "shallow" bugs - well-described flaws that are easy to validate - and more complex vulnerabilities.

In his talk, he attributed this to "supralinear scaling": where researchers once assumed LLM capability would improve linearly with resources, evidence now suggests a model trained with twice the data, compute, and time is roughly four times more capable.
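The claim can be read as a power law. A minimal sketch, assuming (hypothetically) that capability grows as resources raised to an exponent alpha, where alpha = 2 matches the "twice the resources, four times the capability" figure:

```python
# Hypothetical power-law reading of "supralinear scaling":
# capability ~ resources ** alpha. alpha = 2 reproduces the
# "2x data/compute/time -> 4x capability" claim from the talk;
# the exponent itself is an illustrative assumption, not a
# figure from the article.
def capability_multiplier(resource_multiplier: float, alpha: float = 2.0) -> float:
    return resource_multiplier ** alpha

print(capability_multiplier(2.0))  # doubling resources -> 4.0
```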

He hinted supralinear scaling might produce even better multipliers but could not say more due to a non-disclosure agreement.

Anthropic has kept access to Mythos tightly restricted, citing fears of misuse.

However, Herbert-Voss argues attackers and defenders alike can achieve comparable results with open source models by building "scaffolding" that runs several of them in a harness. That approach also improves defense in depth: different models tend to catch different flaws, a useful hedge against any single model's blind spots.
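The scaffolding idea - run several models over the same target and merge their findings - might be sketched like this. The models and the per-model query functions are hypothetical stand-ins, not any real API:

```python
# Minimal sketch of multi-model "scaffolding": ask several models
# about the same code and union their reported findings. Every
# model here is a toy placeholder; a real harness would call an
# actual inference API per model.
from typing import Callable, Dict, List, Set

def ensemble_scan(code: str,
                  models: Dict[str, Callable[[str], List[str]]]) -> Set[str]:
    """Union the bug reports from every model. Different models tend
    to catch different flaws, so the union hedges blind spots."""
    findings: Set[str] = set()
    for name, ask_model in models.items():
        for finding in ask_model(code):
            findings.add(finding)  # set membership deduplicates repeats
    return findings

# Toy stand-ins illustrating per-model blind spots:
models = {
    "model-a": lambda code: ["sql-injection"] if "query(" in code else [],
    "model-b": lambda code: ["path-traversal"] if "open(" in code else [],
}
print(ensemble_scan("query(open(path).read())", models))
```

Neither toy model alone flags both issues in the sample input; the union does, which is the point of running them in a harness.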

Cost is another driver. Mythos is expensive to build and run, and may never be publicly available, making open source alternatives not just viable but necessary for many organizations.

Herbert-Voss feels human expertise is still needed to orchestrate open source models so they together deliver Mythos-grade performance, and to assess the bug reports AI generates.

He then noted that fuzzing - the testing technique that injects random or near-random data into software to see whether it triggers crashes or other faults - also generates so many warnings that it can create extra work for humans.

AI bug-hunters already have the same problem, and he expects it will persist.

Herbert-Voss therefore thinks infosec workers will have plenty on their plates for the foreseeable future, and that the economic incentive to use AI – someone has to buy the services that pay for all those GPUs and datacenters – will act as a forcing function that pushes infosec teams to adopt it, improving their proactive and defensive work in the process. ®
