On April 7, Anthropic announced Mythos, a limited-access AI model designed to autonomously find and exploit zero-day vulnerabilities in critical software, backed by 100M USD in usage credits and 4M USD in donations to open-source security organizations. AISLE researchers ran Mythos's showcase vulnerabilities past small, cheap, open-weight models and found they recovered much of the same analysis, including detection of the flagship FreeBSD exploit by a 3.6B-parameter model costing $0.11 per million tokens. The finding challenges the narrative that frontier-scale models are required for AI-driven security, arguing instead that the moat lies in system design and security expertise, not in the model itself.
Research
Small models also found the vulnerabilities that Mythos found
AISLE researchers show that small open-weight models replicate Anthropic's Mythos vulnerability-finding capabilities at roughly 1/100th the cost, arguing that AI security breakthroughs depend on methodology and expertise rather than frontier model scale.
Saturday, April 11, 2026, 12:00 PM UTC · 2 MIN READ · SOURCE: Hacker News · BY sys://pipeline
Tags
research
/// RELATED
Infrastructure · 6d ago
Soft launch of open-source code platform for government
The Dutch government launches code.overheid.nl on Forgejo to achieve digital sovereignty and replace its commercial GitHub dependency across all government bodies.
Policy · 4d ago
Mythos complicates the breakup, says Pentagon CTO, but Anthropic is still barred
The Pentagon CTO reaffirms that Anthropic remains barred from DoD systems due to supply-chain risk, despite NSA and Commerce Department evaluations of its Mythos model for cybersecurity.