Analyst(s): Mitch Ashley, Fernando Montenegro
Publication Date: April 8, 2026
Anthropic’s Project Glasswing uses Claude Mythos Preview to autonomously find zero-day vulnerabilities in critical software. A coalition of tech leaders joins to put AI capability to work for defense first.
What is Covered in This Article
- Anthropic launched Project Glasswing, a multi-organization initiative using Claude Mythos Preview to identify and remediate zero-day vulnerabilities in critical software, backed by $100M in usage credits and $4M in open-source security donations.
- Claude Mythos Preview autonomously found thousands of zero-day vulnerabilities across every major operating system and browser, including flaws that survived decades of human review and millions of automated test runs.
- AI vulnerability detection has reached a capability threshold that changes the economics of both offensive and defensive security work; controlled access to Mythos Preview buys time for defenders, but the proliferation timeline is measured in months, not years.
- For software engineering and platform teams, the implications extend well beyond security operations: AI-generated code now requires adversarial-grade review at the same velocity it is produced, and only AI-native AppSec tooling operating in the development workflow can close that gap.
- Anthropic’s coalition structure simultaneously includes its cloud distribution partners and their direct security competitors, a positioning dynamic that Glasswing accelerates but does not resolve.
The News: Anthropic launched Project Glasswing on April 7, 2026, announcing an AI vulnerability-detection initiative built around Claude Mythos Preview, an unreleased frontier model purpose-evaluated to find and remediate flaws in critical software. Founding organizations include Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorgan Chase, The Linux Foundation, Microsoft, NVIDIA, and Palo Alto Networks. Anthropic committed $100 million in model usage credits to Glasswing participants and donated $4 million to open-source security organizations, including Alpha-Omega, OpenSSF, and the Apache Software Foundation. More than 40 additional organizations responsible for critical software infrastructure have been granted access for first-party and open-source scanning.
Claude Mythos Preview’s AI vulnerability detection capabilities produced concrete pre-announcement findings. According to Anthropic, the model autonomously identified thousands of zero-day vulnerabilities across every major operating system and browser, including a 27-year-old vulnerability in OpenBSD enabling remote crash of any connected machine; a 16-year-old FFmpeg flaw in a line of code that automated tools had tested five million times without detecting it; and a chained Linux kernel exploit enabling full privilege escalation from ordinary user access. On the CyberGym cybersecurity benchmark, Mythos Preview scored 83%, while Claude Opus 4.6 scored 67%. Anthropic does not plan general availability for Mythos Preview. Post-preview access for participants will be available at $25/$125 per million input/output tokens via Claude API, Amazon Bedrock, Google Cloud Vertex AI, and Microsoft Foundry.
Anthropic Glasswing: AI Vulnerability Detection Has Crossed a Threshold
Analyst Take: AI vulnerability detection has crossed a threshold that development and security teams have been anticipating and dreading simultaneously.
We’ve been building towards this. From Barton Miller’s first use of ‘fuzzing’ techniques in 1989 to the DARPA AI Cyber Challenge (AIxCC) of 2023-2025, the use of machine learning and artificial intelligence to address security vulnerabilities has seen significant progress.
The major players in AI research have also been moving into cybersecurity, with offerings including OpenAI’s Aardvark, Google’s CodeMender and Big Sleep projects, AWS Security Agent, and Microsoft Security Copilot, among others.
Anthropic Mythos appears to be a step change. Mythos Preview’s ability to find flaws that survived decades of human review and millions of automated tests – autonomously, without human steering – marks the point where AI capability in security moves from assistant to primary researcher. The question Glasswing leaves open is whether controlled access to a model this capable changes the proliferation timeline by any meaningful margin, or whether it primarily establishes Anthropic’s infrastructure-layer position before competitors arrive at the same threshold.
The Controlled Access Model Buys Time. Not Much.
Anthropic is direct about the core problem: Mythos Preview-class capabilities will eventually proliferate, including potentially to actors not committed to deploying them safely. The Glasswing coalition attempts to establish a defensive advantage ahead of that curve. The logic holds, but the timeline advantage is measured in months. Other frontier model developers are training on similar code corpora, applying similar agentic reasoning frameworks, and publishing research on vulnerability detection. The 16-point gap between Mythos Preview and Opus 4.6 on CyberGym is meaningful today and compressible over the next two release cycles.
What This Means for Software Engineering Teams
AI vulnerability detection at this capability level changes the calculus for development organizations well beyond security operations. Every software team using AI to generate code is already producing output at a velocity that traditional AppSec tooling cannot match. Mythos Preview demonstrates that the same capability profile that accelerates development can now find complex, chained vulnerabilities in that code at comparable speed. That is a software lifecycle problem, not a security team problem in isolation.
The gap between AI code generation velocity and adversarial-grade review capacity has widened structurally. Futurum’s January 2026 Software Lifecycle Engineering Decision-Maker Survey (N=393) found that 78% of CIOs cite governance, compliance, and data security as top barriers to AI adoption in software development. Glasswing provides the concrete evidence base for why those concerns are calibrated correctly. Closing that gap requires AI-native AppSec tooling operating at the same layer as code generation, not bolted on as a downstream step.
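The point about operating "at the same layer as code generation" can be made concrete with a minimal sketch of a pre-merge review gate. All names here are hypothetical illustrations, not a real product API: the idea is simply that every diff passes through an adversarial-grade AI review before it can merge, rather than being scanned downstream.

```python
# Minimal sketch of an in-workflow AppSec gate (all names hypothetical).
# A real pipeline would call an AI review model on each diff; here the
# model call is a stub so the control flow is runnable as-is.
from dataclasses import dataclass


@dataclass
class Finding:
    severity: str  # "low" | "medium" | "high" | "critical"
    summary: str


def review_diff(diff: str) -> list[Finding]:
    """Placeholder for a call to an AI code-review model on `diff`."""
    # In production this would send the diff to the model and parse findings.
    return []


def gate(diff: str, block_at: tuple[str, ...] = ("high", "critical")) -> bool:
    """Allow merge only when no finding reaches a blocking severity."""
    return all(f.severity not in block_at for f in review_diff(diff))


print(gate("example diff"))  # no findings from the stub, so the gate passes
```

The design choice that matters is placement: because the gate runs per-diff in the merge path, review capacity scales with generation velocity instead of accumulating as a post-hoc backlog.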
The “1% Problem”: The Human Triage Bottleneck
According to Anthropic’s Red Team, less than 1% of the thousands of vulnerabilities Mythos Preview has discovered so far have been fully patched by maintainers. This is not a failure of the model, but a structural breaking point in the software supply chain. When an AI can scan millions of lines of code and autonomously generate valid, high-severity bug reports in minutes, human triage teams—and the slow, coordinated disclosure processes they rely on—become the immediate bottleneck.
For enterprise CISOs and platform engineering leaders, this “1% problem” highlights a critical operational reality: having perfect, AI-generated visibility into your vulnerabilities is functionally useless if your Mean Time to Remediate (MTTR) cannot keep pace with the discovery rate. The ecosystem is facing a looming crisis in which defenders will drown in a backlog of known, unpatched zero-days because human engineers cannot deploy patches fast enough. AI-driven discovery will mandate AI-driven remediation; anything less will leave infrastructure exposed.
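The discovery-versus-remediation imbalance is easy to quantify with a back-of-envelope model. The rates below are hypothetical illustrations, not figures from the announcement; the point is that any constant gap between findings per day and patches per day produces a linearly growing backlog of known, unpatched zero-days.

```python
# Illustrative backlog model (all rates hypothetical): when AI discovery
# outpaces human remediation at constant rates, the unpatched backlog
# grows linearly with the rate gap.

def backlog_after(days: int, found_per_day: float, patched_per_day: float,
                  initial_backlog: float = 0.0) -> float:
    """Unpatched-vulnerability backlog after `days`, assuming constant rates."""
    return max(0.0, initial_backlog + (found_per_day - patched_per_day) * days)


# Hypothetical: the model reports 40 valid findings/day; the team ships 5 patches/day.
for days in (30, 90, 365):
    print(f"day {days}: backlog ≈ {backlog_after(days, 40, 5):,.0f}")
```

Running the sketch shows the backlog passing 1,000 within a month and 12,000 within a year at these assumed rates, which is the shape of the "1% problem": triage and patching capacity, not discovery, becomes the binding constraint.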
The Asymmetric Economics of Exploitation
This release is further evidence that we must fundamentally recalibrate our understanding of the economic barrier to entry for offensive cyber operations. Historically, discovering a deeply buried zero-day vulnerability and chaining together a reliable exploit (such as bypassing sandboxes or executing remote code) required weeks or months of labor by elite, highly compensated security researchers.
Mythos Preview has demonstrated the ability to autonomously perform this same end-to-end process—finding a zero-day and writing a functional exploit—for under $50 in compute costs in some instances. The barrier to entry for elite-level hacking continues to shrink. When the cost of generating a sophisticated, weaponized exploit drops to the price of a tank of gas, the volume and velocity of attacks will inevitably surge. Defenders can no longer rely on the sheer economic friction of exploit development to protect secondary or tertiary attack surfaces.
Anthropic’s Infrastructure-Layer Positioning
The coalition structure in Glasswing warrants examination beyond the security narrative. Establishing Mythos Preview as the capability engine for a group that simultaneously includes AWS, Google, Microsoft, Cisco, and Palo Alto Networks positions Anthropic as an infrastructure-layer provider in AI-native security. Those partners operate significant and competitive positions in endpoint, cloud, and network security. They are also Anthropic’s primary cloud distribution channels. The dynamic is simultaneously cooperative and competitive, and Glasswing intensifies rather than resolves that tension.
As Mythos-class capabilities become available at production scale, the delineation between Anthropic’s model and each partner’s security platform will require a cleaner answer. The $100M in usage credits establishes pricing expectations: $25/$125 per million input/output tokens post-preview. For teams running high-volume vulnerability scanning at enterprise scale, that budget line grows quickly. Cost modeling for AI vulnerability detection at production scale is an obligation that security and platform engineering teams should begin now, before access expands.
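A rough cost model follows directly from the stated post-preview rates of $25 per million input tokens and $125 per million output tokens. The per-scan token counts below are assumptions for illustration only; teams should substitute measured figures from their own pilots.

```python
# Back-of-envelope cost model at the stated post-preview rates:
# $25 / 1M input tokens, $125 / 1M output tokens.
INPUT_PER_M = 25.0    # USD per million input tokens (from the announcement)
OUTPUT_PER_M = 125.0  # USD per million output tokens (from the announcement)


def scan_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of one scan at the stated token rates."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M


# Hypothetical workload: one deep scan of a large repository consumes
# ~20M input tokens (code plus context) and ~1M output tokens (findings).
per_scan = scan_cost(20_000_000, 1_000_000)
annual = per_scan * 52 * 200  # weekly scans across a 200-repo estate
print(f"per scan: ${per_scan:,.0f}; annual: ${annual:,.0f}")
```

Even with these modest assumed token counts, the sketch lands at $625 per deep scan and several million dollars per year for weekly coverage of a mid-sized estate, which is why the cost modeling belongs in planning now rather than after access expands.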
What to Watch:
- Whether Anthropic’s 90-day public reporting delivers specific vulnerability counts, remediation rates, and disclosure timelines, or defaults to qualitative progress narratives. Concrete numbers would establish a credible evidence baseline; vague summaries would invite the conclusion that Glasswing is more positioning than program.
- How quickly competing frontier models close the CyberGym gap. The 83% vs. 67% spread between Mythos Preview and Opus 4.6 is substantial; the question is how durable that margin is at 6-12 month time scales as OpenAI, Google DeepMind, and specialized security AI vendors advance their own vulnerability-detection capabilities.
- Whether Glasswing partners, particularly CrowdStrike and Palo Alto Networks, begin integrating Mythos Preview visibly into product releases. Product integration would confirm that the coalition is generating commercializable security tooling rather than research access that stays inside partner security operations teams.
- The formation, membership, and degree of government involvement of the proposed independent third-party governance body, which Anthropic identifies as a medium-term goal. An institution with real public-sector participation and binding standards would signal that Glasswing seeds durable industry infrastructure; a vendor-led working group would not.
See the complete announcement of Project Glasswing on the Anthropic website.
Declaration of Generative AI and AI-assisted Technologies in the Writing Process: While preparing this work, the author used AI capabilities from Google Gemini and/or Futurum’s Intelligence Platform to summarize source material and assist with general editing. After using these capabilities, the author reviewed and edited the content as needed. The author takes full responsibility for the publication’s content.
Disclosure: Futurum is a research and advisory firm that engages or has engaged in research, analysis, and advisory services with many technology companies, including those mentioned in this article. The author does not hold any equity positions with any company mentioned in this article.
Analysis and opinions expressed herein are specific to the analyst individually and data and other information that might have been provided for validation, not those of Futurum as a whole.
Other Insights from Futurum:
OpenAI Acquires Promptfoo, Gaining 25% Foothold in Fortune 500 Enterprises
Claude Found 500 Zero-Days. Who Patches Them Before Attackers Arrive?
RSAC 2026 Conference: The AI ‘Tragedy of the Commons’