What Happened
In February 2026, Anthropic conducted an intensive security audit of Mozilla Firefox using its Claude Opus 4.6 model. In just two weeks, the AI identified 22 security-sensitive vulnerabilities, 14 of which were classified as high severity and requiring immediate attention. Mozilla subsequently issued 22 CVEs (Common Vulnerabilities and Exposures) for these security bugs.
The audit wasn’t limited to security issues: Claude also uncovered 90 other bugs throughout Firefox’s codebase, demonstrating a broad capability to identify many types of software defects. In all, the team scanned nearly 6,000 C++ files and submitted 112 unique bug reports during the testing period.
Mozilla has already addressed these vulnerabilities, rolling out fixes in Firefox version 148, which was released on February 24, 2026.
Why It Matters
This partnership represents a significant milestone in cybersecurity automation. The 22 vulnerabilities found by Claude in February 2026 exceeded the number reported in any single month throughout 2025, highlighting the AI’s efficiency in bug detection.
For everyday users, this means Firefox has become more secure through AI-assisted vulnerability discovery. The rapid identification and patching of these security flaws helps protect millions of users from potential cyber attacks before malicious actors can exploit them.
The collaboration also reveals important insights about AI capabilities in cybersecurity. While Claude excelled at finding vulnerabilities, it struggled to create actual exploits. Despite hundreds of attempts and approximately $4,000 in API costs, the AI successfully turned security defects into working exploits in only two cases. This suggests that while AI can efficiently identify weaknesses, creating functional attacks remains more challenging.
Background
Traditionally, security auditing relies on human experts manually reviewing code, which is time-intensive and may miss subtle vulnerabilities. Browser security is particularly critical because web browsers are among the most targeted applications, serving as gateways to users’ personal and professional data.
Mozilla has long prioritized security in Firefox development, but the scale and complexity of modern browser codebases make comprehensive manual auditing increasingly difficult. Firefox contains millions of lines of code across multiple programming languages, making it challenging for human auditors to examine every potential vulnerability.
Anthropic’s Claude Opus 4.6 represents the latest generation of large language models specifically trained on code analysis and security research. Unlike traditional static analysis tools that follow predefined rules, AI models can identify subtle patterns and complex vulnerability chains that might escape conventional detection methods.
What’s Next
This successful partnership likely signals a broader shift toward AI-assisted security auditing across the software industry. The makers of other major browsers, such as Google Chrome and Microsoft Edge, may adopt similar AI-powered security testing approaches.
For cybersecurity professionals, this development suggests that AI tools will become essential complements to human expertise rather than replacements. Security teams may need to integrate AI auditing capabilities into their workflows while maintaining human oversight for critical decision-making.
The low cost of AI-assisted auditing (the exploit-development experiments, for example, ran to roughly $4,000 in API fees) could make comprehensive security testing accessible to smaller software companies that previously couldn’t afford extensive manual review.
However, this advancement also raises concerns about the cybersecurity arms race. If AI can efficiently discover vulnerabilities, malicious actors may also use similar tools to find and exploit security flaws before they’re patched. This underscores the importance of responsible AI development and rapid vulnerability disclosure practices.