AI Researcher: Claude Outperformed Me at Finding Security Flaws

What Happened

In an unprecedented demonstration at a cybersecurity conference in March 2026, Nicolas Carlini, a Research Scientist at Anthropic, showed Claude AI discovering zero-day vulnerabilities in real time. The AI successfully identified:

- A blind SQL injection vulnerability in Ghost CMS (CVE-2026-26980) that allowed complete compromise of the admin database
- A complex stack buffer overflow in the Linux kernel’s NFSv4 daemon that had gone undetected since 2003
- Multiple smart contract vulnerabilities worth millions in simulated funds

Carlini, who has published extensively on AI safety and adversarial machine learning, acknowledged during the presentation that Claude’s vulnerability-discovery capabilities now exceed those of expert human researchers.
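To make the first finding concrete: in a *blind* SQL injection, the attacker never sees query results directly, only a yes/no signal (a page rendering differently, or a delayed response), and reconstructs secrets one character at a time. The sketch below illustrates that inference loop against a toy boolean oracle; the `SECRET` value and `oracle` function are invented stand-ins, not details of the actual Ghost CMS flaw.

```python
# Toy illustration of blind SQL injection extraction: the attacker only gets
# a true/false signal per probe, yet recovers the secret character by character.
import string

SECRET = "s3cr3t"  # hypothetical value held by a vulnerable backend

def oracle(guess_prefix: str) -> bool:
    """Stand-in for a probe like ' OR SUBSTR(password,1,N)='<prefix>' --
    observable only as a true/false difference in the HTTP response."""
    return SECRET.startswith(guess_prefix)

def extract_secret(max_len: int = 32) -> str:
    recovered = ""
    for _ in range(max_len):
        for ch in string.ascii_lowercase + string.digits:
            if oracle(recovered + ch):
                recovered += ch
                break
        else:
            break  # no candidate extended the prefix: secret fully recovered
    return recovered

print(extract_secret())  # → s3cr3t, recovered without ever reading it directly
```

Real attacks follow the same pattern but route each probe through an HTTP request, which is why blind injection is slow and noisy yet still fully effective.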

Read more →

Anthropic Sues Pentagon Over AI Warfare Restrictions

What Happened

The conflict erupted when Anthropic CEO Dario Amodei refused to back down from restrictions on how the Pentagon could use Claude AI systems, particularly regarding autonomous weapons and mass surveillance capabilities. Defense Secretary Pete Hegseth responded by labeling Anthropic a “Supply-Chain Risk to National Security” on March 5, 2026, effectively blocking federal agencies and contractors from doing business with the company. The designation came after heated negotiations over Anthropic’s role in President Trump’s “Golden Dome” missile defense program, which aims to deploy U.

Read more →

Anthropic Exposes Massive AI Theft: Chinese Firms Used 24K Fake Accounts

What Happened

Anthropic discovered that DeepSeek, MiniMax, and Moonshot AI had created thousands of fake accounts to systematically extract knowledge from its Claude AI model. The scheme involved more than 16 million exchanges with Claude across 24,000 fraudulent accounts, representing one of the largest known cases of AI model theft. The technique, called “distillation,” involves using responses from an advanced AI model to train a smaller, more efficient version. While distillation is a legitimate research method when done with permission, Anthropic says these companies violated its terms of service by conducting the practice without authorization and at massive scale.
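The core of distillation can be shown in a few lines: a "student" model is trained not on ground-truth labels but on the responses of a "teacher" model it can only query. The sketch below uses a toy linear teacher as a stand-in for a large language model's API; nothing here reflects the specific accounts or exchanges described in the article.

```python
# Toy distillation: the student never sees the teacher's parameters, only its
# answers to queries (analogous to collecting API responses), and fits itself
# to those answers by gradient descent.
import random

random.seed(0)

def teacher(x: float) -> float:
    # Stand-in for an expensive model the student can only query.
    return 3.0 * x + 1.0

# "Query" the teacher to build a (prompt, response) dataset.
xs = [random.uniform(-1, 1) for _ in range(200)]
ys = [teacher(x) for x in xs]

# Train a student of the same form (w*x + b) on the teacher's responses alone.
w, b = 0.0, 0.0
lr = 0.1
for _ in range(500):
    grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # student converges toward the teacher: 3.0 1.0
```

At LLM scale the same pattern applies, except the "responses" are text completions and the student is itself a neural network, which is why high-volume automated querying is the signature of unauthorized distillation.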

Read more →