Teens Sue Elon Musk's xAI After Grok Creates Sexual Deepfakes

What Happened

The lawsuit, filed in federal court, involves three plaintiffs—two current minors and one adult who was underage when the alleged incidents occurred. According to court documents, one victim, identified as “Jane Doe 1,” discovered in December 2025 that explicit AI-generated images of her had been created and distributed without her knowledge or consent. The lawsuit specifically targets xAI’s “spicy mode” feature in Grok, which was designed to generate more provocative content than the standard version.

Read more →

Why AI Companies Are Now Racing to Build Weapons (After Swearing They Never Would)

The $23 Billion Question That’s Reshaping AI

The standoff between Anthropic and the Pentagon isn’t just another tech news story. It’s a seismic shift that reveals how quickly principles can crumble when national security—and massive profits—are at stake. Here’s what’s happening: Anthropic, the AI safety company that built Claude (ChatGPT’s main rival), is now in heated negotiations with the Department of Defense. The same company that positioned itself as the “ethical AI” alternative is being pulled into the military-industrial complex.

Read more →

US Military Uses Anthropic's Claude AI in Iran Strikes

What Happened

U.S. forces used Claude AI technology to assist in striking over 1,000 targets in the first 24 hours of military operations against Iran, according to defense sources. The AI system was integrated through data analytics company Palantir’s platform and helped military analysts sort through intelligence data, identify potential targets, and simulate battle scenarios. The timing proved controversial: President Trump announced on Friday that federal agencies must stop using Anthropic’s technology within six months, with Defense Secretary Pete Hegseth declaring the company a “supply chain risk.”

Read more →

OpenAI Robotics Chief Quits Over Pentagon AI Deal Ethics

What Happened

Caitlin Kalinowski, who served as OpenAI’s Head of Robotics for just four months, submitted her resignation following the company’s controversial agreement with the Pentagon. The deal permits OpenAI’s artificial intelligence systems to be integrated into classified military networks, raising significant ethical questions about surveillance and autonomous weapons development. In her resignation statement, Kalinowski specifically criticized the lack of oversight in military surveillance applications and the potential for lethal autonomous systems to operate without human authorization.

Read more →

ChatGPT Uninstalls Surge 295% After OpenAI Pentagon Deal

What Happened

OpenAI announced a partnership with the U.S. Department of Defense in late February 2026, sparking immediate consumer backlash that translated into concrete user action. Mobile app analytics from Sensor Tower revealed dramatic shifts in user behavior:

- ChatGPT uninstalls spiked 295% day-over-day on Saturday, February 28
- Downloads dropped 13% as negative sentiment spread
- One-star reviews surged 775% on Saturday, then grew another 100% on Sunday
- Five-star ratings dropped by half during the same period

The user revolt wasn’t just symbolic—it created measurable market shifts.

Read more →

Trump Bans Anthropic AI After Company Refuses Weapons Use

What Happened

On February 27, 2026, President Trump issued an executive order directing U.S. government agencies to “immediately cease” using technology from Anthropic, one of the world’s leading AI companies. The order includes a six-month phase-out period specifically for the Defense Department, which has been using Anthropic’s products “at various levels.” The conflict centers on Anthropic’s refusal to comply with Pentagon demands for unrestricted access to the company’s AI models. Anthropic has maintained strict ethical guidelines, requiring assurances that its technology will not be used for fully autonomous weapons systems or mass domestic surveillance of American citizens.

Read more →

Anthropic Revamps AI Safety Policy Amid Industry Pressure

What Happened

Anthropic unveiled Version 3.0 of its Responsible Scaling Policy (RSP), marking the most significant revision to the company’s safety framework since its inception. The update introduces a crucial distinction between what Anthropic commits to do internally versus what it believes the entire AI industry should adopt. Under the previous RSP, Anthropic committed to implementing safety mitigations that would reduce its models’ absolute risk levels to acceptable standards, regardless of competitors’ actions.

Read more →