What Happened

OpenAI announced a partnership with the U.S. Department of Defense in late February 2026, sparking immediate consumer backlash that translated into concrete user action. Mobile app analytics from Sensor Tower revealed dramatic shifts in user behavior:

  • ChatGPT uninstalls spiked 295% day-over-day on Saturday, February 28
  • Downloads dropped 13% as negative sentiment spread
  • One-star reviews surged 775% on Saturday, then grew another 100% on Sunday
  • Five-star ratings dropped by half during the same period

The user revolt wasn’t just symbolic; it produced measurable market shifts. Claude, the app from OpenAI’s chief rival Anthropic, jumped 37% in downloads on Friday and 51% on Saturday after Anthropic announced it would not partner with the defense department. Claude reached #1 on the U.S. App Store and held that position through Monday, March 2.

Why It Matters

This represents the first major consumer revolt against an AI company’s military partnerships, demonstrating that users care deeply about how their everyday AI tools are used beyond their personal interactions. The 295% uninstall surge, reported as 30 times higher than normal, shows consumers are willing to vote with their downloads when AI companies make decisions they disagree with.

The backlash reveals a fundamental tension in the AI industry among commercial growth, government partnerships, and user values. For millions of people, ChatGPT isn’t just a business tool; it’s a daily companion for writing, learning, and problem-solving. Users appear to view their choice of AI assistant as an ethical decision, not merely a practical one.

Background

The controversy centers on concerns about AI being used for military applications, particularly autonomous weapons and domestic surveillance. Anthropic explicitly cited these concerns when declining the Pentagon deal, stating it “was not able to agree on the deal terms over concerns that AI would be used to surveil Americans and be used in fully autonomous weaponry.”

This isn’t the first time AI companies have faced scrutiny over defense contracts. Google faced employee protests in 2018 over Project Maven, a Pentagon AI program, and ultimately decided not to renew the contract. The ChatGPT backlash, however, marks the first time consumers, not just employees, have mounted a mass response.

OpenAI CEO Sam Altman responded to the controversy by emphasizing that the company’s AI should not be used for domestic surveillance of Americans, though specific details about the Pentagon partnership’s scope remain limited.

What’s Next

The user exodus could force OpenAI to reconsider its approach to government partnerships or to provide more transparency about how its technology will be used. The company must balance lucrative government contracts against user trust in a competitive market.

For competitors like Anthropic, the controversy presents an opportunity to differentiate through ethical positioning. Claude’s surge to #1 on the App Store demonstrates that taking public stances on AI ethics can translate into market advantage.

The broader AI industry is now watching to see whether other companies will follow Anthropic’s lead in rejecting defense contracts, or whether OpenAI’s approach will become the norm. Consumer sentiment appears to be a new factor that AI companies must consider alongside technical capabilities and business opportunities.

This incident may establish a precedent where AI companies’ partnerships and ethical positions become key competitive differentiators, not just their technical performance.