Sensor Tower data show a sudden, sharp user backlash against OpenAI following the February 28 announcement that the company had reached an agreement with the U.S. Department of Defense. On that day, U.S. uninstalls of the ChatGPT mobile app jumped 295% from the previous day and one‑star ratings spiked 775%; downloads fell 13% on February 28 and a further 5% on March 1.
The backlash had an immediate beneficiary: Anthropic’s Claude saw U.S. downloads rise 37% on February 27 and 51% on February 28, and the app climbed to the top of the U.S. App Store on February 28, a position it held through March 2. The near‑instant reallocation of consumer attention underscores how fast public sentiment can alter the competitive landscape in consumer AI.
The episode reflects a deeper tension between commercial AI firms and public concerns about the military uses of their technology. For years, AI researchers, civil society groups and some employees inside tech firms have warned that general‑purpose models could be repurposed for surveillance, autonomous weapons, or other military applications. OpenAI’s deal with the Pentagon — framed by some as defensive or safety‑oriented — has nevertheless touched a raw nerve among users who view military partnerships as morally or politically unacceptable.
The metrics — simultaneous spikes in uninstalls and one‑star reviews alongside falling downloads — suggest a mix of protest and practical switching. Download declines and the sustained top ranking for Claude indicate many users did not merely vent on app stores but moved to a competitor. At the same time, app ratings are easy to weaponise: coordinated campaigns can amplify apparent outrage, complicating interpretation of the raw numbers.
For AI firms, the incident is a reminder that brand trust is a strategic asset. Consumer AI products occupy a fraught space where corporate decisions about contracts, partnerships and product roadmaps are judged not only by investors and regulators but by a politically aware user base. A reputation hit in one market can cascade into others, affecting adoption by enterprises, regulators’ willingness to grant approvals, and partners’ appetite for collaboration.
The broader geopolitical context matters too. In an era of intensifying U.S.–China rivalry, Western AI companies’ ties to national security agencies are watched closely abroad and can complicate international expansion. Firms that pursue defence work risk being framed as instruments of state power, which can provoke consumer and government pushback in markets sensitive to foreign influence.
Moving forward, expect firms to sharpen their public messaging, offer clearer product segregation between civilian and defence lines, or introduce governance mechanisms to shield consumer brands from military associations. Competitors such as Anthropic can capitalise on principled positioning, but they also face pressure to say how they would respond if asked to assist military customers.
Ultimately this episode underlines that the politics of artificial intelligence now plays out in app stores as well as on Capitol Hill. Commercial success will increasingly depend on companies’ ability to navigate ethical expectations, regulatory scrutiny and geopolitical risk without alienating the users whose engagement powers their platforms.
