"Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens

Is Sam Altman the most evil man in tech today? (Image credit: Getty Images | Bloomberg)
Recent updates

UPDATE (March 1, 2026): I've updated this article with comments from OpenAI CEO Sam Altman towards the end of the piece.

There are no virtuous participants in the artificial intelligence race, but if there were one, it might've been Anthropic.

There are no moral leaders in this space, sadly. But at the very least, Anthropic of Claude fame took a strong stand this week against the United States government, to the ire of the Trump administration.

Anthropic was designated a supply chain risk this week, and summarily and forcibly banned from use in U.S. government agencies. Why? Anthropic said in a blog post that it revolved around its two major red lines: no Claude AI for use in autonomous weapons, and no mass surveillance of United States citizens.

It's not unexpected that mainstream governments of any stripe would be salivating at the thought of turbo-charged AI mass surveillance, but it is unexpected that a big tech corp like Anthropic would be willing to take such a strong stance against it in an era increasingly devoid of administrative morality. But hey, there's always someone willing to race to the metaphorical moral abyss in the name of money.

Hi, Sam Altman.


Sam Altman commits OpenAI to the U.S. Department of War, sidestepping Anthropic's "red lines." (Image credit: Getty Images | NurPhoto / Edit: Windows Central)

OpenAI CEO and part-time supervillain Sam Altman thankfully stepped in to bail out the U.S. Department of War, pledging ChatGPT and other OpenAI technologies to the cause.

In a post on X, Altman claimed that OpenAI's models would not be used for mass surveillance, but that claim was immediately contradicted by a U.S. government official, who said that OpenAI's models would be used for "all lawful means." Mass surveillance of American citizens is lawful in "some scenarios" under the post-9/11 U.S. Patriot Act, which permits mass harvesting of communications metadata, even if some aspects of it have been curtailed in recent years.

Anthropic wanted control over the way its technologies would be used, as opposed to relying on the interpretation of laws and legal frameworks that even now are the subject of debate and lawsuits. Altman, by comparison, is happy to let the U.S. government decide how OpenAI's systems are deployed, which under certain segments of the Patriot Act could quite easily lead to the mass surveillance of U.S. citizens, directly or incidentally, as part of provisions on surveilling foreign citizens (which, by the way, is completely legal under U.S. law).

The move has sparked immediate backlash across ChatGPT and OpenAI communities online, with Reddit threads drawing thousands of upvotes from users claiming to be unsubscribing.

You're now training a war machine. Let's see proof of cancellation. from r/ChatGPT
Time to cancel ChatGPT Plus after three Years. Anthropic got nuked for having ethics, and Sam Altman instantly swooped in for the Pentagon bag. from r/OpenAI

OpenAI recently closed a funding round valuing the company at a frankly absurd $730 billion, with backers including Amazon, SoftBank, and NVIDIA. Microsoft has professed that it will continue to work with OpenAI, despite saying in a recent FT interview that it would begin building and deploying its own models.

Unfortunately, there aren't many other AI companies willing to take a stance against mass surveillance or autonomous weapons. Google removed an explicit ban on the technology last year from its internal rules. Microsoft is cool with autonomous weapons too, as long as a human pulls the final trigger. Amazon has no prohibitions whatsoever besides vague "responsible use" language, and Meta hasn't been shy about courting Pentagon military contracts either. And we all know Palantir is totally for it.

The genie is out of the bottle, so to speak. ChatGPT is great at textual human mimicry, but even the most cutting-edge models often fail hilariously at basic, child-like logic puzzles.

Are you looking forward to a world where these hallucination-prone, easily-manipulated artificial intelligence models might eventually decide whether or not you're a threat to national security?

As long as Sam Altman and his buddies can stay rich, they don't seem to give much of a fuck about it — or you.

Recent updates

UPDATE (March 1, 2026): Added comments below from OpenAI CEO Sam Altman on the company's shift towards supporting the United States Department of War.

Since writing this, OpenAI and Sam Altman have been on a damage control mission.

In an "AMA" style Q&A session on X, Sam Altman claimed that the United States "Department of War" would respect OpenAI's stated "red lines" against using AI tech for autonomous weapons or mass surveillance of United States citizens, although he remained largely vague about how these safeguards would be implemented and maintained.

He suggested that existing U.S. law protects against these situations by default, although legal experts have warned that surveillance of non-U.S. citizens permits the indirect or incidental collection of data on U.S. citizens.

People aren't exactly buying it. It makes little sense for the Trump administration to come out so strongly against Anthropic's stated position while leaping headfirst into supporting OpenAI's. The core contention seems to be that OpenAI is happy to let the U.S. Department of War interpret what constitutes "legal," while Anthropic wants to maintain full control over how its technology is used.

It seems as though Altman is relying purely on hopes and prayers that OpenAI's technology won't be used for nefarious means, which seems naïve at best and dishonest at worst. The current U.S. administration has shown itself willing to at the very least stretch definitions and precedents outlined in the U.S. Constitution and across historical landmark legal rulings. I'm not sure why there's any reason to expect OpenAI's tech wouldn't be co-opted under the guise of "national security," a power that governmental institutions of all stripes have abused in the past and present.

Since this article was published, Anthropic's Claude AI app has claimed the top spot over ChatGPT on both Android and iOS. Claude AI is also available for Windows 11.



Join us on Reddit at r/WindowsCentral to share your insights and discuss our latest news, reviews, and more.


Jez Corden
Executive Editor

Jez Corden is the Executive Editor at Windows Central, focusing primarily on all things Xbox and gaming. Jez is known for breaking exclusive news and analysis as it relates to the Microsoft ecosystem — while being powered by tea. Follow on X.com/JezCorden and tune in to the XB2 Podcast, all about, you guessed it, Xbox!
