"Cancel ChatGPT" movement goes mainstream after OpenAI closes deal with U.S. Department of War — as Anthropic refuses to surveil American citizens
After Anthropic refused flat out to agree to apply Claude AI to autonomous weapons and mass surveillance of American citizens, OpenAI jumps right into bed with the United States Department of War.
There are no virtuous participants in the artificial intelligence race, but if there were, it might have been Anthropic.
Large language model tech is built on mountains of stolen data. Decades of the open internet were downloaded and converted by billionaires into technology that threatens to destroy billions of jobs, wreck the global economy, and potentially end the human race. But hey, at least in the short term, shareholders (might) make a stack of cash.
There are no moral leaders in this space, sadly. But at the very least, Anthropic of Claude fame took a strong stand this week against the United States government, to the ire of the Trump administration.
Anthropic was designated a supply chain risk this week, and summarily and forcibly banned from use in U.S. government agencies. Why? Anthropic said in a blog post that the dispute revolved around its two major red lines: no Claude AI for use in autonomous weapons, and no mass surveillance of United States citizens.
It's not unexpected that mainstream governments of any stripe would be salivating at the thought of turbo-charged AI mass surveillance, but it is unexpected that a big tech corp like Anthropic would be willing to take such a strong stance against it in an era increasingly devoid of administrative morality. But hey, there's always someone willing to race to the metaphorical moral abyss in the name of money.
Hi, Sam Altman.
OpenAI CEO and part-time supervillain Sam Altman thankfully stepped in to bail out the U.S. Department of War, pledging ChatGPT and other OpenAI technologies to the cause.
In a post on X, Altman claimed that OpenAI's models would not be used for mass surveillance, but that claim was immediately contradicted by a U.S. government official, who said that OpenAI's models would be used for "all lawful means." Mass surveillance of American citizens is lawful in "some scenarios" under the post-9/11 U.S. Patriot Act, which permits mass harvesting of communications metadata, even if some aspects of it have been curtailed in recent years.
"Today was a CRAZY day in the AI space. Morning: Anthropic CEO Dario Amodei refused to work with the Pentagon because they wanted to use Claude for mass surveillance and autonomous killer robots. Afternoon: OpenAI's Sam Altman came out in support saying, 'For all the…'" pic.twitter.com/K4IZDjQQ3q (February 28, 2026)
Anthropic wanted control over the way its technologies would be used, as opposed to relying on the interpretation of laws and legal frameworks that even now are the subject of debate and lawsuits. Altman, by comparison, is happy to let the U.S. government decide how OpenAI's systems are deployed, which under certain segments of the Patriot Act could quite easily lead to the mass surveillance of U.S. citizens, whether directly or incidentally as part of provisions on surveilling foreign citizens (which, by the way, is completely legal under U.S. law).
The move has sparked immediate backlash in ChatGPT and OpenAI communities online, with Reddit threads gathering thousands of upvotes as users claim to be unsubscribing.
You're now training a war machine. Let's see proof of cancellation. from r/ChatGPT
Time to cancel ChatGPT Plus after three Years. Anthropic got nuked for having ethics, and Sam Altman instantly swooped in for the Pentagon bag. from r/OpenAI
"Let me get this straight: Anthropic refused to work with DoW unless they could promise their tech wasn't used for surveillance or killing. DoW said that they need full capabilities. Anthropic declined to give full access. OpenAI stood by Anthropic for ensuring AI safety.…" (February 28, 2026)
OpenAI recently closed a funding round valuing the company at a frankly absurd $730 billion, with backers including Amazon, SoftBank, and NVIDIA. Microsoft says it will continue to work with OpenAI, despite stating in a recent FT interview that it would begin building and deploying its own models.
Unfortunately, there aren't many other AI companies willing to take a stance against mass surveillance or autonomous weapons. Google removed an explicit ban on the technology last year from its internal rules. Microsoft is cool with autonomous weapons too, as long as a human pulls the final trigger. Amazon has no prohibitions whatsoever besides vague "responsible use" language, and Meta hasn't been shy about courting Pentagon military contracts either. And we all know Palantir is totally for it.
The genie is out of the bottle, so to speak. ChatGPT is great at textual human mimicry, but even the most cutting-edge models often fail hilariously at basic, child-like logic puzzles.
Are you looking forward to a world where these hallucination-prone, easily-manipulated artificial intelligence models might eventually decide whether or not you're a threat to national security?
As long as Sam Altman and his buddies can stay rich, they don't seem to give much of a fuck about it — or you.

Jez Corden is the Executive Editor at Windows Central, focusing primarily on all things Xbox and gaming. Jez is known for breaking exclusive news and analysis relating to the Microsoft ecosystem — while being powered by tea. Follow on X.com/JezCorden and tune in to the XB2 Podcast, all about, you guessed it, Xbox!
