Attorneys General demand Microsoft and other AI labs fix “delusional outputs” — warning that AI hallucinations may be illegal

AI chatbot apps, including Le Chat by Mistral AI, DeepSeek, ChatGPT, Google Gemini, Copilot, and Claude by Anthropic, shown on a smartphone screen. (Image credit: Pavlo Gonchar/SOPA Images/LightRocket via Getty Images)

Big tech corporations are racing to hop onto the AI bandwagon, investing billions into the ever-evolving technology. However, market analysts and investors have raised concerns about this exorbitant spending, which so far has no clear path to profitability, amid predictions that we're in an AI bubble on the precipice of bursting.

Late last week, dozens of attorneys general from U.S. states and territories, acting through the National Association of Attorneys General, sent a letter to leading AI labs warning them to address "delusional outputs." The letter cautioned that failure to remedy the issue could violate state law and expose the companies to legal consequences (via TechCrunch).

The letter demands that companies implement robust measures and safeguards designed to protect users, including transparent third-party audits of LLMs to catch delusional or sycophantic outputs early. It also calls for new incident-reporting procedures that notify users when AI-powered chatbots generate harmful content.

This news comes amid a rise in suicide incidents linked to AI chatbots. One family sued OpenAI, claiming that ChatGPT encouraged their son to take his own life; the company subsequently added parental controls to ChatGPT's user experience to mitigate the issue.

Perhaps more importantly, the letter says the safeguards should also allow academic and civil society groups to "evaluate systems pre-release without retaliation and to publish their findings without prior approval from the company."

According to the letter:

"GenAI has the potential to change how the world works in a positive way. But it also has caused—and has the potential to cause—serious harm, especially to vulnerable populations. In many of these incidents, the GenAI products generated sycophantic and delusional outputs that either encouraged users’ delusions or assured users that they were not delusional.”

Finally, the letter suggests that AI labs treat mental health incidents the way tech corporations handle cybersecurity incidents. It will be interesting to see whether AI labs like OpenAI adopt any of these suggestions, especially after a recent damning report claimed the company is being less than truthful about its research, publishing only findings that cast its technology in a favorable light.




Kevin Okemwa
Contributor

