ChatGPT's "hallucination episodes" continue to haunt OpenAI and could set it back by millions, but the company "doesn't seem to care."

OpenAI and ChatGPT
(Image credit: Daniel Rubino)

What you need to know

  • OpenAI is under scrutiny after ChatGPT shared incorrect information about people.
  • EU-based privacy nonprofit NOYB filed the complaint against the firm, alleging OpenAI is aware of ChatGPT's inability to correct the false information it generates but doesn't seem to care. 
  • The organization wants the regulator to investigate OpenAI's operations and force it to provide detailed information about how it uses people's data.

OpenAI is under scrutiny once again after EU-based privacy nonprofit organization NOYB filed a new complaint with the Austrian Data Protection Authority (DPA), citing ChatGPT's "hallucination episodes" and its inability to correct misleading AI-generated information about people (via Quartz). 

As you may know, chatbot hallucination isn't a new phenomenon in the AI landscape. We witnessed some of these episodes during the launch of Microsoft Copilot (formerly Bing Chat), leading some users to conclude the technology was getting dumber.

NOYB's complaint builds on the EU's General Data Protection Regulation (GDPR), which regulates how tools like ChatGPT handle the personal data they collect from people. The organization indicated that OpenAI openly admitted it cannot correct inaccurate information generated by its ChatGPT tool. 

It also pointed out that OpenAI can't explain where ChatGPT obtains its data from, or what personal data it collects and stores from users. Strangely, the organization indicated that the ChatGPT maker was aware of this issue but didn't seem to care.

The GDPR grants users based in the EU the right to request corrections to inaccurate information about them generated by the tool. OpenAI can't satisfy this right, making it non-compliant, which is the basis for NOYB's complaint. 

According to NOYB's data protection lawyer, Maartje de Graaf:

"The obligation to comply with access requests applies to all companies. It is clearly possible to keep records of training data that was used to at least have an idea about the sources of information. It seems that with each 'innovation', another group of companies thinks that its products don't have to comply with the law."

What's the plausible way forward for OpenAI?

ChatGPT and Microsoft Logo

(Image credit: Daniel Rubino)

OpenAI's Sam Altman has admitted that developing tools like ChatGPT without copyrighted content is impossible. And as you might already know, Microsoft and OpenAI are embroiled in several legal battles over copyright infringement issues.

Without access to copyrighted information, AI-generated responses to queries aren't as detailed or accurate. NOYB states that OpenAI's position is "unacceptable" because the company operates ChatGPT yet can neither correct the inaccurate information the tool generates nor explain where it gets its data. 

NOYB wants the DPA to investigate OpenAI's operations, including how it processes data and trains its models. Additionally, the organization wants the DPA to force OpenAI to grant users access to their data, painting a clear picture of the personal data the company collects from them.

If OpenAI fails to meet these demands, it could be fined up to 20 million euros or 4% of its global annual turnover, whichever is higher. And that's before any additional costs, should affected users choose to seek compensation for damages.
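For context, the GDPR's fine ceiling (Article 83(5)) works as a "whichever is higher" rule between the flat 20 million euro cap and 4% of worldwide annual turnover. A minimal sketch of how that ceiling scales (the turnover figures below are purely illustrative, not OpenAI's actual financials):

```python
def gdpr_max_fine_eur(annual_turnover_eur: float) -> float:
    """Maximum GDPR Article 83(5) administrative fine:
    EUR 20 million or 4% of worldwide annual turnover, whichever is higher."""
    return max(20_000_000.0, 0.04 * annual_turnover_eur)

# Illustrative figures only:
print(gdpr_max_fine_eur(100_000_000))    # small firm: flat EUR 20M cap applies
print(gdpr_max_fine_eur(2_000_000_000))  # large firm: 4% = EUR 80M exceeds the cap
```

In practice, regulators treat these figures as upper bounds and scale actual penalties to the severity of the infringement.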

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.