Google shifts blame for the erroneous AI Overviews spectacle to a 'data void' on niche topics coupled with faked screenshots, but still makes "a dozen technical improvements"

Google Chrome on PC
(Image credit: Future)

What you need to know

  • Last week, Google's AI Overviews feature was spotted generating misleading responses, including recommendations to eat rocks and glue.
  • The company says a data void, or information gap, on particular topics on the web heavily contributed to instances where the feature generated misleading search results.
  • Google has improved the tool's user experience with better detection mechanisms for nonsensical queries that shouldn’t show an AI Overview.

Every major tech corporation is quickly hopping into the great AI race. But it's easy to forget that pace doesn't necessarily equate to perfect execution. Microsoft has predominantly had smooth sailing with the technology after making a multi-billion-dollar investment in OpenAI. Its success in the category is well outlined in its latest earnings report, and it's now the world's most valuable company, ahead of Apple, with over $3 trillion in market capitalization.

Unfortunately, Google's attempt to follow in Microsoft's footsteps and replicate its AI success has seemingly flopped, if last week's AI Overviews spectacle is anything to go by. The feature was spotted bizarrely recommending eating rocks and glue, and in some cases even appearing to suggest suicide, despite Google having recently acquired exclusive rights to Reddit content to power its AI.

Did Google AI Overviews go insane?

(Image credit: Windows Central | Microsoft Copilot)

Google recently published a blog post explaining what happened with the AI feature and what contributed to it generating misleading information and recommendations. Head of Google Search Liz Reid indicated that AI Overviews work differently from chatbots and other LLM products, which generate responses based on their training data. Instead, the feature is powered by a customized language model integrated with Google's web ranking systems. This way, the feature presents users with well-curated, high-quality search results, including relevant links.

According to the Google Search lead:

"AI Overviews generally don't “hallucinate” or make things up in the ways that other LLM products might. When AI Overviews get it wrong, it’s usually for other reasons: misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available."

While addressing the erroneous and "insane" search results from AI Overviews, Reid stated that the feature is optimized for accuracy and went through extensive testing, including robust red-teaming efforts, before it shipped. The Search lead also indicated that the team had spotted "nonsensical" searches potentially aimed at producing erroneous results.

Reid also indicated that some screenshots shared widely across social media platforms were fabricated, leading users to think Google had returned dangerous results for topics like smoking while pregnant. The company says this isn't the case and recommends users run such searches themselves to confirm.

Google admitted that the feature provided inaccurate AI Overviews for some searches. Interestingly, the company claims "these were generally for queries that people don't commonly do." It further indicated that before screenshots of people using the feature to ask "How many rocks should I eat?" went viral, practically no one had asked that question.

Additionally, Google says there's limited quality content covering such topics while referring to the phenomenon as a data void. "In other examples, we saw AI Overviews that featured sarcastic or troll-y content from discussion forums. Forums are often a great source of authentic, first-hand information, but in some cases can lead to less-than-helpful advice, like using glue to get cheese to stick to pizza."

What's Google doing to address these critical issues?

(Image credit: Kevin Okemwa | Bing Image Creator)

Google has highlighted several measures that won't necessarily fix queries one by one but will address broad sets of queries via updates and "a dozen technical improvements" to its core search systems, including:

  • Google has put in place better detection mechanisms for nonsensical queries that shouldn't show an AI Overview, and has limited the inclusion of satire and humor content in search results.
  • It's also limiting the use of user-generated content in responses to queries to promote quality search results.
  • Google is triggering restrictions for queries where AI Overviews were deemed unhelpful.
  • The company indicates it already has strong guardrails for news and health. 

Lastly, Google indicated that it will keep tracking user feedback and external reports on the tool's experience to inform its decisions on how to improve the feature.

Kevin Okemwa

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. You'll also catch him occasionally contributing at iMore about Apple and AI. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.