Microsoft Copilot struggles to discern facts from opinions — posting distorted AI news summaries riddled with inaccuracies: "How long before an AI-distorted headline causes significant real-world harm?"

An extensive study by BBC reveals the flaws in AI news summaries generated by flagship models, including Gemini, ChatGPT, Microsoft Copilot, and Perplexity AI. (Image credit: Getty Images | SOPA)

BBC recently conducted extensive research on how well AI-powered chatbots, including Microsoft Copilot, OpenAI's ChatGPT, Google's Gemini, and Perplexity, summarize news stories. While safety and security remain major concerns holding back generative AI, the tools' propensity to generate inaccurate or outright wrong responses continues to puzzle users.

The outlet used the listed AI tools to summarize news stories and then asked them questions based on the content of the summarized articles. The research found that the AI-generated answers contained significant inaccuracies and distortions.

Can I trust AI-generated news summaries? Apple already pulled the plug


"We live in troubled times, and how long will it be before an AI-distorted headline causes significant real world harm?" BBC's Turness rhetorically asked.

BBC's research also revealed that Copilot and Gemini exhibited more serious issues than ChatGPT and Perplexity, noting that the tools "struggled to differentiate between opinion and fact, editorialized, and often failed to include essential context."

Related: Microsoft Copilot is the poster child of 'an annoying kid in class'

It's worth noting that the inaccuracies highlighted in the report extend beyond the listed chatbots. As you may recall, Apple recently pulled the plug, at least temporarily, on its Apple Intelligence news notification summaries after the feature was spotted sharing erroneous headlines, prompting backlash from news organizations and press freedom groups.

In conclusion, BBC recommends a "pull back" on AI news summaries pending a much-needed conversation with AI service providers, adding, "We can work together in partnership to find solutions."

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya with lots of experience covering the latest trends and developments in the industry at Windows Central. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.
