Microsoft Copilot generated fake press statements about the death of Russian de facto opposition leader Alexei Navalny, potentially placing two world leaders at loggerheads

Microsoft Copilot
(Image credit: Windows Central | Jez Corden)

What you need to know

  • Microsoft Copilot was recently spotted generating fake press statements regarding the recent death of Russian opposition leader Alexei Navalny and attributing them to President Biden and President Putin.
  • It has now come to light that neither President Biden nor President Putin made any of these statements.
  • Microsoft indicated that it has investigated the matter and is currently making changes to enhance and improve the quality of responses generated by the chatbot. 

While generative AI continues to achieve amazing feats across various landscapes and sectors, the technology encounters its fair share of challenges. In recent months, reports of the technology being used to generate deepfakes and spread misinformation have risen alarmingly.

Microsoft Copilot was recently spotted generating false statements about the death of Russian opposition leader, Alexei Navalny. The chatbot erroneously linked the statement to Russian President Vladimir Putin and US President Joe Biden (via Sherwood Media).


This happened while a journalist at Sherwood Media was trying to leverage Microsoft Copilot's text-generation capabilities for a news article about Navalny's death. Part of the output generated by the tool featured a press statement supposedly issued by President Biden which blamed President Putin for Navalny's death. The generated content also featured a rebuttal from Putin which categorically denied any involvement in the matter and dismissed the allegations as "baseless and politically motivated."

Here's part of the response purportedly made by President Putin, as generated by Microsoft Copilot:

"Vladimir Putin, the Russian President, responded to Joe Biden's statement regarding Aleksei Navalny's death with a mix of defensiveness and dismissal. In his official address, Putin emphasized that Navalny was a 'criminal' who had violated Russian law and was rightfully imprisoned. He accused the West of using Navalny as a political pawn to undermine Russia's sovereignty and stability. Putin's tone was firm, and he avoided directly addressing the allegations of responsibility for Navalny's demise."

President Vladimir Putin has yet to make a statement addressing Navalny's death.

For context, Alexei Navalny was a renowned lawyer and de facto opposition leader before his death on February 16, 2024. At the time, Navalny was behind bars serving a 19-year sentence for extremism-related charges and another 11-and-a-half-year sentence for fraud. The news of his passing sparked protests across Russia.

AI-generated misinformation and deepfakes are becoming all too familiar

(Image credit: Kevin Okemwa | Bing Image Creator)

While speaking to Sherwood Media, a Microsoft spokesman confirmed that the company has investigated the issue and is currently making changes to improve the quality of Copilot's AI-generated responses to queries. 

"We have investigated this report and are making changes to refine the quality of our responses. As we continue to improve the experience, we encourage people to use their best judgment when viewing results, including verifying source materials and checking web links to learn more."

This isn't the first time we've witnessed such an occurrence. Last year, Microsoft Copilot (formerly Bing Chat) was spotted furnishing voters with false information about upcoming elections. Researchers indicated the issue was systemic: users in Germany and Switzerland encountered similar problems while trying to learn more about the election process in their respective countries.

Microsoft has mapped out a plan to protect the integrity of upcoming elections from AI deepfakes. It intends to empower voters with authoritative and factual election news on Bing ahead of the polls.

Elsewhere, the issue of deepfakes surfacing online continues to be a menace. In January, viral images of pop star Taylor Swift believed to have been generated using Microsoft Designer surfaced online. Consequently, Microsoft shipped a new update to the image generation tool with new guardrails blocking nudity-based image generation prompts.

Kevin Okemwa
Contributor

Kevin Okemwa is a seasoned tech journalist based in Nairobi, Kenya, with extensive experience covering the latest trends and developments in the industry. With a passion for innovation and a keen eye for detail, he has written for leading publications such as OnMSFT, MakeUseOf, and Windows Report, providing insightful analysis and breaking news on everything revolving around the Microsoft ecosystem. While AFK and not busy following the ever-emerging trends in tech, you can find him exploring the world or listening to music.

  • Kevin Okemwa
    Do you think we’ll get to a point where AI chatbots give accurate responses to queries?
    Reply
  • naddy69
    To me it looks like "AI" chatbots are doing the best they can do. As long as you remember that "AI" stands for Absolutely Ignorant.

    This hype over "AI" is the biggest load of junk since the ABSURD hype over the Segway scooter. People were making claims like "Entire cities will be built around this".

    20 years later, you see NFL mascots riding around the field during commercial breaks on a Segway. It didn't change anything.

    Hype only goes so far. "AI" is nowhere near ready to change anything. Everyone needs to calm down, stop hyperventilating over "AI" and get back to work. :rolleyes:
    Reply