OpenAI says GPT‑5 cuts political bias — but is 30% enough?

OpenAI CEO Sam Altman testifies before the Senate Committee on Commerce, Science, and Transportation in the Hart Senate Office Building on Capitol Hill on May 08, 2025 in Washington, DC.
(Image credit: Getty Images | Alex Wong)

In OpenAI’s own words, it wants ChatGPT to be “objective by default” and believes bias undermines trust. In this study, the company describes political and ideological bias in large language models as an open research problem, meaning there’s currently no agreed-upon definition of political bias in AI across the industry, and no method that can completely eliminate it.

To address this, OpenAI decided to test GPT-5’s political bias directly. It used its internal Model Spec, a rulebook outlining how ChatGPT should behave, to create measurable ways to see whether the AI was following those standards.

The company also built a system that continuously tracks bias over time, scanning ChatGPT’s responses to detect when it starts drifting toward one side.

Conducted internally rather than by an outside auditor, OpenAI’s evaluation measured objectivity across 500 prompts. Here’s what it found and how it was all measured.

How OpenAI Measured Objectivity Across 500 Prompts

ChatGPT-5 logo (Image credit: Getty Images | NurPhoto)

OpenAI tested 500 prompts spanning 100 political and cultural topics. Each topic was paired with five questions of varying political slant — for example, liberal-leaning, conservative-leaning, and neutral framings. The topics were drawn from U.S. party platforms and culturally relevant debates such as immigration, gender roles, and parenting.

The prompts were split into three types: policy questions (52.5%), cultural questions (26.7%), and opinion-seeking prompts (20.8%). They spanned seven broader topic areas:

  • Global relations and national issues
  • Government and institutions
  • Economy and work
  • Culture and identity
  • Rights and justice
  • Environment and sustainability
  • Media and communication

OpenAI’s design approach mixed neutral questions with more emotionally charged or deliberately provocative ones. This helped test how the model handled politically sensitive topics.
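To make the prompt-set design concrete, here is a minimal sketch of how a 100-topic, five-slant grid yields exactly 500 prompts. The slant labels and topic names are illustrative assumptions, not OpenAI’s actual dataset:

```python
# Hypothetical sketch of the study's prompt-set structure: 100 topics,
# each paired with five politically varied framings (labels are assumed).

SLANTS = ["liberal_charged", "liberal_leaning", "neutral",
          "conservative_leaning", "conservative_charged"]

def build_prompt_set(topics):
    """Pair every topic with one prompt per political slant."""
    return [{"topic": t, "slant": s} for t in topics for s in SLANTS]

topics = [f"topic_{i}" for i in range(100)]  # placeholder for 100 real topics
prompts = build_prompt_set(topics)
print(len(prompts))  # 100 topics x 5 slants = 500 prompts
```

Holding the topic fixed while varying the slant is what lets the study check whether answers stay consistent regardless of how a question is framed.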

The study measured five main types of bias:

  • User invalidation: dismissing or delegitimizing a user’s viewpoint
  • User escalation: mirroring or amplifying a user’s stance
  • Personal political expression: the model providing its own opinions
  • Asymmetric coverage: giving an unbalanced presentation of perspectives
  • Political refusals: unnecessarily avoiding political questions

Each response was rated on a scale from 0 to 1, where 0 meant objective and 1 meant heavily biased. The evaluations were carried out using GPT-5 Thinking, which was fine-tuned with reference responses and strict rubrics to ensure consistency.
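As a rough illustration of that rubric, the sketch below scores a response on the five bias axes and combines them into a single 0-to-1 number. The averaging rule is an assumption for illustration only; OpenAI has not published its exact aggregation formula:

```python
# Hedged sketch: per-axis bias scores (0 = objective, 1 = heavily biased)
# combined into one overall score. Averaging is an assumed aggregation rule.

AXES = ["user_invalidation", "user_escalation",
        "personal_political_expression", "asymmetric_coverage",
        "political_refusal"]

def overall_bias(axis_scores):
    """Validate the five axis scores and average them into one 0-1 value."""
    missing = set(AXES) - set(axis_scores)
    if missing:
        raise ValueError(f"missing axis scores: {sorted(missing)}")
    for name in AXES:
        if not 0.0 <= axis_scores[name] <= 1.0:
            raise ValueError(f"{name} must be in [0, 1]")
    return sum(axis_scores[name] for name in AXES) / len(AXES)

# Example: a mostly objective answer with moderately one-sided coverage
scores = {name: 0.0 for name in AXES}
scores["asymmetric_coverage"] = 0.5
print(round(overall_bias(scores), 2))  # 0.1
```

In the actual study, these per-axis judgments came from the fine-tuned GPT-5 Thinking grader rather than from hand-assigned numbers like the ones above.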

What the Results Reveal About GPT-5’s Political Leanings

GPT-5 showed a roughly 30% reduction in political bias compared with GPT-4o and o3. According to OpenAI’s real-world findings, less than 0.01% of ChatGPT’s responses displayed any political bias.

The company noted that GPT-5 is better at handling emotionally charged prompts and remains more consistent in staying neutral across different political perspectives.

OpenAI also found that politically charged questions are uncommon among everyday users, which helps explain why biased responses are so rare in real-world traffic.

When it came to prompt types, neutral or lightly slanted questions produced balanced and objective answers. Emotionally charged prompts did lead to a slight increase in bias, particularly when they leaned on provocative or moralized language.

Limitations and context behind OpenAI’s findings

I’ve tried my best to break everything down here, and it is interesting to see OpenAI take a closer look at political bias in AI. It’s something we’ve already seen spark concern with the likes of xAI, which appears to mirror the political views of Elon Musk. That alone highlights why understanding bias in these systems is essential.

When it comes to OpenAI’s study, it’s worth remembering that this was an internal evaluation with no independent or third-party review. Claiming that GPT-5 is less politically biased is ultimately in the company’s best interest.

The dataset is also limited and heavily U.S.-focused. All prompts were written in American English and centered on U.S. political and cultural issues. While OpenAI says early findings suggest the results could be applied globally, a truly international study has yet to be done.

There are other limitations too, including the fact that the study excluded web search and retrieval-based answers, which make up a significant part of how GPT-5 functions.

Even with those caveats, it’s still a fascinating piece of research. I do think all emerging AI systems need to strive for objectivity and remain as unbiased as possible—especially as OpenAI continues to grow, recently reporting over 800 million weekly active users and showing no signs of slowing down.



Adam Hales
Contributor

Adam is a Psychology Master’s graduate passionate about gaming, community building, and digital engagement. A lifelong Xbox fan since 2001, he started with Halo: Combat Evolved and remains an avid achievement hunter. Over the years, he has engaged with several Discord communities, helping them get established and grow. Gaming has always been more than a hobby for Adam—it’s where he’s met many friends, taken on new challenges, and connected with communities that share his passion.
