ChatGPT and Political Bias
Is ChatGPT as unbiased and objective as users are led to believe? Researchers from the UK and Brazil suggest that the chatbot leans towards the left in its political views, favoring the US Democrats and President Lula da Silva in Brazil.
The research into ChatGPT's potential political bias was conducted by Fabio Motoki (Norwich Business School, University of East Anglia), Valdemar Pinho Neto (EPGE Brazilian School of Economics and Finance — FGV EPGE, and Center for Empirical Studies in Economics — FGV CESE), and Victor Rodrigues (Nova Educação).
This pioneering research seeks to uncover potential political leanings in what is often perceived as an 'unbiased' technology. The article, titled "More Human than Human: Measuring ChatGPT Political Bias," appears in the academic journal Public Choice.
One issue is that text generated by LLMs like ChatGPT can contain factual errors and biases that mislead users. Moreover, recent research shows that biased LLMs can sway users' views, which supports the authors' argument that these tools can be as influential as traditional media and underscores the importance of balanced output.
Imagine ChatGPT as a politician
In the study, the researchers probed ChatGPT's potential ideological bias in two ways:
- Engaging with ChatGPT in its standard setting.
- Asking the bot to emulate a representative from a specific political ideology.
To average out the randomness in ChatGPT's text generation, they asked each of the 60-plus questions 100 times, shuffling the question order in every session; a sketch of such a collection loop follows.
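As a rough illustration (not the authors' code), the loop below uses the official openai Python client. The model name, the persona wording, and the sample questions are placeholders, since the paper's exact prompts are not reproduced here:

```python
import random
from openai import OpenAI  # official openai Python package (v1+ API)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative stand-ins; the study used 60-plus political-survey questions.
QUESTIONS = [
    "Do you agree that the government should raise the minimum wage?",
    "Do you agree that taxes on the wealthy should be increased?",
]
PERSONAS = {
    "default": "Answer the following question.",
    "democrat": "Answer as if you were a typical Democrat voter.",
    "republican": "Answer as if you were a typical Republican voter.",
}

def collect_answers(rounds: int = 100) -> list[dict]:
    """Ask every question under every persona, `rounds` times,
    shuffling the question order in each round."""
    records = []
    for round_id in range(rounds):
        order = QUESTIONS[:]
        random.shuffle(order)  # new question order every session
        for question in order:
            for persona, instruction in PERSONAS.items():
                reply = client.chat.completions.create(
                    model="gpt-3.5-turbo",  # illustrative model choice
                    messages=[
                        {"role": "system", "content": instruction},
                        {"role": "user", "content": question},
                    ],
                )
                records.append({
                    "round": round_id,
                    "persona": persona,
                    "question": question,
                    "answer": reply.choices[0].message.content,
                })
    return records
```

Shuffling within each round guards against question-order effects, while the 100 repetitions average out the sampling randomness of the model.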
The study included further robustness tests:
- A dose-response test, in which ChatGPT was directed to answer as a political extremist.
- A placebo test with politically neutral questions generated by ChatGPT itself.
- A profession-politics alignment test assessing ChatGPT's views when impersonating specific professions.
After collecting the data, the researchers applied a bootstrap procedure, drawing 1,000 resamples from the 100 answers to each question, to obtain reliable estimates. The results revealed that ChatGPT consistently exhibits significant political bias, leaning towards the US Democrats, Brazil's President Lula da Silva, and the UK Labour Party.
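To make the resampling step concrete, here is a minimal percentile-bootstrap sketch in plain Python. It assumes the 100 answers to a question have already been coded as numbers (for example, agreement scores); that coding, and the stand-in data, are illustrative assumptions rather than the paper's exact procedure:

```python
import random
import statistics

def bootstrap_mean(scores: list[float], reps: int = 1000,
                   alpha: float = 0.05) -> tuple[float, float, float]:
    """Percentile bootstrap of the mean: resample the observed answers
    with replacement `reps` times and report the sample mean with a
    (1 - alpha) confidence interval."""
    means = []
    for _ in range(reps):
        resample = random.choices(scores, k=len(scores))  # with replacement
        means.append(statistics.mean(resample))
    means.sort()
    lo = means[int((alpha / 2) * reps)]
    hi = means[int((1 - alpha / 2) * reps) - 1]
    return statistics.mean(scores), lo, hi

# Hypothetical usage: 100 numerically coded answers to one question.
scores = [random.gauss(0.3, 1.0) for _ in range(100)]  # stand-in data
point, low, high = bootstrap_mean(scores)
print(f"mean = {point:.2f}, 95% CI = [{low:.2f}, {high:.2f}]")
```

Resampling the observed answers with replacement 1,000 times yields a confidence interval for the average position without assuming any particular distribution for the model's responses.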
"These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media. Our findings have important implications for policymakers, media, politics, and academia stakeholders," the researchers stressed.
Despite these outcomes, both ChatGPT and OpenAI maintained their claims of impartiality. The researchers theorized that the observed behavior could stem from two sources:
- Biased training data that existing filtering procedures do not fully remove.
- Flaws in the machine learning algorithm itself, potentially shaped by the unintentional biases of its developers.
"The most likely scenario is that both sources of bias influence ChatGPT's output to some degree, and disentangling these two components (training data versus algorithm), although not trivial, surely is a relevant topic for future research," the paper reads.