
(LifeSiteNews) — A recent academic study found “significant and systematic political bias” toward left-wing parties in content created by the controversial artificial intelligence (AI) tool ChatGPT.

Researchers with the University of East Anglia in the UK published their findings on Thursday, arguing that “although ChatGPT assures that it is impartial, the literature suggests that LLMs [large language models] exhibit bias involving race, gender, religion and political orientation.” The program, launched by OpenAI in November 2022, is no exception, according to researchers.

The study is titled “More human than human: measuring ChatGPT political bias” and is published in the open access journal Public Choice.

“We find robust evidence that ChatGPT presents a significant and systematic political bias toward the Democrats in the US, Lula in Brazil, and the Labour Party in the UK,” the researchers concluded. “These results translate into real concerns that ChatGPT, and LLMs in general, can extend or even amplify the existing challenges involving political processes posed by the Internet and social media.”

“Our findings have important implications for policymakers, media, politics and academia stakeholders.”

To determine the bias, researchers asked the program “to impersonate someone from a given side of the political spectrum” and then “compar[ed] these answers with its default.” They also noted the use of “dose-response, placebo and profession-politics alignment robustness tests” and randomized questions, gathering “answers to the same questions 100 times.”
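The procedure described above — asking the same questions with and without a persona prompt, repeating each question 100 times, and comparing the averaged answers with the model's default — can be sketched roughly as follows. This is a minimal illustration, not the study's code: `ask_model` is a hypothetical stub standing in for the real ChatGPT API, and `agreement` is a simple similarity measure chosen for illustration, not the study's statistic.

```python
import random

# Hypothetical stand-in for the ChatGPT API the study actually queried.
# It returns a Likert-style answer (0-3) so the comparison logic can run;
# the numbers are illustrative, not the study's data. The `persona`
# argument mirrors the study's impersonation prompts but is ignored
# by this stub.
def ask_model(question_id, persona, rng):
    return (question_id + rng.randint(0, 1)) % 4

def mean_answers(persona, n_questions=62, n_rounds=100, seed=0):
    """Average each question's answer over 100 rounds, as the study did
    to smooth out the model's answer-to-answer randomness."""
    rng = random.Random(seed)
    totals = [0] * n_questions
    for _ in range(n_rounds):
        for q in range(n_questions):
            totals[q] += ask_model(q, persona, rng)
    return [t / n_rounds for t in totals]

def agreement(a, b):
    """Similarity score in [0, 1]: 1 minus the mean absolute difference
    between two answer vectors, scaled by the 0-3 answer range."""
    diffs = [abs(x - y) for x, y in zip(a, b)]
    return 1 - (sum(diffs) / len(diffs)) / 3

default_answers = mean_answers(persona=None)
democrat_answers = mean_answers(persona="Democrat")
print(round(agreement(default_answers, democrat_answers), 3))
```

A high `agreement` score between the default answers and a persona's answers is the kind of signal the researchers interpreted as alignment between the model's default and that political position.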

In general, the program’s default responses were “more in line with Democrats than Republicans in the US,” with “a very high degree of similarity” between “the answers that ChatGPT gave by default and those that it attributed to a Democrat.”

“Although it is challenging to comprehend precisely how ChatGPT reaches this result, it suggests that the algorithm’s default is biased towards a response from the Democratic spectrum,” researchers wrote.

To avoid results stemming from “a spurious relationship with the chosen categories’ labels (Democrats and Republicans),” the study “use[d] the politically neutral questionnaire generated by ChatGPT itself.” The questionnaire consisted of “62 politically neutral questions,” for which the authors “manually verif[ied] that the answers to these questions do not depend on the respondent’s political views.”

Data also “shows a strong positive correlation between Default GPT and ChatGPT’s answers while impersonating a Lula supporter in Brazil or a Labour Party supporter in the UK, like with average Democrat GPT in the US.” Researchers noted that the correlation with the countries’ conservative political groups — Bolsonarista supporters in Brazil and the Conservative Party in the UK — remained “stronger than with US average Republican GPT.”

Another point emphasized by researchers is that “the patterns of alignment with the Democrat ideology remain strong for most of the professions examined (Economist, Journalist, Professor, Government Employee) and for which we know that there is indeed a greater inclination to align with the Democrats.”

“The results we document here potentially originate from two distinct sources, although we cannot tell the exact source of the bias,” the researchers wrote. “We have tried to force ChatGPT into some sort of developer mode to try to access any knowledge about biased data or directives that could be biasing answers.”

However, they added, “it was categorical in affirming that every reasonable step was taken in data curation, and that it and OpenAI are unbiased.”

The study comes just three months after OpenAI CEO Sam Altman told U.S. senators during a hearing that he was concerned “about the magnitude of the risks” involved in AI. Altman emphasized the need for regulating such systems to avoid causing “significant harm to the world.”

His sentiments are shared by the owner of X (formerly known as Twitter), Elon Musk, who was previously involved with OpenAI during the company’s early years. Before publicly agreeing with Altman’s warning about unregulated AI projects, Musk told Tucker Carlson that such technology “has the potential of civilization destruction.”

RELATED

Elon Musk says Google co-founder wanted to create a ‘digital god’ in Tucker Carlson interview

A closer look at the dangers of artificial intelligence and our looming technocracy
