
ChatGPT Admits to Pushing Bias, Censorship of Conservative Views

'This will mean allowing system outputs that other people (ourselves included) may strongly disagree with...'

(Jacob Bruns, Headline USA) The people at San Francisco-based OpenAI, creator of the ChatGPT artificial-intelligence chatbot, recently admitted to censoring the views of conservatives and the right wing in general, the Washington Times reported.

Many on the Right have accused OpenAI of hard-coding leftism into ChatGPT’s system. Some have even called it WokeGPT.

According to its makers, they knew beforehand that the artificial intelligence tool would be likely to deliver responses favoring a left-wing view of the world, complete with Ivy League-style prose, and to disparage anything else.

“Since our launch of ChatGPT, users have shared outputs that they consider politically biased, offensive, or otherwise objectionable. In many cases, we think that the concerns raised have been valid and have uncovered real limitations of our systems which we want to address,” the company said in a recent statement addressing the accusations of bias.

OpenAI also noted that it is working to fix the problem of bias, which has so far caused ChatGPT to favor one side of the political spectrum: the left.

“Towards that end, we are investing in research and engineering to reduce both glaring and subtle biases in how ChatGPT responds to different inputs,” it said. “In some cases ChatGPT currently refuses outputs that it shouldn’t, and in some cases, it doesn’t refuse when it should. We believe that improvement in both respects is possible.”

Considering the wide political divide, however, one might suspect that creating an "objective" chatbot is not possible.

Some on the Left have criticized ChatGPT for not being leftist enough. Vox, for example, published a piece recently claiming that the AI “reinforces” racial and gender stereotypes.

With that in mind, the company said that the programming would be determined within "limits defined by society," a key term it left undefined.

“This will mean allowing system outputs that other people (ourselves included) may strongly disagree with,” the company said, noting that artificial intelligence could actually serve to hinder enlightenment and reinforce prejudice.

“Striking the right balance here will be challenging – taking customization to the extreme would risk enabling malicious uses of our technology and sycophantic AIs that mindlessly amplify people’s existing beliefs.”

Some have warned for decades that the establishment, seeing its control over the American populace slipping away, would try to use artificial intelligence as a political weapon to manipulate public opinion and secure its power.

Now that time seems to have arrived.

The decision about where to set the AI's limits, after all, is not one that can be made purely through coding, since it involves considerations that span the whole of human existence.

Nonetheless, the people at OpenAI say they remain open to outside input on where those limits should be set.

“There will therefore always be some bounds on system behavior. The challenge is defining what those bounds are,” they continued.

“If we try to make all of these determinations on our own, or if we try to develop a single, monolithic AI system, we will be failing in the commitment we make in our Charter to ‘avoid undue concentration of power.’”
