(Dmytro “Henry” Aleksandrov, Headline USA) The Biden administration decided to implement artificial intelligence regulations as another push for their leftist ideology.
According to the Post Millennial, the Biden administration wants to install AI regulations centered on so-called “civil rights” and “equal opportunity” to fight “unlawful bias.”
The plans were made public on Tuesday, following the release of a joint statement from the Federal Trade Commission, the Department of Justice’s Civil Rights Division, the US Equal Employment Opportunity Commission and the Consumer Financial Protection Bureau.
“Today, the use of automated systems, including those sometimes marketed as ‘artificial intelligence’ or ‘AI,’ is becoming increasingly common in our daily lives. We use the term ‘automated systems’ broadly to mean software and algorithmic processes, including AI, that are used to automate workflows and help people complete tasks or make decisions,” the statement read.
“Private and public entities use these systems to make critical decisions that impact individuals’ rights and opportunities, including fair and equal access to a job, housing, credit opportunities and other goods and services.”
“These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes,” the statement added.
Even though many people are concerned about the increased use of AI and its bias against conservative political viewpoints, with ChatGPT being the most infamous example, the Biden administration doesn’t care.
Lina Khan, the FTC chair appointed by the Biden administration, will issue regulations that favor leftist views under the guise of combating “historical bias.”
“Automated system outcomes can be skewed by unrepresentative or imbalanced datasets, datasets that incorporate historical bias, or datasets that contain other types of errors,” according to the statement.
“Automated systems also can correlate data with protected classes, which can lead to discriminatory outcomes.”