Friday, April 26, 2024

Major AI Developers Disconcertingly Look to Biden Admin for ‘Safeguards’

'History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations...'

(Headline USA) Even as the scope of Big Tech’s collusion with deep-state Democrats to throw the 2020 election becomes more than just speculation, the same companies are now looking to the regime they helped install to direct “safety” in what many experts have warned could be the greatest technological threat humanity has ever faced: artificial intelligence.

The implications of AI’s power to control and manipulate information are chilling. And yet Amazon, Google, Meta, Microsoft and other companies leading the development of the technology have agreed to meet a set of AI safeguards brokered by none other than President Joe Biden’s administration.

The White House said Friday that it has secured voluntary commitments from seven U.S. companies meant to ensure their AI products are “safe” before they release them.

Some of the commitments call for third-party oversight of the workings of commercial AI systems, though they don’t detail who will audit the technology or hold the companies accountable.

Nonetheless, red flags have already been raised that the far-left companies, which were all too eager to assist corrupt government officials in violating the First Amendment rights of conservatives before, are likely to do so again by programming radical left-wing biases into their AI algorithms.

A surge of commercial investment in generative AI tools that can write convincingly human-like text and churn out new images and other media has brought public fascination as well as concern about their ability to trick people and spread disinformation, among other dangers. The Biden administration hopes not only to be able to regulate the flow of information, but also to harness it to the political advantage of its own like-minded ideologues.

The four tech giants, along with ChatGPT-maker OpenAI and startups Anthropic and Inflection, have committed to security testing “carried out in part by independent experts” to guard against major risks, such as to biosecurity and cybersecurity, the White House said in a statement.

The companies have also committed to methods for reporting vulnerabilities in their systems and to using digital watermarking to help distinguish real images from AI-generated ones, known as deepfakes.

They will also publicly report flaws and risks in their technology, the White House said, including effects on fairness and bias—although both are subjective matters whose impact may be relative to the biases of those doing the reporting.

The voluntary commitments are meant to be an immediate way of addressing risks ahead of a longer-term push to get Congress to pass laws regulating the technology.

Some advocates for AI regulations said Biden’s move is a start but more needs to be done to hold the companies and their products accountable.

“History would indicate that many tech companies do not actually walk the walk on a voluntary pledge to act responsibly and support strong regulations,” said a statement from James Steyer, founder and CEO of the nonprofit Common Sense Media.

Senate Majority Leader Chuck Schumer, D-N.Y., who has been a robust advocate of government coercion and censorship when it gives an advantage to Democrats, has said he will introduce legislation to regulate AI.

He has held a number of briefings with government officials to educate senators about an issue that’s attracted bipartisan interest.

A number of technology executives have called for regulation, and several went to the White House in May to speak with Biden, Vice President Kamala Harris and other officials.

But some experts and upstart competitors worry that the type of regulation being floated could be a boon for deep-pocketed first movers led by OpenAI, Google and Microsoft, as smaller players are elbowed out by the high cost of making their AI systems, known as large language models, adhere to regulatory strictures.

The software trade group BSA, which includes Microsoft as a member, said Friday that it welcomed the Biden administration’s efforts to set rules for high-risk AI systems.

“Enterprise software companies look forward to working with the administration and Congress to enact legislation that addresses the risks associated with artificial intelligence and promote its benefits,” the group said in a statement.

A number of countries have been looking at ways to regulate AI, including European Union lawmakers who have been negotiating sweeping AI rules for the 27-nation bloc.

U.N. Secretary-General Antonio Guterres recently claimed the United Nations was “the ideal place” to adopt global standards and appointed a board that will report back on options for global AI governance by the end of the year. Given the U.N.’s own troubling record of globalist overreach, any move by U.S. policymakers to embrace it would likely generate skepticism and backlash.

The United Nations chief also said he welcomed calls from some countries for the creation of a new U.N. body to support global efforts to govern AI, inspired by such models as the International Atomic Energy Agency or the Intergovernmental Panel on Climate Change.

The White House said Friday that it has already consulted on the voluntary commitments with a number of countries.

Adapted from reporting by the Associated Press
