
Washington Grapples with AI’s Brave New World

‘The spies from the ministry told me that the Uyghur genocide was the beginning of an experiment in AI surveillance...’

(By Susan Crabtree, RealClear Wire) If viewers look closely enough at a recently released video by Florida Gov. Ron DeSantis’s campaign displaying a collage of images of former President Donald Trump embracing and kissing former chief medical advisor Dr. Anthony Fauci, there are telltale signs of an AI deepfake.

The positioning of Trump’s and Fauci’s faces is unnatural, and the text on the White House seal in the background seems off – almost Cyrillic-looking. A Google search produces no actual images of the two men embracing, let alone of Trump smooching the infectious disease expert and one of his top COVID-19 response leaders.

Even though establishing the images’ authenticity requires close inspection, there’s little subtlety in the DeSantis campaign’s message. Fauci is a top target of conservative anger and loathing for advocating public shutdowns, mask-wearing, and near-universal vaccinations during the COVID pandemic. The video begins with dramatic music and clips of Trump repeatedly telling contestants “You’re Fired” on his business-themed reality show, The Apprentice, then cuts to Trump’s remarks calling Fauci a “wonderful guy” and his repeated explanations that he couldn’t fire Fauci because there would be a firestorm on the left.

Campaign watchdog groups are crying foul, pressing the DeSantis campaign to take down the video, but in an already free-wheeling battle for the GOP nomination, his supporters may consider the AI-generated ad fair play.

Just a few weeks ago, Trump reacted to DeSantis’ glitch-filled campaign rollout on Twitter by posting on Truth Social an obviously Photoshopped image of DeSantis riding a rhinoceros. Trump also shared a fake campaign video of DeSantis announcing his campaign at what appeared to be a Twitter Spaces event similar to the one DeSantis used, but with the participants altered to include George Soros, the World Economic Forum’s Klaus Schwab, Dick Cheney, the FBI, Adolf Hitler, and the devil.

Robert Weissman, president of Public Citizen, a liberal watchdog group, called all deceptive generative AI ads and videos “a significant threat to democracy as we know it” and urged every party and candidate to commit not to employ the deepfakes, which “definitionally involve tricking the public into believing something that is not true.”

On Capitol Hill, senators on the Judiciary Committee’s human rights subcommittee grappled with the brave new AI campaign world amid a growing number of calls to regulate the emerging technology. Senators and the experts who appeared before the panel warned of the dangers the technology poses to elections around the world. They debated its profound implications for civil rights, the criminal justice system, privacy, the potential disruption of labor markets, and the proliferation of scams, as well as its potential for producing life-saving health innovations.

The most dramatic testimony focused on sinister or misleading uses of AI.

“In the past, election operatives have spread destructive information, but now bad actors can easily use AI to exponentially grow and personalize voter suppression and other targeting,” Alexandra Reeve Givens, president of the Center for Democracy & Technology, told the senators. “AI-generated images can also impact public understanding of political figures and events. Videos and images have already been altered to compromise officials.”

One hearing witness, Jennifer DeStefano of Scottsdale, Ariz., described a horrifying scam attempted against her in which the perpetrators used AI to replicate her teenage daughter’s voice and desperate cries for help, then threatened to harm the teen if DeStefano didn’t immediately pay a ransom.

Before transferring any funds, a terrified DeStefano learned that her daughter was safe at home and that the phone call had used AI to supercharge a deeply disturbing attempted extortion scam. The Scottsdale police department declined to investigate because a ransom was never paid, a position that Sen. Jon Ossoff, a Georgia Democrat who chaired the hearing, called woefully inadequate and promised to try to rectify.

“We intend to look into that more – namely in existing wire fraud statutes and other state and federal statutes that may create a criminal claim for precisely the circumstances you raised,” Ossoff told DeStefano. “This conduct should be criminal and severely punished. You have my commitment to identify paths to ensuring that families are protected from what you had to go through.”

Sen. Marsha Blackburn, a Tennessee Republican, pointed to China’s wide-scale use of AI to carry out its genocide and mass incarceration of the Uyghur Muslim minority. She highlighted Beijing’s 2017 National AI Development Plan, in which China declared its goal of becoming the world leader in AI by 2030, and a McKinsey & Company analysis predicting that China’s growth in AI could account for up to $600 billion in value over the next decade.

“We know that China has used technology – for example, exploiting vulnerabilities in Apple’s iPhones – to track the Uyghur Muslims in Xinjiang province,” Blackburn said. “The CCP uses facial recognition to track citizens throughout the country, and according to one report, logged details about people as young as nine days old.”

“This should concern everyone who cares about preserving the freedoms and democratic values we champion in America,” she added.

Geoffrey Cain, a senior fellow at the Foundation for American Innovation, previously worked as an investigative journalist in China and was one of the first reporters to document and expose the massive, systematic scale of the CCP’s surveillance state in Xinjiang, the far western region home to most of the nation’s Uyghur population.

Since at least 2017, the CCP has used its vast AI-powered facial recognition surveillance system to create the largest internment of ethnic minorities since the Holocaust. Cain recalled that, as part of the reporting for his book, “The Perfect Police State,” he moved to Turkey and tracked down former Chinese intelligence officers from the Ministry of State Security who had defected.

“The spies from the ministry told me that the Uyghur genocide was the beginning of an experiment in AI surveillance – that the CCP plan was to enlist companies and then expand the experiment nationwide in China and globally wherever possible,” he said.

Recently, Cain said, the CCP unveiled AI-powered alarms that notify the police when someone unfurls a banner, when a foreign journalist is traveling to certain parts of the country, and when someone from an ethnic minority is present.

ByteDance, the Chinese firm that owns the popular social media app TikTok, has recently been accused by a whistleblower of running an in-house CCP committee with access to all the app’s data, including data stored in the U.S., contradicting company executives’ testimony to Congress. Other Chinese firms under U.S. sanctions have emerged as billion-dollar companies with the backing of the Chinese state and the involvement of American venture capital firms.

“Given the CCP’s enormous success at censorship so far, I believe that it will once again succeed at coercing and co-opting Chinese and American technology firms and will transform generative AI into a tool of state oppression,” he said. “We must abandon the misguided idealism of working with AI companies and government institutes in China.”

Considering China’s horrific human rights record, Cain also warned against any effort to allow Beijing to help build the “guard rails” for AI, a suggestion that Sam Altman, the chief executive officer of OpenAI, made at a Beijing conference on Friday.

Altman called for enhanced collaboration between the U.S. and China on AI development in a keynote address for a conference hosted by the Beijing Academy of Artificial Intelligence. OpenAI’s products, such as ChatGPT, are not yet available in China. Still, Altman has been eager to meet with policymakers around the world to encourage and influence the development of AI regulations as Congress and international bodies contemplate new laws aimed at protecting privacy and consumers.

Last month, Altman, whose San Francisco start-up rattled the AI world after it released ChatGPT last year, told Congress that government intervention “will be critical to mitigating the risks of increasingly powerful” AI systems.

Directly countering Altman, Cain argued that the United States should lead the way in building democratic, “human-rights first” AI standards, working with the United Nations and other international organizations. He also urged Congress to strengthen U.S. microchip supply chains and pass new laws prohibiting American technology companies from helping China build its AI surveillance state, laws that go beyond sanctions to include “prison time” for executives who help develop any form of AI in partnership with a Chinese entity.

“Sanctions and export controls are not enough,” Cain said.
