
Peter Thiel-Backed Cybertech Firm Says Trump ‘Bots’ Swarming GOP Rivals

'There’s never been more noise online...'

(Headline USA) Over the past 11 months, someone created thousands of fake, automated Twitter accounts—perhaps hundreds of thousands of them—to offer a stream of praise for Donald Trump, an Israel-based cybertech firm claims.

The sprawling bot network was uncovered by researchers at Cyabra, an Israeli tech firm that shared its findings with the Associated Press.

The firm has financial backing, in part, from conservative tech billionaire Peter Thiel, a one-time Trump ally who has tentatively signaled support for Florida Gov. Ron DeSantis in the 2024 GOP primary.

Thiel contributed in 2021 to a funding round the Tel Aviv-based company had launched to develop software tools that could measure online misinformation and disinformation, according to TechCrunch.

While the identity of those behind the network of fake accounts is unknown, Cyabra’s analysts determined that it was likely created within the U.S.

Besides posting adoring words about the former president, the fake accounts ridiculed Trump’s critics from both parties and attacked Nikki Haley, the former South Carolina governor and U.N. ambassador who is challenging her onetime boss for the 2024 Republican presidential nomination.

When it came to DeSantis, the bots aggressively suggested that the Florida governor couldn’t beat Trump, but would be a great running mate.

“One account will say, ‘Biden is trying to take our guns; Trump was the best,’ and another will say, ‘Jan. 6 was a lie and Trump was innocent,'” said Jules Gross, the Cyabra engineer who first discovered the network. “Those voices are not people. For the sake of democracy I want people to know this is happening.”

The allegations are likely to raise further skepticism among Trump supporters, long jaded by the phony Russia collusion hoax, which attributed the former president’s 2016 victory over Hillary Clinton to Russian bots.

That many of the so-called bots are expressing valid points of concern or criticism suggests that there may, in fact, be a counter-campaign to smear Twitter users as bots in order to promote more censorship or disinformation.

But the Cyabra allegations suggest that bot campaigns may no longer be simply a left-wing point of attack as conservative candidates prepare for a brutal, no-holds-barred primary to determine the party’s future under the direst of political circumstances.

The new pro-Trump network is actually three different networks of Twitter accounts, all created in huge batches in April, October and November 2022. In all, researchers believe hundreds of thousands of accounts could be involved.

The accounts all feature personal photos of the alleged account holder as well as a name. Some of the accounts posted their own content, often in reply to real users, while others reposted content from real users, helping to amplify it further.

“McConnell… Traitor!” wrote one of the accounts, in response to an article in a conservative publication about GOP Senate leader Mitch McConnell, one of several Republican critics of Trump targeted by the network.

One way of gauging the impact of bots is to measure the percentage of posts about any given topic generated by accounts that appear to be fake. The percentage for typical online debates is often in the low single digits. Twitter itself has said that less than 5% of its active daily users are fake or spam accounts.

When Cyabra researchers examined negative posts about specific Trump critics, however, they found far higher levels of inauthenticity. Nearly three-fourths of the negative posts about Haley, for example, were traced back to fake accounts.
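
By way of illustration only, the kind of measurement described above can be sketched in a few lines of Python. Everything here is an assumption made for the example—the Post fields, the bulk-creation heuristic, and the sample values—not Cyabra’s data or methodology.

```python
# Minimal, hypothetical sketch: the share of posts on a topic that come from
# accounts flagged as likely fake. Fields and heuristic are illustrative only.
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    author_created: str  # account creation date, ISO format, e.g. "2022-10-15"
    text: str

def share_from_suspected_fakes(posts, is_suspected_fake) -> float:
    """Fraction of posts whose author is flagged by the supplied heuristic."""
    if not posts:
        return 0.0
    flagged = sum(1 for p in posts if is_suspected_fake(p))
    return flagged / len(posts)

# Stand-in heuristic: flag accounts registered on a few bulk-creation dates
# (dates are made up for illustration).
BULK_DATES = {"2022-04-01", "2022-10-15", "2022-11-01"}
is_fake = lambda p: p.author_created in BULK_DATES

posts = [
    Post("longtime_user", "2015-06-02", "Policy thread..."),
    Post("maga_101", "2022-10-15", "Haley is a RINO!"),
    Post("patriot_22", "2022-10-15", "Trump was the best"),
]
print(f"{share_from_suspected_fakes(posts, is_fake):.0%}")  # -> 67%
```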

The network also helped popularize a call for DeSantis to join Trump as his vice presidential running mate—an outcome that would serve Trump well and allow him to avoid a potentially bitter matchup if DeSantis enters the race.

The same network of accounts shared overwhelmingly positive content about Trump and contributed to an overall false picture of his support online, researchers claimed.

“Our understanding of what is mainstream Republican sentiment for 2024 is being manipulated by the prevalence of bots online,” the Cyabra researchers insisted.

The triple network was discovered after Gross analyzed tweets about different national political figures and noticed that many of the accounts posting the content were created on the same day. Most of the accounts remain active, though they have relatively modest numbers of followers.
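
That same-day-creation signal can be sketched roughly as follows; the field names and the cluster threshold are assumptions for illustration, not the researcher’s actual tooling.

```python
# Hypothetical sketch: find creation dates shared by a suspiciously large
# number of accounts posting on one topic (the batch-creation pattern above).
from collections import Counter

def creation_date_clusters(accounts, min_cluster=50):
    """Return creation dates shared by at least `min_cluster` accounts.

    `accounts` is an iterable of (handle, created_date) pairs, where
    created_date is an ISO string such as "2022-10-15".
    """
    counts = Counter(created for _, created in accounts)
    return {date: n for date, n in counts.items() if n >= min_cluster}

# Usage (with made-up data): a few dates each accounting for hundreds of
# handles would match the pattern the researchers describe.
# clusters = creation_date_clusters(accounts_seen_on_topic)
```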

A message left with a spokesman for Trump’s campaign was not immediately returned.

Most bots aren’t designed to persuade people, but to amplify certain content so more people see it, according to Samuel Woolley, a professor and misinformation researcher at the University of Texas whose most recent book focuses on automated propaganda.

When a human user sees a hashtag or piece of content from a bot and reposts it, they’re doing the network’s job for it, and also sending a signal to Twitter’s algorithms to boost the spread of the content further.

“Bots absolutely do impact the flow of information,” Woolley said. “They’re built to manufacture the illusion of popularity. Repetition is the core weapon of propaganda and bots are really good at repetition. They’re really good at getting information in front of people’s eyeballs.”

Until recently, most bots were easily identified thanks to their clumsy writing or account names that included nonsensical words or long strings of random numbers. As social media platforms got better at detecting these accounts, the bots became more sophisticated.
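
One of those older giveaways—handles padded with long runs of random digits—can be checked with a trivial heuristic like the sketch below. The threshold is an illustrative assumption; real platform detection is far more sophisticated.

```python
# Hypothetical sketch of a crude handle check: flag usernames containing a
# long (6+ digit) numeric string, a common trait of auto-generated accounts.
import re

LONG_DIGIT_RUN = re.compile(r"\d{6,}")

def looks_machine_generated(handle: str) -> bool:
    """Return True if the handle contains a long run of digits."""
    return bool(LONG_DIGIT_RUN.search(handle))

print(looks_machine_generated("patriot19374820"))  # True
print(looks_machine_generated("jane_doe"))         # False
```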

So-called cyborg accounts are one example: bots that are periodically taken over by a human user who can post original content and respond to other users in human-like ways, making them much harder to sniff out.

Bots could soon get much sneakier thanks to advances in artificial intelligence. New AI programs can create lifelike profile photos and posts that sound much more authentic.

Bots that sound like a real person and deploy deepfake video technology may challenge platforms and users alike in new ways, according to Katie Harbath, a fellow at the Bipartisan Policy Center and a former Facebook public policy director.

“The platforms have gotten so much better at combating bots since 2016,” Harbath said. “But the types that we’re starting to see now, with AI, they can create fake people. Fake videos.”

These technological advances likely ensure that bots have a long future in American politics—as digital foot soldiers in online campaigns, and as potential problems for both voters and candidates trying to defend themselves against anonymous online attacks.

“There’s never been more noise online,” said Tyler Brown, a political consultant and former digital director for the Republican National Committee. “How much of it is malicious or even unintentionally unfactual? It’s easy to imagine people being able to manipulate that.”

Adapted from reporting by the Associated Press
