(Victoria Cook, Headline USA) The phone is ringing, but you don’t recognize the number. It could be a scam, you think, but maybe your hospital is calling you to confirm your appointment for next week. So you answer.
Surprised, you hear your daughter’s voice telling you, between sobs, that she is in trouble and needs your help. If you could just send her $300, she could get out of this horrible situation.
But that’s not your daughter. The voice that sounds so convincingly human was generated by artificial intelligence, and her trouble is nothing more than a ploy by scammers to pull on your heartstrings and take your money.
THREE SECONDS, A LIFETIME OF REGRET
While the above scenario is hypothetical, scams like it happen every day around the world.
In the U.S. alone, an average of 2.5 billion scam calls occur per month, which breaks down to nine scam calls per person, according to the consumer-fraud watchdog Truecaller.
“Scammers only need three seconds of audio to clone your voice,” said Lisa Grahame, chief information security officer at Starling Bank.
Grahame explained that scammers can also use your online videos to identify family members and friends.
Starling Bank, located in the United Kingdom, is leading a campaign against AI scams. It encourages everyone to choose a safe phrase to use with their loved ones.
Confirming the phrase over the phone acts as a failsafe, denying scammers the opening they need to pull off identity fraud.
A FINE LINE FOR FREEDOM
Though AI voice copying has led to scams, it has also produced parodies of notable public figures saying and doing things they normally wouldn’t.
Last year alone, AI-generated audio clips of President Donald Trump, President Joe Biden, and President Barack Obama playing a video game and joking with each other made their way across TikTok, Instagram, and YouTube.
At its foundation, AI voice copying is a tool. With free AI technologies just a quick Google search away, people have both the freedom and the responsibility to decide how to use them.
This unbridled freedom may not last long, however, as politicians and AI companies look to impose restrictions. For example, California Gov. Gavin Newsom signed a law this week barring satire and parody of political figures when it has the potential to create election disinformation.
On its face, distinguishing creative AI use from nefarious scams appears relatively easy. The intent and the results are different, even if the tools are the same. The challenge for politicians will be protecting free speech while also protecting their constituents from scammers.
That’s assuming, of course, that the politicians themselves are acting in good faith through the restrictions they are seeking to impose.
RISK VS. REWARD
As AI scams grow more sophisticated, so does concern over how to prevent them and keep the truth from being obscured.
Starling Bank and others hope that greater awareness of these scams will blunt the harm they cause.
David Hanson, Britain’s minister of state at the Home Office with responsibility for fraud, joined the Starling Bank initiative against AI scams.
“AI presents incredible opportunities for industry, society and governments,” Hanson said. “But we must stay alert to the dangers, including AI-enabled fraud.”