(Molly Bruns, Headline USA) As concerns surrounding the potential world-ending power of artificial intelligence continue to grow, other unanticipated consequences of the technology have raised red flags.
According to Ars Technica, AI tools have been used to generate a massive number of extremely lifelike, sexually explicit images of children, stoking concern among child safety experts.
“Children’s images, including the content of known victims, are being repurposed for this really evil output,” said Rebecca Portnoff of Thorn, a nonprofit focused on the safety of children.
“Victim identification is already a needle in a haystack problem, where law enforcement is trying to find a child in harm’s way. The ease of using these tools is a significant shift, as well as the realism. It just makes everything more of a challenge.”
Not only has the rise of AI imagery made it more difficult for law enforcement to protect children, but verifying whether images are real or fake can also take significant amounts of time.
There are safety tools that flag known images and detect when they are reshared on online platforms; however, those tools only work on previously reported material. AI-generated imagery has complicated the issue for law enforcement and child safety experts, as pedophiles have begun exploiting that gap.
A poll conducted by ActiveFence found that an estimated 80% of roughly 3,000 respondents admitted to "us[ing] or intend[ing] to use AI tools to create sexual abuse images."
The images have spread beyond the dark web: social media, public forums and other websites have all hosted the pornographic content.
“Once circulated, victims can face significant challenges in preventing the continual sharing of the manipulated content or removal from the Internet,” said a recent alert from the FBI. “This leaves them vulnerable to embarrassment, harassment, extortion, financial loss, or continued long-term re-victimization.”