(Thérèse Boudreaux, The Center Square) A bipartisan group of 26 U.S. lawmakers has sent letters to seven major tech companies requesting updates on how the platforms plan to counter the growing prevalence of pornographic “deepfakes” on social media.
The number of artificially generated, sexually explicit impersonations of nonconsenting individuals increased by 550% from 2019 to 2023, and deepfake pornography now makes up 98% of all deepfake videos online, the lawmakers noted in each of the seven letters, which were addressed to Google, Apple, X, ByteDance, Snapchat, Microsoft and Meta.
“[Deepfake technology] has enabled abusers to create and disseminate realistic, non-consensual pornographic content, causing emotional, psychological, and reputational harm,” the lawmakers, led by Reps. Debbie Dingell, D-Mich., and August Pfluger, R-Texas, wrote. “The spread of this content, often with little recourse for victims, underscores the need for stronger and effective protections.”
The lawmakers highlighted specific examples of how each company had failed to adequately or permanently address instances of deepfake abuse on their platforms.
Google pledged earlier this year to ban advertisements for websites and services that produce deepfake pornography and to implement developer restriction policies and in-app reporting mechanisms.
But recent reports, the lawmakers said, have highlighted that Google continues to promote results for nudity-generation apps.
Also in early 2024, Apple removed three deepfake creation apps from its store, but only after a 404 Media report singled the apps out, showing that loopholes persist in the App Store’s screening processes and developer guidelines.
The lawmakers expressed particular concern about policies implemented by X in May, which allow nudity or sexual content on its platform, including AI-generated deepfakes, as long as the content is “consensually produced and distributed.”
It is unclear how X could determine whether such content was consensually produced, or how it could effectively police it.
ByteDance, the company that owns TikTok, laid out standards in 2023 requiring that all deepfakes or realistic manipulated content be labeled as fake and not include private figures or minors, but it did not outright ban pornographic deepfakes from the platform.
In response to deepfake nude images of a 14-year-old girl circulating on Snapchat in October 2023, Snap Inc. now watermarks all images generated by Snapchat’s AI tools.
The company says it does not allow pornography on the platform and is committed to “ongoing AI literacy efforts,” but gave no clear details on how it is addressing the issue.
The lawmakers asked Microsoft for clarification on the details of its approach to address deepfakes across its platforms, including Bing search results, especially in light of how Microsoft’s Designer tool was used to create pornographic deepfakes of singer Taylor Swift.
They also questioned Meta on why deepfake images were removed for bullying and harassment violations rather than the platform’s pornography policies.
Meta prohibits pornography or sexually explicit ads on its platforms.
The letters requested that each company promptly provide Congress with a detailed outline of how it plans to address the deepfake problem.
With the new Congress set to be sworn in in a few weeks, the 26 lawmakers have pledged to continue combating AI and deepfake exploitation.
“As Congress works to keep up with shifts in technology, Republicans and Democrats will continue to ensure that online platforms do their part to collaborate with lawmakers and protect users from potential abuse,” the lawmakers said.