(Robert Jonathan, Headline USA) Constitutional law expert Jonathan Turley says he was smeared by the artificial intelligence chatbot ChatGPT, with an accusation that could potentially ruin the career of anyone working in academia.
Citing a March 2018 Washington Post story, the bot reportedly claimed that a female law student had accused Turley of making inappropriate comments and advances toward her during a class trip to Alaska under the auspices of the Georgetown University Law Center.
Turley explained, however, that the “AI-driven defamation” was fake news.
“There are a number of glaring indicators that the account is false. First, I have never taught at Georgetown University. Second, there is no such Washington Post article. Finally, and most important, I have never taken students on a trip of any kind in 35 years of teaching, never went to Alaska with any student, and I’ve never been accused of sexual harassment or assault.”
Turley, an influential law professor at George Washington University in D.C., told the Post that the invented scandal “was quite chilling” and that “an allegation of this kind is incredibly harmful.”
The issue emerged when, as part of a research project, UCLA law professor Eugene Volokh asked the ChatGPT software for five examples of sexual harassment by law professors. Three of the five apparently were false, resting on fabricated articles attributed to mainstream media outlets.
Prof. Turley, a self-described liberal who has earnestly spoken out against the way the Democrats weaponize the legal system against their political opponents, particularly former President Trump, revealed that he has received death threats and faced attempts to get him fired “due to my conservative legal opinions.”
Politically motivated false claims, contained in nonexistent, fabricated news articles and spread or even generated by AI technology, take this vilification to an exponential level, he implied.
In this instance, Turley wondered “why would an AI system make up a quote, cite a nonexistent article and reference a false claim? The answer could be because AI and AI algorithms are no less biased and flawed than the people who program them.”
You can be defamed by AI and these companies merely shrug that they try to be accurate. In the meantime, their false accounts metastasize across the Internet…https://t.co/uqiIf01n1s
— Jonathan Turley (@JonathanTurley) April 6, 2023
“Even if people can prove, as in my case, that a story is false, companies can ‘blame it on the bot’ and promise only tweaks to the system,” Turley said.
“The technology creates a buffer between those who get to frame facts and those who get framed. The programs can even, as in my case, spread the very disinformation that they have been enlisted to combat.”
Prof. Volokh also warned about the potential for AI to create chaos in the personal or professional lives of individuals.
He told the Post that “This is going to be the new search engine. The danger is people see something, supposedly a quote from a reputable source … [and] people believe it.”
Section 230 of the Communications Decency Act of 1996 (which is currently the subject of a pending U.S. Supreme Court case) protects Internet platforms from most liability lawsuits prompted by user-posted content.
The Post noted, however, that “experts say it’s unclear whether tech companies will be able to use that shield if they were to be sued for content produced by their own AI chatbots.”
Consistent with Silicon Valley ideology, ChatGPT has already been perceived as having a left-wing spin, including the censoring of conservative views.
As for the Trump indictment in Manhattan, Turley described it as, among other things, “legally pathetic.”