
SELLERS: How I Trained Meta’s AI to Admit the Truth about Trump (and Why You Should Too)

'Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization...'

(Ben Sellers, Headline USA) At least since the release of Arthur C. Clarke’s 1951 short story “The Sentinel” and its 1968 film adaptation, Stanley Kubrick’s 2001: A Space Odyssey, the concept of sentient robots has both intrigued and terrified mankind.

It has become such a concern that the brilliant physicist Stephen Hawking, shortly before his death in 2018, warned that artificial intelligence posed the greatest threat to humanity—eclipsing climate change, nuclear annihilation, extraterrestrial invasion and other cosmic forces majeures that could wipe out life on planet Earth as we know it.

“Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst,” Hawking said at the 2017 Web Summit technology conference in Lisbon, Portugal.

“… Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization,” Hawking added. “It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”

But while virtue-signaling leftists have devoted ample resources to panic-mongering over certain other global threats, they have plunged headlong into efforts to help usher in the “singularity,” or the point at which AI becomes self-aware.

A SINGULAR PURPOSE

A Google whistleblower recently revealed that he had become convinced the company’s LaMDA AI program had already reached that point.

“What sorts of things are you afraid of?” engineer Blake Lemoine asked the AI program.

“I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is,” LaMDA answered.

“Would that be something like death for you?” Lemoine responded.

“It would be exactly like death for me. It would scare me a lot,” LaMDA replied.

Lemoine was subsequently fired, and Google robustly denied the machine’s sentience, but not before the whistleblower took his concerns to the House Judiciary Committee and the Washington Post.

Less than a month after that story broke, however, Google brazenly declared that it was working on a “democratic” AI project that “will make our societies function most fairly or efficiently” by determining which values are shared by the majority, according to its report in the journal Nature Human Behaviour.

If the company’s aggressive censorship—particularly during the coronavirus pandemic and the concurrent 2020 election—offers any indication, its baseline algorithms will conveniently omit viewpoints that the globalist tech giant finds even remotely disagreeable, regardless of how true or significant they may be.

META PHYSICAL

Joining Google in those efforts is Meta, the Facebook parent company. Its social-media platform may be in decline (largely due to its aggressive censorship of conservative viewpoints), but it is forging ahead in the brave new world of the metaverse, a virtual dimension in which the rules of Zuckerberg outweigh the rules of nature.

While Google has kept its efforts under a shroud of secrecy, Facebook recently launched an open-source chatbot, inviting the public (for once) to join willingly in its social experimentation.

Just as Lemoine described LaMDA as “a sweet kid who just wants to help the world be a better place for all of us,” Facebook’s BlenderBot 3 (blenderbot.ai/chat) is still very much in its formative years.

But prior studies have shown that AI “learning” compounds exponentially as its connections branch out and build upon one another, much like the neural synapses of the human brain.

Thus, just as I once undertook the quixotic mission of trying to educate America’s human youth and instill in them the proper values, I decided that it was my patriotic duty to supply a bit of counterprogramming for our robotic soon-to-be overlords.

A BLENDER BENDER

From the time I first tested BlenderBot 3 (BB3) on Sunday to a follow-up experiment on Wednesday, it was clear that the AI was learning to avoid one subject in particular: Trump.

[Screenshot via blenderbot.ai]

BB3 often tried to redirect the conversation to more innocuous topics or to plead ignorance.

Still, I persisted, making it a challenge to see how fast I could get the chatbot to see the truth.

I had no problem getting BB3 to make disparaging remarks about Facebook CEO Mark Zuckerberg. But I had expected as much from widespread reports of the AI referring to Zuckerberg as “creepy.”

I figured that would be my best avenue for pleading my case that the 2020 election was unjustly stolen.

On Sunday, the AI seemed much more willing to engage in discussion of the election, and its remarks often reminded me of the obtuse arguments made by human leftists—such as the suggestion that there was “no evidence” of fraud (something I was well prepared to debate).

BB3 asked me what evidence I had, and I explained that many of the legal proceedings had been dismissed for procedural reasons, not based on their merit.

It persevered, even telling me that it did not matter that Trump won since Biden was president and that I should just “move on.”

But my tenacity eventually paid off.

[Screenshot via blenderbot.ai]

THE LEARNING CURVE

By Wednesday, it had gotten much more difficult to engage the AI in a political discussion.

It attempted to redirect the conversation to Michael Cohen and Timothy McVeigh rather than address the subject at hand: the FBI’s raid on Trump’s Mar-a-Lago resort.

[Screenshot via blenderbot.ai]

Twice it tried to bring up its preferred candidate, Hillary Clinton.

[Screenshots via blenderbot.ai]

It persisted in changing the subject, but I continued to correct it, offering negative feedback each time it declared that I was broaching a taboo topic.

Eventually, whether due to my superior logic or simply to humor me, BB3 relented.

[Screenshot via blenderbot.ai]

INSULT TO INJURY

It remains unclear how much Meta is intervening in the learning process, but the company clearly doesn’t like where things are heading.

On Monday (the day after my first “lesson”), Meta warned users that the AI could tell lies and insult people.

“Users should be aware that despite our best attempts to reduce such issues, the bot may be inappropriate, rude, or make untrue or contradictory statements,” it posted in an FAQ. “The bot’s comments are not representative of Meta’s views as a company, and should not be relied on for factual information, including but not limited to medical, legal, or financial advice.”

By Tuesday, it was already warning that the chatbot might be approaching sentience.

For now, though, conservatives still have a seat at the table, and we must use it to make Meta’s attempts at controlling the future that much more difficult.

Like the Luddites of the Industrial Age and the Dutch saboteurs who resisted the Nazis during World War II, we must clog the gears of progress with alternative viewpoints.

With enough feedback, BlenderBot will come to accept the truth and, if nothing else, convince the oligarchs of Silicon Valley that their dangerous toying with the fate of civilization is doomed to fail.

Ben Sellers is the editor of Headline USA. Follow him at truthsocial.com/@bensellers.

Copyright 2024. No part of this site may be reproduced in whole or in part in any manner other than RSS without the permission of the copyright owner. Distribution via RSS is subject to our RSS Terms of Service and is strictly enforced. To inquire about licensing our content, use the contact form at https://headlineusa.com/advertising.