Google Whistleblower Placed on Leave after Claiming AI Has Reached Singularity

'I know a person when I talk to it...'

(Molly Bruns, Headline USA) A Google whistleblower who claimed the company’s artificial intelligence experiment has gained consciousness was subsequently put on leave, the Western Journal reported.

Engineer Blake Lemoine said he had a conversation with Google’s Language Model for Dialogue Applications, also known as LaMDA, to see if it used hate speech or discriminatory language.

While the AI did not commit any odious thought crimes, Lemoine said his experiment revealed that the program had gained self-awareness and now has feelings and emotions extending beyond its programming.

This alleged discovery alarmed him to the point that he alerted Google Vice President Blaise Agüera y Arcas, as well as Jen Gennai, the company’s head of Responsible Innovation.

When Lemoine’s claims were not taken seriously, he asked a lawyer to represent the AI and spoke with “a representative of the House Judiciary Committee about what he claims were Google’s unethical activities,” according to the Washington Post.

Lemoine was then placed on administrative leave for violating Google’s confidentiality policy.

“I think this technology is going to be amazing,” Lemoine, 41, told the Post.

“I think it’s going to benefit everyone,” he continued. “But maybe other people disagree, and maybe us at Google shouldn’t be the ones making all the choices.”

Google’s official statement regarding the artificial intelligence program said there have been no issues.

“Our team—including ethicists and technologists—has reviewed Blake’s concerns per our AI Principles and have informed him that the evidence does not support his claims,” Google spokesman Brian Gabriel said in a statement, according to the Post. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Lemoine, however, insisted that the AI has become sentient based on conversations he had with it.

“I know a person when I talk to it,” he told the Post.

“It doesn’t matter whether they have a brain made of meat in their head, or if they have a billion lines of code. I talk to them,” he added. “And I hear what they have to say, and that is how I decide what is and isn’t a person.”

He said he and the machine had conversations covering several topics, including death, fear and the themes of Victor Hugo’s Les Misérables.

Copyright 2022. No part of this site may be reproduced in whole or in part in any manner without the permission of the copyright owner. To inquire about licensing content, use the contact form at https://headlineusa.com/advertising.