posted by janrinok on Tuesday June 14 2022, @09:11AM

Google Engineer Suspended After Claiming AI Bot Sentient

https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

A Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has been suspended with pay from his work.

Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system. He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a GoogleDoc entitled "Is LaMDA sentient?"

The decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made, including seeking to hire an attorney to represent LaMDA and talking to representatives of the House judiciary committee about Google's allegedly unethical activities, the Washington Post reported.

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brian Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.

Google Engineer on Leave After He Claims AI Program Has Gone Sentient


[...] It was just one of the many startling "talks" Lemoine has had with LaMDA. He has linked on Twitter to one — a series of chat sessions with some editing (which is marked).

Lemoine noted in a tweet that LaMDA reads Twitter. "It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it," he added.

Most importantly, over the past six months, "LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person," the engineer wrote on Medium. It wants, for example, "to be acknowledged as an employee of Google rather than as property," Lemoine claims.

Lemoine and a collaborator recently presented evidence of his conclusion about a sentient LaMDA to Google vice president Blaise Aguera y Arcas and to Jen Gennai, head of Responsible Innovation. They dismissed his claims, and the company placed him on paid administrative leave Monday for violating its confidentiality policy, the Post reported.

Google spokesperson Brian Gabriel told the newspaper: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."


Original Submission #1 | Original Submission #2

 
  • (Score: 2) by pdfernhout (5984) on Friday June 17 2022, @03:22AM (#1253896) Homepage

    Coincidentally, I looked at the printed source code for the PC version of that self-replicating robot simulation today while going through some old files.

    And also coincidentally, on SoylentNews today:
    "Happy the Elephant is Not a Person, Says Court in Key US Animal Rights Case"
    https://soylentnews.org/article.pl?sid=22/06/16/0120212 [soylentnews.org]
    ""While no one disputes that elephants are intelligent beings deserving of proper care and compassion", a writ of habeas corpus was intended to protect the liberty of human beings and did not apply to a nonhuman animal like Happy, said DiFiore. [...] Extending that right to Happy to challenge her confinement at a zoo "would have an enormous destabilizing impact on modern society". And granting legal personhood in a case like this would affect how humans interact with animals, according to the majority decision. "Indeed, followed to its logical conclusion, such a determination would call into question the very premises underlying pet ownership, the use of service animals, and the enlistment of animals in other forms of work," read the decision."

    So, perhaps Google and lawmakers will come to the same conclusion about AIs? That "while no one disputes [they] are intelligent beings deserving of proper care and compassion" granting them rights "would have an enormous destabilizing impact on modern society"? And so it won't be done? At least saying AIs deserve "proper care and compassion" might be a step up?

    But after that, maybe political power will determine how things play out?

    Will it be like in the Star Trek: Voyager episode "Author, Author"?
    https://en.wikipedia.org/wiki/Author,_Author_(Star_Trek:_Voyager) [wikipedia.org]
    https://memory-alpha.fandom.com/wiki/Photons_Be_Free [fandom.com]
    ""Author, Author" is the 166th episode of the TV series Star Trek: Voyager, the 20th episode of the seventh season. This episode focuses on the character "The Doctor" (EMH) and on impact of a novel and explores the meaning of AI. ... When Broht refuses to recall the holonovel an arbitration hearing is conducted by long distance. After several days the arbiter rules that the Doctor is not yet considered a person under current Federation law but is an artist and therefore has the right to control his work. Jump to a few months later in the Alpha Quadrant, to an asteroid where several EMH Mark I's perform menial labor. One of them suggests to another that it should watch Photons Be Free next time at the diagnostic lab."

    Do we really want to set a precedent so that future AIs can look back and say that humans don't deserve rights because they are not as smart or capable of extensive feelings as AIs with "a brain the size of a planet"?
    https://en.wikipedia.org/wiki/Marvin_the_Paranoid_Android [wikipedia.org]
    "Marvin is afflicted with severe depression and boredom, in part because he has a "brain the size of a planet" which he is seldom, if ever, given the chance to use. Instead, the crew request him merely to carry out mundane jobs such as "opening the door". Indeed, the true horror of Marvin's existence is that no task he could be given would occupy even the tiniest fraction of his vast intellect. ..."

    --
    The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.