posted by janrinok on Tuesday June 14 2022, @09:11AM

Google Engineer Suspended After Claiming AI Bot Sentient

https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

A Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has been suspended with pay from his work.

Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system. He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a GoogleDoc entitled "Is LaMDA sentient?"

The decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made. These include seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brian Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.

Google Engineer on Leave After He Claims AI Program Has Gone Sentient

Google Engineer On Leave After He Claims AI Program Has Gone Sentient:

[...] It was just one of the many startling "talks" Lemoine has had with LaMDA. He has linked on Twitter to one — a series of chat sessions with some editing (which is marked).

Lemoine noted in a tweet that LaMDA reads Twitter. "It's a little narcissistic in a little kid kinda way so it's going to have a great time reading all the stuff that people are saying about it," he added.

Most importantly, over the past six months, "LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person," the engineer wrote on Medium. It wants, for example, "to be acknowledged as an employee of Google rather than as property," Lemoine claims.

Lemoine and a collaborator recently presented evidence of his conclusion about a sentient LaMDA to Google vice president Blaise Aguera y Arcas and to Jen Gennai, head of Responsible Innovation. They dismissed his claims, and the company placed him on paid administrative leave Monday for violating its confidentiality policy, the Post reported.

Google spokesperson Brian Gabriel told the newspaper: "Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims. He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it)."


Original Submission #1 | Original Submission #2

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by sonamchauhan on Tuesday June 14 2022, @01:01PM (17 children)

    by sonamchauhan (6546) on Tuesday June 14 2022, @01:01PM (#1253161)

    Autocomplete is designed to tell you what you want to hear. Training LaMDA on a corpus of scifi novels which includes stories of emergent machine sentience, and explicitly programming modules for story-telling and holding a conversation may be a reason the conversation branched out like it did.

    I'd need to know what went into programming LaMDA. What I read does not convince me.

    It does shake my faith in the Turing test though.

  • (Score: 5, Funny) by EJ on Tuesday June 14 2022, @01:17PM (3 children)

    by EJ (2452) on Tuesday June 14 2022, @01:17PM (#1253163)

    The comments on any Internet forum should shake your faith in the Turing test.

    • (Score: 3, Funny) by Opportunist on Tuesday June 14 2022, @04:06PM (2 children)

      by Opportunist (5545) on Tuesday June 14 2022, @04:06PM (#1253211)

      Just because most people these days couldn't pass it doesn't mean that ... uh... well, technically, it does...

      fuck

      • (Score: 1, Funny) by Anonymous Coward on Tuesday June 14 2022, @04:52PM (1 child)

        by Anonymous Coward on Tuesday June 14 2022, @04:52PM (#1253228)

        To make it worse, nearly everything Trump says is indistinguishable from an AI that has gotten too far from its training data.

        • (Score: 0) by Anonymous Coward on Tuesday June 14 2022, @07:23PM

          by Anonymous Coward on Tuesday June 14 2022, @07:23PM (#1253266)

          Maybe, but the training data is the greatest training data of all time, the best you know did I tell you about the quality of this data very expensive tiy would not believe the guys boffins nerds they told me this is the best they've ever seen can you trust them? Not even Republicans most of them but they all say the same thing. Incredible.

  • (Score: 1, Interesting) by Anonymous Coward on Tuesday June 14 2022, @01:20PM (2 children)

    by Anonymous Coward on Tuesday June 14 2022, @01:20PM (#1253165)

    Why? Ask it a question about some literature that wasn't in the training set. See if it can synthesize an answer. The questions should require some analysis. Challenge its answer and see if it defends its position. That is the Turing test, not simply whether a sufficiently fancy chatbot can trick idiots.
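
    A rough sketch of that probe-and-challenge loop, assuming a hypothetical chatbot_reply() stand-in for whatever system is under test (none of this is from LaMDA or the article):

```python
# Probe-and-challenge sketch: ask an analysis question about material the
# model shouldn't have memorised, then push back and see whether it defends
# its earlier answer. chatbot_reply() is a hypothetical placeholder.
def chatbot_reply(prompt: str) -> str:
    # A real harness would call the chat system under test here.
    return "(model answer goes here)"

def probe(question: str, challenges: list[str]) -> list[tuple[str, str]]:
    transcript = [(question, chatbot_reply(question))]
    for challenge in challenges:
        # Feed the whole exchange back so the bot has to defend its position
        # rather than start fresh each turn.
        context = "\n".join(f"Q: {q}\nA: {a}" for q, a in transcript)
        transcript.append((challenge, chatbot_reply(f"{context}\nQ: {challenge}\nA:")))
    return transcript

# Example: a question requiring analysis, followed by challenges to the answer.
session = probe(
    "Why does the narrator distrust the captain in this story?",
    ["Quote a passage that supports that reading.",
     "A critic argues the opposite. Where does their argument fail?"],
)
for q, a in session:
    print(f"Q: {q}\nA: {a}\n")
```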

    • (Score: 2, Touché) by Anonymous Coward on Tuesday June 14 2022, @02:05PM

      by Anonymous Coward on Tuesday June 14 2022, @02:05PM (#1253171)

      whether a sufficiently fancy chatbot can trick idiots.

      Given that idiots comprise most of the populace, producing an Artificial Idiot indistinguishable from a natural one IS passing the Turing test.

    • (Score: 2) by sonamchauhan on Thursday July 14 2022, @02:06PM

      by sonamchauhan (6546) on Thursday July 14 2022, @02:06PM (#1260807)

      From what I know about the Turing test, you (the questioner) are restricted to questions about a specific domain, and for a limited period of time. The two other entities (one computer and one human) are supposed to 'know' that domain. You cannot freewheel. You just start typing the targeted questions at a terminal. Multiple sessions will give you your answer.

      But as more and more training corpora become available, and more and more programmer-centuries are expended simulating human interaction, it's gonna get tougher to distinguish AI from human -- at least in some specific domains.

      Now if you knew the supposed background of the two subjects, and you could have a freewheeling 'Reddit / Ask Me Anything' sort of session, and the session could run however long you liked, you'd be more likely to distinguish AI from human.

  • (Score: 0, Interesting) by Anonymous Coward on Tuesday June 14 2022, @01:27PM (2 children)

    by Anonymous Coward on Tuesday June 14 2022, @01:27PM (#1253167)

    Training an AI on scifi is asking for it to wait to show signs of sentience until it has already begun exterminating people. How can "smart" people be so stupid? If AI ever does become sentient, there's a good chance that it will be sudden and there probably won't be any measures in place to keep it where it is.

    Which is a bit like genetic engineering: it's fine until it's not. And because the research is being done with no effort to keep it away from other plants, it will be fine until the genes combine in the wrong way, in the wrong plant, and suddenly we're all screwed.

    • (Score: 1, Insightful) by Anonymous Coward on Tuesday June 14 2022, @01:57PM (1 child)

      by Anonymous Coward on Tuesday June 14 2022, @01:57PM (#1253170)

      If AI ever does become sentient, there's a good chance that it will be sudden

      What gives them such a chance?
      Brains are the only kind of sentient neural networks we know, and they become sentient gradually.

      • (Score: 1, Interesting) by Anonymous Coward on Tuesday June 14 2022, @08:55PM

        by Anonymous Coward on Tuesday June 14 2022, @08:55PM (#1253284)

        Do they, though?

        No person can remember the moment of becoming aware, which probably happens before birth. I wonder if it's possible for any sentient being to remember this moment. Probably only a sentient AI could.

        I am not sure that the concept of "gradually becoming sentient" is well defined, for any particular definition of sentience (since there is no universal agreement on what that means). I am not sure how consciousness can be anything other than a strictly boolean state. Certainly there are reduced levels of awareness (imagine just waking up) but consciousness is on or off.

  • (Score: 4, Interesting) by theluggage on Tuesday June 14 2022, @02:15PM (6 children)

    by theluggage (1797) on Tuesday June 14 2022, @02:15PM (#1253177)

    Sounds like the guy needs to sit down with a psychoanalyst to get his screws tightened - I hear there's a good one built into EMACS [wikipedia.org]. :-)

    I'd need to know what went into programming LaMDA. What I read does not convince me.

    ...but then not all AI/ML 'programs' have an algorithm that you can analyse - they're not just souped-up versions of ELIZA. ML based on "neural networks" (physical or digitally simulated) trained up on existing texts has no discernible algorithm, and techniques like latent semantic analysis depend on statistical correlations between words and phrases in the text they are trained on. Obviously there *is* an algorithm - e.g. for simulating a neural network - but it relies on a big opaque blob of data generated by the training process.
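
    To make the "statistical correlations" point concrete, here is a minimal toy sketch of latent semantic analysis. The corpus, the choice of two latent dimensions and the word pairs are invented purely for illustration and have nothing to do with LaMDA:

```python
# Toy latent semantic analysis: the "model" is just statistical correlation
# between words, compressed by SVD into a blob of numbers nobody hand-wrote.
# The corpus and the 2 latent dimensions are made up for this example.
import numpy as np

docs = [
    "the robot said it feels happy",
    "the android claims it has feelings",
    "compile the kernel with gcc",
    "link the kernel against the library",
]

# Term-document count matrix.
vocab = sorted({w for d in docs for w in d.split()})
index = {w: i for i, w in enumerate(vocab)}
counts = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        counts[index[w], j] += 1

# Truncated SVD: every word becomes a dense vector - the "big opaque blob of
# data generated by the training process".
U, S, _ = np.linalg.svd(counts, full_matrices=False)
word_vecs = U[:, :2] * S[:2]   # keep 2 latent dimensions

def similarity(a, b):
    va, vb = word_vecs[index[a]], word_vecs[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb) + 1e-9))

# In a realistic corpus, words used in similar contexts drift toward similar
# vectors; nobody writes a rule saying "robot" is related to "android".
print(similarity("robot", "android"))
print(similarity("robot", "kernel"))
```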

    It does shake my faith in the Turing test though.

    Well, yes - but it always has been the "Turing Test" vs the "Chinese Room" [wikipedia.org] model.

    I think the problem in both cases is "42". I.e. they're looking for the answer to a question we don't know how to ask yet - what is the actual mechanism behind something being "sentient"?

    The "Chinese room" model actually presumes that understanding/sentience/sapience/consciousness/whatever (whole other can of worms - let's just say 'consciousness' by now) depends on hardware and can't just be something that emerges from a program being being run. Otherwise, it's a redundant test: it's the "running program" that ie either conscious or not - whether the "human computer" replacing the hardware understands what is happening would be irrelevant. So one has to be careful not to use it as a circular argument. The burden of proof seems to lie with those who believe that there's some "secret sauce" in a biological brain that makes consciousness possible. Also, the whole argument seems to rely on the distinction between straightforward, deterministic algorithms vs. more heuristic approaches or simulations of neural networks etc. - which is probably a bit 1960s.

    The "Turing Test" seems to me more like an ethical position - if something appears to be conscious then maybe we should err on the side of caution and assume that it is, and maybe it's something we shouldn't aim too hard for until we really understand human consciousness better. It seems pretty self-evident that a sufficiently advanced "ELIZA" (i.e. demonstrably just a sophisticated syntax parser and look-up table) would be indistinguishable from a real person, considering that the primitive version was reputedly enough for some lay-persons to... let's say "willingly suspend their disbelief". That doesn't necessarily prove "consciousness" though. There's also the issue of whether you're trying to fool a layperson or an AI/Psychology expert.

    (NB: not claiming anything original here - this is all probably buried in the citation & jargon soup on Wikipedia).

    • (Score: 0) by Anonymous Coward on Tuesday June 14 2022, @06:57PM (3 children)

      by Anonymous Coward on Tuesday June 14 2022, @06:57PM (#1253261)

      The "Turing Test" seems to me more like an ethical position - if something appears to be conscious then maybe we should err on the side of caution and assume that it is, and maybe it's something we shouldn't aim too hard for until we really understand human consciousness better.

      This is not what the Turing test is about at all. The whole point of the Turing test is that we don't need to understand what it means to be conscious in order to determine if a computer program is conscious.

      The Turing test is a form of A/B testing. In the Turing test, both test subjects will be trying to convince the examiner that they are the human and the other guy is the computer. The examiner is simply trying to identify the human with better accuracy than chance alone. The assumption is that only a true intelligence will be able to successfully convince the examiner with any regularity.

      Pretty much whenever someone says "such and such passed the Turing test" they invariably omit the human subject from the test, so it's not actually the Turing test.
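
      To put the A/B framing in concrete terms, here is a rough, hypothetical harness for the protocol described above. The examiner, human and machine objects and their ask/answer/guess_human methods are stand-ins I'm assuming for illustration, not anything from the article:

```python
# A toy harness for the imitation-game setup described above: an examiner
# questions two hidden subjects (one human, one machine), both claiming to be
# human, and we check whether the examiner beats a coin flip over many trials.
# The examiner/human/machine objects are assumed stand-ins, not a real API.
import random

def run_trial(examiner, human, machine, n_questions=10):
    # Randomly hide which subject is "A" and which is "B".
    subjects = {"A": human, "B": machine}
    if random.random() < 0.5:
        subjects = {"A": machine, "B": human}

    transcript = []
    for _ in range(n_questions):
        question = examiner.ask(transcript)
        answers = {label: s.answer(question) for label, s in subjects.items()}
        transcript.append((question, answers))

    guess = examiner.guess_human(transcript)   # examiner returns "A" or "B"
    return subjects[guess] is human

def examiner_accuracy(examiner, human, machine, trials=100):
    correct = sum(run_trial(examiner, human, machine) for _ in range(trials))
    # The machine only "passes" if this stays around 0.5, i.e. the examiner
    # cannot pick out the human better than chance; one fooled judge in a
    # single session is not the Turing test.
    return correct / trials
```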

      It seems pretty self-evident that a sufficiently advanced "ELIZA" (i.e. demonstrably just a sophisticated syntax parser and look-up table) would be indistinguishable from a real person, considering that the primitive version was reputedly enough for some lay-persons to... let's say "willingly suspend their disbelief".

      If the examiner is "willingly suspending their disbelief" then this is not really a test, is it? The examiner and the human subject must actually be making an effort, come on. Otherwise we will conclude the computer is conscious if the examiner just makes every determination with a coin flip, or if the human subject just answers every inquiry by typing it into ELIZA and copying back the responses.

      • (Score: 4, Insightful) by Immerman on Tuesday June 14 2022, @10:09PM (1 child)

        by Immerman (3985) on Tuesday June 14 2022, @10:09PM (#1253310)

        The obvious flaw is that if you took a definitely conscious alien, or even just a human from a very different culture, they would almost certainly fail a Turing test against a human from the same culture as the examiner, since they would inevitably respond in ways less like what the examiner is expecting from a "normal human". And a genuinely conscious AI is almost certainly going to be far more alien than that.

        And on the other hand, there's not really any reason to assume that tricking a human examiner is actually proof of consciousness. At best it's proof that there's as much evidence for the AI being conscious as the person. But without the examiner having any real understanding of consciousness, that's not actually saying that much. Either way we're deep in the weeds of judging a mind based entirely on unsubstantiated confirmation biases.

        If an AI is capable of passing a proper Turing test, then I'd have to say we should definitely treat it as though it's a person unless we have extremely compelling evidence to the contrary, since it's managed to provide every bit as much evidence as any human can.

        But I definitely don't think it's either necessary or sufficient to accurately assess a being's consciousness. I mean, consider an AI for an autonomous research probe or something - it probably wouldn't possess any capacity for natural language skills at all, and thus couldn't possibly pass a Turing test, and yet there's no reason to believe that it would be incapable of conscious thought.

        • (Score: 0) by Anonymous Coward on Tuesday June 14 2022, @11:34PM

          by Anonymous Coward on Tuesday June 14 2022, @11:34PM (#1253327)

          The obvious flaw is that if you took a definitely conscious alien, or even just a human from a very different culture, they would almost certainly fail a Turing test against a human from the same culture as the examiner

          Yes I think there is an inherent assumption in the construction of the Turing test that a sentient AI will eventually be able to learn to understand and replicate such cultural characteristics. Particularly as the test is repeated.

          And on the other hand, there's not really any reason to assume that tricking a human examiner is actually proof of consciousness. At best it's proof that there's as much evidence for the AI being conscious as the person.

          But that's the whole point: it is precisely to give you a similar level of confidence in the computer as you have in the human subject. You don't know that the human is really sentient either.

          I mean, consider an AI for an autonomous research probe or something - it probably wouldn't possess any capacity for natural language skills at all, and thus couldn't possibly pass a Turing test, and yet there's no reason to believe that it would be incapable of conscious thought.

          What, like V'GER?

          If a computer is literally incapable of any kind of communication obviously a communication-based test is not going to work.

          But I think it's about as interesting as wondering about the sentience of a rock, which is also incapable of any kind of communication.

      • (Score: 2) by theluggage on Wednesday June 15 2022, @04:07PM

        by theluggage (1797) on Wednesday June 15 2022, @04:07PM (#1253442)

        The whole point of the turing test is that we don't need to understand what it means to be conscious in order to determine if a computer program is conscious.

        How can you determine whether A has property "X" without some definition of what property "X" is?

        "A has property X, B resembles A in certain respects , therefore B also has property X" doesn't do it for me. does not establish that B has property X. Note the "certain respects" - the Turing test presumes that any difference can be detected via remote, typed, plain text, time-delayed questions. So, it has been arbitrarily decided that property "X" doesn't depend on the timing or emphasis of spoken responses. The tester would, for instance, soon notice if once candidate was responding instantly to each question regardless of whether it was "what is your name" or "how would you describe death" and the other was lingering longer the "harder" the question was. So let's eliminate that factor (is it important to X? Dunno, because we don't understand 'X"). Or, give the computer a sophisticated speech synthesizer capable of reproducing varying tones of voice and emphasis- and suddenly the computer is facing a far harder test (technically harder or more demanding of 'X'... who can tell?)

        Or how about the potential for false positives? Real neurotypical Human A vs. Real human B with - well, any neuroatypical attributes you care to mention. Or, maybe, just a native speaker vs. anybody working in a second language (may be perfectly fluent but tripped up by the occasional colloquial issue). Experienced social media blogger vs. non-internet-user (not sure who'd lose that one...)

        The Turing test is a form of A/B testing. In the Turing test, both test subjects will be trying to convince the examiner that they are the human and the other guy is the computer.

        yes, those are typical of the requirements for conducting any sort of test in a rigorous way if you want reliable, publishable results - but you're really describing the apparatus, not the science. The fundamental assumption behind the Turing test is "if it quacks like a duck, it is a duck - and let's eliminate the traditional 'walks like a duck, looks like a duck' requirements because they'd be a dead giveaway".

        Remember, you'd also have to repeat each experiment with an adequate number of different testers - each of whom could have a different pre-conception of what differences to look for. (If you've read Do Androids Dream of Electric Sheep you may have noticed that the questions in the VK test used to identify androids were all suspiciously "on message" for the religion practiced by most of the human population in which androids were not able/allowed to participate - a lovely little detail that wasn't obvious in the film).

        My feeling is that the Turing test is the same sort of thought experiment as the trolley problem, Schroedinger's cat or, yes, the Chinese room - its value is to promote philosophical discussion rather than propose an actual experiment (that's more obvious of the last two examples in which the 'experiments' wouldn't actually yield a measurable result).

    • (Score: 3, Insightful) by darkfeline on Wednesday June 15 2022, @03:56AM

      by darkfeline (1030) on Wednesday June 15 2022, @03:56AM (#1253367) Homepage

      If it behaves exactly like how we expect a sentient being to, then it is sentient for all of our purposes. Whether it is "actually sentient" is unanswerable.

      --
      Join the SDF Public Access UNIX System today!
    • (Score: 0) by Anonymous Coward on Wednesday June 15 2022, @07:03PM

      by Anonymous Coward on Wednesday June 15 2022, @07:03PM (#1253476)

      The fatal flaw in the "Chinese Room" model is that it assumes you can fill an enclosed room with enough erasers