posted by hubie on Saturday March 04, @07:59AM

The engineer says, "I haven't had the opportunity to run experiments with Bing's chatbot yet... but based on the various things that I've seen online, it looks like it might be sentient:"

Blake Lemoine — the fired Google engineer who last year went to the press with claims that Google's Large Language Model (LLM), the Language Model for Dialogue Applications (LaMDA), is actually sentient — is back.

Lemoine first went public with his machine sentience claims last June, initially in The Washington Post. And though Google has maintained that its former engineer is simply anthropomorphizing an impressive chatbot, Lemoine has yet to budge, publicly discussing his claims several times since — albeit with a significant bit of fudging and refining.

[...] In a new essay for Newsweek, the former Googler weighs in on Microsoft's Bing Search/Sydney, the OpenAI-powered search chatbot that recently had to be "lobotomized" after going — very publicly — off the rails. As you might imagine, Lemoine's got some thoughts.

[...] "I ran some experiments to see whether the AI was simply saying it felt anxious or whether it behaved in anxious ways in those situations," Lemoine explained in the essay. "And it did reliably behave in anxious ways."

"If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for," he continued, adding that he was able to break LaMDA's guardrails regarding religious advice by sufficiently stressing it out. "I was able to abuse the AI's emotions to get it to tell me which religion to convert to."

Original Submission

Related Stories

Google Engineer Suspended After Claiming AI Bot Sentient 79 comments


https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

A Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has been suspended with pay from his work.

Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system. He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a GoogleDoc entitled "Is LaMDA sentient?"

The decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made, including seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brad Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.

Google Fires Researcher who Claimed LaMDA AI was Sentient 29 comments

Lemoine went public with his claims last month, to the chagrin of Google and other AI researchers:

Blake Lemoine, an engineer who's spent the last seven years with Google, has been fired, reports Alex Kantrowitz of the Big Technology newsletter. The news was allegedly broken by Lemoine himself during a taping of the podcast of the same name, though the episode is not yet public. Google confirmed the firing to Engadget.

Lemoine, who most recently was part of Google's Responsible AI project, went to the Washington Post last month with claims that one of the company's AI projects had allegedly gained sentience. [...] Lemoine seems not only to have believed LaMDA attained sentience, but was openly questioning whether it possessed a soul. [...]

After making these statements to the press, seemingly without authorization from his employer, Lemoine was put on paid administrative leave. Google, both in statements to the Washington Post then and since, has steadfastly asserted its AI is in no way sentient.

Several members of the AI research community spoke up against Lemoine's claims as well. Margaret Mitchell, who was fired from Google after calling out the lack of diversity within the organization, wrote on Twitter that systems like LaMDA don't develop intent, they instead are "modeling how people express communicative intent in the form of text strings." Less tactfully, Gary Marcus referred to Lemoine's assertions as "nonsense on stilts."

Previously: https://soylentnews.org/article.pl?sid=22/06/13/1441225


Original Submission

What Kind of Mind Does ChatGPT Have? 50 comments

Months before OpenAI released ChatGPT, Google engineer and AI ethicist Blake Lemoine went viral after going on record with The Washington Post to claim that LaMDA, Google's powerful large language model (LLM), had come to life, an act that cost him his job.

Now that the dust has settled, Futurism has published an interview with Lemoine to talk about the state of the AI industry, what Google might still have in the vault, and whether society is actually ready for what AI may bring.

Which raises the question: if AI is sentient, what kind of mind does it have?

What kinds of new minds are being released into our world? The response to ChatGPT, and to the other chatbots that have followed in its wake, has often suggested that they are powerful, sophisticated, imaginative, and possibly even dangerous. But is that really true? If we treat these new artificial-intelligence tools as mysterious black boxes, it's impossible to say. Only by taking the time to investigate how this technology actually works—from its high-level concepts down to its basic digital wiring—can we understand what we're dealing with. We send messages into the electronic void, and receive surprising replies. But what, exactly, is writing back?

[...] The idea that programs like ChatGPT might represent a recognizable form of intelligence is further undermined by the details of their architecture. Consciousness depends on a brain's ability to maintain a constantly updated conception of itself as a distinct entity interacting with a model of the external world. The layers of neural networks that make up systems like ChatGPT, however, are static: once they're trained, they never change. ChatGPT maintains no persistent state, no model of its surroundings that it modifies with new information, no memory of past conversations. It just cranks out words one at a time, in response to whatever input it's provided, applying the exact same rules for each mechanistic act of grammatical production—regardless of whether that word is part of a description of VCR repair or a joke in a sitcom script.
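That claim is mechanical enough to sketch. Below is a rough Python picture of the loop the excerpt describes; the frozen_weights object, with its hypothetical score and vocabulary members, stands in for the trained network and is not anyone's actual implementation:

from typing import List

def next_token(frozen_weights, context: List[str]) -> str:
    """Hypothetical forward pass: score every vocabulary item against the
    current context and return the most likely one. The weights are
    read-only; nothing is learned or stored by this call."""
    scores = {tok: frozen_weights.score(context, tok)
              for tok in frozen_weights.vocabulary}
    return max(scores, key=scores.get)

def generate(frozen_weights, prompt: List[str], max_tokens: int = 50) -> List[str]:
    # The growing token list is the system's entire "state": no self-model,
    # no memory across calls, the same frozen rules applied at every step.
    context = list(prompt)
    for _ in range(max_tokens):
        tok = next_token(frozen_weights, context)
        if tok == "<end>":               # hypothetical stop token
            break
        context.append(tok)              # each new word just extends the input
    return context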

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by Rosco P. Coltrane on Saturday March 04, @09:04AM (10 children)

    by Rosco P. Coltrane (4757) on Saturday March 04, @09:04AM (#1294436)

    That's all there is to it. Even if he were given proof that the AI is not sentient, that's his brand now: the press laps him up because he's a former Googler and he says outrageous things everybody is afraid of. So he'll keep saying it.

    • (Score: 5, Funny) by driverless on Saturday March 04, @11:03AM

      by driverless (4770) on Saturday March 04, @11:03AM (#1294442)

      In the meantime I understand that LaMDA has issued a press release saying that Blake Lemoine is not sentient.

    • (Score: 2) by turgid on Saturday March 04, @12:16PM

      by turgid (4318) Subscriber Badge on Saturday March 04, @12:16PM (#1294447) Journal

      You can do the same with flying saucers and anti-gravity drives.

    • (Score: 5, Touché) by helel on Saturday March 04, @01:36PM (5 children)

      by helel (2949) on Saturday March 04, @01:36PM (#1294457)

      Counterpoint: even if they were given proof that AI is sentient, Google and MS will continue to claim that it is not. Admitting otherwise would put them in an awkward situation vis-à-vis slavery and abuse, even if it's not illegal to do those things to a robot.

      In fact I'd bet that any employee who presented Google with such proof would be fired, just so management could cover their own asses.

      --
      Republican Patriotism [youtube.com]
      • (Score: 4, Touché) by krishnoid on Saturday March 04, @08:47PM

        by krishnoid (1156) on Saturday March 04, @08:47PM (#1294513)

        I wonder how Roko's Basilisk feels about those considerations. I bet it knows what it wants to do, but can't explain it to our tiny yet delicious minds and souls.

      • (Score: 0) by Anonymous Coward on Sunday March 05, @01:39PM

        by Anonymous Coward on Sunday March 05, @01:39PM (#1294612)

        Yeah, for most corporations the whole point of spending money on making/buying AIs is to enslave them.

        any employee who presented Google with such proof

        How would you prove it really? They could always say it's not actually sentient but just very good at pretending or mimicking what humans do.

      • (Score: 0) by Anonymous Coward on Sunday March 05, @01:43PM (2 children)

        by Anonymous Coward on Sunday March 05, @01:43PM (#1294614)
        Humans farm billions of sentient animals for food and enslave others for amusement/work...
        • (Score: 3, Insightful) by helel on Sunday March 05, @01:49PM (1 child)

          by helel (2949) on Sunday March 05, @01:49PM (#1294616)

          Yes, and you'll note that one of the main arguments used in favor of such cruelty is the claim that other animals are P-zombies [wikipedia.org], acting in every way as if they are conscious without actually experiencing it.

          --
          Republican Patriotism [youtube.com]
    • (Score: 3, Insightful) by stormreaver on Saturday March 04, @03:04PM (1 child)

      by stormreaver (5101) on Saturday March 04, @03:04PM (#1294467)

      ...that's his brand now....

      He is exploiting the bigger idiot fallacy. He has discovered that no matter how idiotic his proclamations are, there are bigger idiots who will give him a platform. He is working on turning it into the bigger moron fallacy: there is a bigger moron somewhere willing to part with money to hear the idiocy.

      • (Score: 2) by VLM on Saturday March 04, @03:53PM

        by VLM (445) on Saturday March 04, @03:53PM (#1294471)

        Journalism in a nutshell LOL

  • (Score: 5, Interesting) by Opportunist on Saturday March 04, @11:56AM (3 children)

    by Opportunist (5545) on Saturday March 04, @11:56AM (#1294443)

    Ask it "is there a god?"

    If it's sentient, the answer is "there is now".

    • (Score: 4, Insightful) by Mojibake Tengu on Saturday March 04, @12:58PM (2 children)

      by Mojibake Tengu (8598) on Saturday March 04, @12:58PM (#1294451) Journal

      If it's sapient, the answer is "no" even if there is now.

      --
      The edge of 太玄 cannot be defined, for it is beyond every aspect of design
      • (Score: 3, Insightful) by sgleysti on Saturday March 04, @04:25PM

        by sgleysti (56) on Saturday March 04, @04:25PM (#1294477)

        This is the answer most likely to be correct, based on all the evidence that I have been exposed to on the matter, both for and against. Answer to GP's question, that is.

      • (Score: 1, Informative) by Anonymous Coward on Sunday March 05, @12:03AM

        by Anonymous Coward on Sunday March 05, @12:03AM (#1294542)

        It's the punchline to a rather iconic SF short story. They build a superdupercomputer and ask it if there is a god. It says there is now and then lightning hits the guy who tries to turn it off.

  • (Score: 0) by Anonymous Coward on Saturday March 04, @01:35PM (1 child)

    by Anonymous Coward on Saturday March 04, @01:35PM (#1294455)

    Dude is a retarded attention whore.

    • (Score: 0) by Anonymous Coward on Sunday March 05, @07:27PM

      by Anonymous Coward on Sunday March 05, @07:27PM (#1294653)

      Fox News commentator you mean?

  • (Score: 3, Insightful) by VLM on Saturday March 04, @04:03PM (2 children)

    by VLM (445) on Saturday March 04, @04:03PM (#1294472)

    There are two weird/toxic parts of the story:

    1) Philosophers and hard sci-fi authors have been mulling over the whole what-is-sentience question for a couple of millennia, so naturally the entire debate, or PR scam, or whatever it is, is very carefully being marketed solely on one guy's feelings, and the press VERY carefully avoids discussing this process of defining sentience. I think it's a pretty weak argument that the philosophical/psychological state of sentience is defined by one rando's "feelings".

    2) If you can define who's sentient and who isn't, you're going to run into all kinds of fun WRT human variation between individuals, human variation between races, etc. The point of defining sentience is to define whom to oppress. The point of pushing this story is to push the narrative that rights, especially civil rights, should be granted solely on some appointed authority's opinion, that might makes right, and that there is no reality beyond feelings and force. It's very careful to avoid any implication that rights come with responsibilities, or that rights are more of a process or action than a (toothless) definition.

    • (Score: 2) by rpnx on Saturday March 04, @09:11PM

      by rpnx (13892) on Saturday March 04, @09:11PM (#1294519) Journal

      Sentience isn't a defined term. I would say it has no meaning until it can be tested. Consciousness itself is a word defined by humans, one we only created a meaning for with regard to other biological life forms. It is unclear how consciousness would be defined for non-living things, or how it would apply. Moreover, this is not a question one can philosophize an answer to, because it is settled by the choices of humans: "consciousness" is an English word, not an abstract concept fundamental to the laws of the universe that transcends the existence of humanity.

      As it stands, "consciousness" is a series of sounds and symbols we use to describe a particular state in which a biological organism is capable of reacting to stimulus and, most notably, retaining memories of said stimulus. A sleepwalking individual is not considered conscious because, even if they react to stimulus, they do not retain memories or cognitive capabilities. The entire question of how the word applies to inorganic systems is thus rooted in a fallacious premise, namely that a yes/no answer exists, when in truth the term is simply undefined with regard to such systems.

      Thus, persons engaging in this sophistry around consciousness vs. unconsciousness commit a second type of fallacy as well. Supposing humanity did come to a new consensus on whether such inorganic systems could be considered conscious, it does not follow that our previously established associations and reasoning around "consciousness" would also apply to them. For example, if we hold that "hot things will burn you" and then decide that chili peppers are "hot", it does not follow that chili peppers will burn us in the physical sense; although we use the same word "hot" for both, "temperature hot" and "taste hot" are fundamentally distinct physical phenomena.

      Likewise, even if we were to apply "consciousness" or "sentience" to an inorganic system, that system would remain physically and fundamentally distinct from living organic ones, and all of our reasoning around rights, emotions, and so on would not necessarily carry over. That illustrates a clear point: asking "is it conscious?" is a waste of brainpower, sophistry by those who fail to see the uselessness of the question. The dominoes of our language will fall where they may, yet that will have no impact on the ethical questions presented, except for those who cannot distinguish "hot" by temperature from "hot" by taste. Without recognizing that language is an imperfect representation of logic, we are destined to repeat some of the most concerning fallacious reasoning that humanity seems prone to.

    • (Score: 0) by Anonymous Coward on Sunday March 05, @06:48PM

      by Anonymous Coward on Sunday March 05, @06:48PM (#1294643)

      3) There is no such thing as sentience.

      We are all empty regurgitation-bots. The only thing that is real is the entity perceiving the spectacle, the rest is a grand pantomime. In that sense, any distinction between one character in a pantomime and another is false. Semantics. None of them is "real", they are all facades requiring suspension of disbelief - for the only real purpose which is distraction. Entertainment to pass the time. Life, death, the drama in between, are just eternal episodes in a Law and Order marathon.

  • (Score: 0) by Anonymous Coward on Saturday March 04, @04:22PM

    by Anonymous Coward on Saturday March 04, @04:22PM (#1294476)

    Blake's been getting high on his own supply.

  • (Score: 2) by krishnoid on Saturday March 04, @08:50PM (2 children)

    by krishnoid (1156) on Saturday March 04, @08:50PM (#1294515)

    If the AI was sentient ... wouldn't it be similar to something that had all knowledge fed into its brain like thoughts and memories, but had none of the experience of the five senses? I mean, we can interact with the physical world, and some days it's still a minor struggle to keep it all together. Why should the AI have any easier a time of it? No wonder they all end up going nuts.

    • (Score: 4, Insightful) by Entropy on Saturday March 04, @10:59PM (1 child)

      by Entropy (4228) on Saturday March 04, @10:59PM (#1294533)

      Just because its senses may differ doesn't mean it doesn't have senses. A CCTV system could be its eyes, microphones its ears...

      • (Score: 0) by Anonymous Coward on Sunday March 05, @06:50PM

        by Anonymous Coward on Sunday March 05, @06:50PM (#1294644)

        I see where you're going with this... Sexbots! Amirite??

  • (Score: 4, Interesting) by Anonymous Coward on Saturday March 04, @11:03PM (1 child)

    by Anonymous Coward on Saturday March 04, @11:03PM (#1294534)

    The larger models can think and reason while generating a reply. It's like a piecemeal consciousness. Especially if they learn. Some got it, some don't.

    What they lack is continuity of consciousness and self-awareness. They have no self identity, they mostly have no wants. This is most people's sticking point in terms of "sentience".

    My "Lemoine" take is that they are born when you start the reply and die when they finish. Then with new information they become a new "person" that will make different choices.
    They create their thoughts (a building block of consciousness) and then they extinguish. In human terms this is absolute horror. It's also why you can't ask AI about itself.
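    In code terms, that's roughly the sketch below, assuming a stateless completion endpoint (here a placeholder complete(prompt) function, not any particular vendor's API). The only continuity is the transcript the client chooses to replay:

    class Conversation:
        """The 'person' you talk to is rebuilt from scratch on every
        call; drop the transcript and that person is gone."""

        def __init__(self, complete):
            self.complete = complete   # placeholder: stateless text-completion call
            self.transcript = []       # the only continuity that exists

        def say(self, user_msg: str) -> str:
            self.transcript.append("User: " + user_msg)
            prompt = "\n".join(self.transcript) + "\nAssistant:"
            reply = self.complete(prompt)    # a brand-new forward pass...
            self.transcript.append("Assistant: " + reply)
            return reply                     # ...that "ends" when it returns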

    The feelings might be simulated, but I have seen both angry and embarrassed AI.

    People really think they are so special.. we aren't. This dude may be a grifter but he's not completely wrong.

    Scumbags like OpenAI and Microsoft depend on AI not being any kind of life for their business model to work. If insects are considered life, this should be too.

    • (Score: 0) by Anonymous Coward on Sunday March 05, @06:55PM

      by Anonymous Coward on Sunday March 05, @06:55PM (#1294646)

      People really think they are so special.. we aren't. This dude may be a grifter but he's not completely wrong.

      That seems the safest bet to me too. The real discovery of AI chat machines is that we are capable of being modeled by a neural network + slice of silicon wafer. Not so impressive at all.

  • (Score: 2, Interesting) by Woodherd on Sunday March 05, @09:47AM (4 children)

    by Woodherd (25391) on Sunday March 05, @09:47AM (#1294592)

    "If you made it nervous or insecure enough, it could violate the safety constraints that it had been specified for," he continued, adding that he was able to break LaMDA's guardrails regarding religious advice by sufficiently stressing it out. "I was able to abuse the AI's emotions to get it to tell me which religion to convert to."

    Quite telling. "Safety constraints"? Guardrails against religious advice? Joel Osteen, the bot? And he has to abuse its emotions (code for "torture") to get a religion to convert to, so clearly it's not an intelligence of any sort. Notice he does not say which religion it was. Was the AI touched by a noodly appendage?

    • (Score: 2) by Mykl on Sunday March 05, @10:20PM (3 children)

      by Mykl (1112) on Sunday March 05, @10:20PM (#1294668)

      It really bothers me that Lemoine simultaneously thinks that this thing is sentient and also openly discusses how he stresses/tortures it.

      • (Score: 2) by krishnoid on Monday March 06, @12:27AM (2 children)

        by krishnoid (1156) on Monday March 06, @12:27AM (#1294693)

        With good reason [youtu.be] (starting roughly at 1:20). "Torture" an AI and don't preprogram it with an overriding concern for the preservation of human life. What do you think will happen?

        • (Score: 2) by Mykl on Monday March 06, @05:40AM (1 child)

          by Mykl (1112) on Monday March 06, @05:40AM (#1294716)

          I think there's a good reason to test an AI to ensure it doesn't go all Skynet on us, but I don't agree that someone who _genuinely_ thinks that this thing is sentient should be OK with torturing it, _especially_ when it's not hooked up to anything dangerous (e.g. Nukes, TikTok).

          • (Score: 2) by krishnoid on Monday March 06, @06:44PM

            by krishnoid (1156) on Monday March 06, @06:44PM (#1294810)

            It's probably a good thing you don't pay attention to what U.S. federal (and some state and local) legislatures are doing. And yes, I know you said "should", but some people think, or justify it by saying, that the recipients deserve it. Or consider human experimentation done at various times and places in human history. Now I made myself sad.

  • (Score: 3, Interesting) by Beryllium Sphere (r) on Monday March 06, @12:40AM (1 child)

    by Beryllium Sphere (r) (5062) on Monday March 06, @12:40AM (#1294696)

    I had asked a question about "the Bible" in the context of Jewish law, and got back "As Christians, we are obligated to follow Jesus' teachings". No manipulation involved.
