
SoylentNews is people

posted by Fnord666 on Tuesday July 19 2022, @02:52AM   Printer-friendly
from the what-about-sentient-chat-boxes? dept.

Some people perceive robots that display emotions as intentional agents, study finds:

When robots appear to engage with people and display human-like emotions, people may perceive them as capable of "thinking," or acting on their own beliefs and desires rather than their programs, according to research published by the American Psychological Association.

"The relationship between anthropomorphic shape, human-like behavior and the tendency to attribute independent thought and intentional behavior to robots is yet to be understood," said study author Agnieszka Wykowska, PhD, a principal investigator at the Italian Institute of Technology. "As artificial intelligence increasingly becomes a part of our lives, it is important to understand how interacting with a robot that displays human-like behaviors might induce higher likelihood of attribution of intentional agency to the robot."

[...] In the first two experiments, the researchers remotely controlled iCub's actions so it would behave gregariously, greeting participants, introducing itself and asking for the participants' names. Cameras in the robot's eyes were also able to recognize participants' faces and maintain eye contact.

In the third experiment, the researchers programmed iCub to behave more like a machine while it watched videos with the participants. The cameras in the robot's eyes were deactivated so it could not maintain eye contact and it only spoke recorded sentences to the participants about the calibration process it was undergoing. [...]

The researchers found that participants who watched videos with the human-like robot were more likely to rate the robot's actions as intentional, rather than programmed, while those who only interacted with the machine-like robot were not. This shows that mere exposure to a human-like robot is not enough to make people believe it is capable of thoughts and emotions. It is human-like behavior that might be crucial for being perceived as an intentional agent.

According to Wykowska, these findings show that people might be more likely to believe artificial intelligence is capable of independent thought when it creates the impression that it can behave just like humans. This could inform the design of social robots of the future, she said.

Previously:
Google Engineer Suspended After Claiming AI Bot Sentient

Journal Reference:
Serena Marchesi, Davide De Tommaso, Jairo Perez-Osorio, and Agnieszka Wykowska, Belief in Sharing the Same Phenomenological Experience Increases the Likelihood of Adopting the Intentional Stance Towards a Humanoid Robot, Technology, Mind, and Behavior, 2022. DOI: 10.1037/tmb0000072.


Original Submission

Related Stories

Google Engineer Suspended After Claiming AI Bot Sentient 79 comments

https://www.theguardian.com/technology/2022/jun/12/google-engineer-ai-bot-sentient-blake-lemoine

A Google engineer who claimed a computer chatbot he was working on had become sentient and was thinking and reasoning like a human being has been suspended with pay from his work.

Google placed Blake Lemoine on leave last week after he published transcripts of conversations between himself, a Google "collaborator", and the company's LaMDA (language model for dialogue applications) chatbot development system. He said LaMDA engaged him in conversations about rights and personhood, and Lemoine shared his findings with company executives in April in a GoogleDoc entitled "Is LaMDA sentient?"

The decision to place Lemoine, a seven-year Google veteran with extensive experience in personalization algorithms, on paid leave was made following a number of "aggressive" moves the engineer reportedly made, including seeking to hire an attorney to represent LaMDA, the newspaper says, and talking to representatives from the House judiciary committee about Google's allegedly unethical activities.

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist. Brad Gabriel, a Google spokesperson, also strongly denied Lemoine's claims that LaMDA possessed any sentient capability.

Nvidia Announces “Moonshot” to Create Embodied Human-Level AI in Robot Form 16 comments

https://arstechnica.com/information-technology/2024/03/nvidia-announces-moonshot-to-create-embodied-human-level-ai-in-robot-form/

In sci-fi films, the rise of humanlike artificial intelligence often comes hand in hand with a physical platform, such as an android or robot. While the most advanced AI language models so far seem mostly like disembodied voices echoing from an anonymous data center, they might not remain that way for long. Companies like Google, Figure, Microsoft, Tesla, and Boston Dynamics are working toward giving AI models a body. This is called "embodiment," and AI chipmaker Nvidia wants to accelerate the process.

[...] To that end, Nvidia announced Project GR00T, a general-purpose foundation model for humanoid robots. As a type of AI model itself, Nvidia hopes GR00T (which stands for "Generalist Robot 00 Technology" but sounds a lot like a famous Marvel character) will serve as an AI mind for robots, enabling them to learn skills and solve various tasks on the fly. In a tweet, Nvidia researcher Linxi "Jim" Fan called the project "our moonshot to solve embodied AGI in the physical world."

[...] According to Fan, Project GR00T is a cornerstone of his newly founded GEAR Lab (short for "Generalist Embodied Agent Research"). During his time at Nvidia, Fan has specialized in using simulations of physical worlds to train AI models, and now that approach is extending to robotics. "At GEAR, we are building generally capable agents that learn to act skillfully in many worlds, virtual and real," wrote Fan in a tweet. "Join us on the journey to land on the moon."

This discussion was created by Fnord666 (652) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Interesting) by HiThere on Tuesday July 19 2022, @03:21AM (2 children)

    by HiThere (866) Subscriber Badge on Tuesday July 19 2022, @03:21AM (#1261697) Journal

    FWIW, I had a girlfriend who, at least somewhat seriously, attributed intention to her car's actions. Additionally, my wife would speak as if the car knew the way to a place. She was aware that it was actually a way of triggering muscle memory, but that's not the way she usually talked about it.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by FatPhil on Tuesday July 19 2022, @07:26PM

      Back when I used to build my own PCs from components, I knew that the thing would never run until the first blood sacrifice unto, either into or onto, it had been performed. They never got greedy and demanded more, so I didn't mind the occasional loss of a drop.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by mcgrew on Wednesday July 20 2022, @07:20PM

      by mcgrew (701) <publish@mcgrewbooks.com> on Wednesday July 20 2022, @07:20PM (#1261987) Homepage Journal

      Anthropomorphism [wikipedia.org] and Animism [wikipedia.org], two powerful psychological forces, are behind it. My next journal (still working on it), Artificial Insanity, is about this very thing.

      --
      mcgrewbooks.com mcgrew.info nooze.org
  • (Score: 2) by darkfeline on Tuesday July 19 2022, @04:36AM (1 child)

    by darkfeline (1030) on Tuesday July 19 2022, @04:36AM (#1261701) Homepage

    I have met humans who would have been bested by non-human animals and perhaps even programmed agents in mental/emotional complexity.

    What is this study even saying? Humans try to mentally model the behavior of anything. Whether the flow of that model is due to intention or physics, is that particularly relevant? Releasing an apple causes it to fall. Does it fall due to physics, God, or intention? If I poke a dog it may bite me. Is that due to physics or intention? What's the difference?

    --
    Join the SDF Public Access UNIX System today!
    • (Score: 3, Informative) by Opportunist on Tuesday July 19 2022, @05:33AM

      by Opportunist (5545) on Tuesday July 19 2022, @05:33AM (#1261706)

      Is that due to physics or intention? What's the difference?

      Repeatability. If the same inputs reliably give you the same outputs, and we're in the macrocosm where we can rule out weird quantum effects playing a relevant role, it's physics.
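      The repeatability test described above can be sketched in a few lines of Python (the agent functions here are hypothetical stand-ins, not from the study):

```python
import random

def deterministic_agent(x):
    # A "physics-like" system: the same input always yields the same output.
    return x * 2 + 1

def noisy_agent(x):
    # A system whose output varies from run to run (here, via randomness).
    return x * 2 + random.random()

def is_repeatable(agent, inputs, trials=10):
    """Return True if the agent gives identical outputs for identical
    inputs across repeated trials."""
    for x in inputs:
        baseline = agent(x)
        if any(agent(x) != baseline for _ in range(trials - 1)):
            return False
    return True

print(is_repeatable(deterministic_agent, [1, 2, 3]))  # True
print(is_repeatable(noisy_agent, [1, 2, 3]))
```

      By this criterion, the deterministic agent looks like "physics," while the noisy one fails the test; of course, a purely deterministic program can still be mistaken for an intentional agent, which is exactly the study's point.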

  • (Score: 0) by Anonymous Coward on Tuesday July 19 2022, @06:54AM (2 children)

    by Anonymous Coward on Tuesday July 19 2022, @06:54AM (#1261711)

    I could swear the copier at work had it in for me.

    • (Score: 2) by Freeman on Tuesday July 19 2022, @03:09PM

      by Freeman (732) on Tuesday July 19 2022, @03:09PM (#1261748) Journal

      Copiers are used and abused. Management doesn't like replacing them, because they're expensive. *Insert all Office woes regarding copiers here*

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 0) by Anonymous Coward on Tuesday July 19 2022, @09:18PM

      by Anonymous Coward on Tuesday July 19 2022, @09:18PM (#1261809)

      It was just getting even with you for all of those copies you made of your backside.

  • (Score: 2) by Frosty Piss on Tuesday July 19 2022, @09:35AM

    by Frosty Piss (4971) on Tuesday July 19 2022, @09:35AM (#1261717)

    They controlled the “Cube” with humans providing the dialogue and the guinea pigs figured it out? Shocked I tell you!
