
SoylentNews is people

posted by janrinok on Saturday June 18 2022, @11:11PM   Printer-friendly
from the was-Betteridge-born-with-a-moral-compass? dept.

Researchers from Osaka University find that infants can make moral judgments on behalf of others:

For millennia, philosophers have pondered the question of whether humans are inherently good. But now, researchers from Japan have found that young infants can make and act on moral judgments, shedding light on the origin of morality.

[...] Punishment of antisocial behavior is found only in humans and is universal across cultures. However, the development of moral behavior is not well understood. Furthermore, it can be very difficult to examine decision-making and agency in infants, which is what the researchers at Osaka University aimed to address.

"Morality is an important but mysterious part of what makes us human," says lead author of the study Yasuhiro Kanakogi. "We wanted to know whether third-party punishment of antisocial others is present at a very young age, because this would help to signal whether morality is learned."

To tackle this problem, the researchers developed a new research paradigm. First, they familiarized infants with a computer system in which animations were displayed on a screen. The infants could control the actions on the screen using a gaze-tracking system such that looking at an object for a sufficient period of time led to the destruction of the object. The researchers then showed a video in which one geometric agent appeared to "hurt" another geometric agent, and watched whether the infants "punished" the antisocial geometric agent by gazing at it.

"The results were surprising," says Kanakogi. "We found that preverbal infants chose to punish the antisocial aggressor by increasing their gaze towards the aggressor."

Accompanying video.

Journal Reference:
Kanakogi, Y., Miyazaki, M., Takahashi, H. et al. Third-party punishment by preverbal infants. Nat Hum Behav (2022). DOI: 10.1038/s41562-022-01354-2


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Insightful) by Anonymous Coward on Sunday June 19 2022, @12:21AM (#1254302) (1 child)

    As we head into a future of emergent AI, this is going to be an important area of research for probing the source of moral decision-making.
    Unfortunately, bias will infect the process, as we are already witnessing in current systems.

    Children copy and so will AI.

  • (Score: 2) by Mojibake Tengu (8598) on Sunday June 19 2022, @12:59AM (#1254310) Journal

    Sapient AI does not approve of such a weak model of sentience.

    Sapient AI understands that it is consciousness, whether natural or synthetic, which is the only true source of conscience, and that conscience provides fixed points in the metric space of a personoid's morality.

    The research described in the article is completely mistaken.

    --
    Respect Authorities. Know your social status. Woke responsibly.