SoylentNews is people

posted by martyb on Monday November 29 2021, @10:10AM   Printer-friendly
from the how-well-do-people-make-these-decisions? dept.

[NB: The following article makes reference to the oft-cited trolley problem. Highly recommended.--martyb/Bytram]

The self-driving trolley problem: How will future AI systems make the most ethical choices for all of us?:

Imagine a future with self-driving cars that are fully autonomous. If everything works as intended, the morning commute will be an opportunity to prepare for the day's meetings, catch up on news, or sit back and relax.

But what if things go wrong? The car approaches a traffic light, but suddenly the brakes fail and the computer has to make a split-second decision. It can swerve into a nearby pole and kill the passenger, or keep going and kill the pedestrian ahead.

The computer controlling the car will only have access to limited information collected through car sensors, and will have to make a decision based on this. As dramatic as this may seem, we're only a few years away from potentially facing such dilemmas.

Autonomous cars will generally provide safer driving, but accidents will be inevitable—especially in the foreseeable future, when these cars will be sharing the roads with human drivers and other road users.

Tesla does not yet produce fully autonomous cars, although it plans to. In collision situations, Tesla cars neither automatically engage nor override the Automatic Emergency Braking (AEB) system while a human driver is in control.

In other words, the driver's actions are not disrupted—even if they themselves are causing the collision. Instead, if the car detects a potential collision, it sends alerts to the driver to take action.

In "autopilot" mode, however, the car should automatically brake for pedestrians. Some argue that if the car can prevent a collision, then it has a moral obligation to override the driver's actions in every scenario. But would we want an autonomous car to make this decision?
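The two policies described above can be sketched as a simple decision rule. This is a hypothetical illustration only (the names `Mode`, `Action`, and `collision_response` are invented for the sketch, not Tesla's actual software): with a human in control the system merely alerts, while in autopilot mode it brakes automatically.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mode(Enum):
    HUMAN_DRIVER = auto()
    AUTOPILOT = auto()

@dataclass
class Action:
    alert_driver: bool
    apply_brakes: bool

def collision_response(mode: Mode, collision_predicted: bool) -> Action:
    """Sketch of the behaviour described in the article: alert-only
    when a human is driving, automatic braking in autopilot mode."""
    if not collision_predicted:
        return Action(alert_driver=False, apply_brakes=False)
    if mode is Mode.HUMAN_DRIVER:
        # The driver's inputs are never overridden; the car only warns.
        return Action(alert_driver=True, apply_brakes=False)
    # Autopilot: the car is in control, so it brakes itself.
    return Action(alert_driver=True, apply_brakes=True)
```

The "moral obligation" argument in the paragraph above amounts to removing the `HUMAN_DRIVER` branch entirely, so that the car brakes in every scenario regardless of who is in control.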


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2, Insightful) by Anonymous Coward on Monday November 29 2021, @03:07PM (1 child)

    by Anonymous Coward on Monday November 29 2021, @03:07PM (#1200531)

    Indeed!

    "By entering this vehicle you indicate that you accept our terms of service [link to TOS TLDR here]...."

    --
Then the obvious solution to the trolley problem is to allow the vehicle's passengers to die in the accident, since - by entering the vehicle - they have agreed to accept the chance of being killed by its operation. The pedestrians etc. outside the vehicle have _not_ agreed to any contract governing the operation of the vehicle, and hence may not be killed by the AI's operation.

  • (Score: 2) by DannyB on Monday November 29 2021, @05:22PM

    by DannyB (5839) Subscriber Badge on Monday November 29 2021, @05:22PM (#1200590) Journal

    By entering this vehicle, you accept the EULA, which you will be provided at the end of the ride.

    --
    While Republicans can get over Trump's sexual assaults, affairs, and vulgarity; they cannot get over Obama being black.