SoylentNews
SoylentNews is people
https://soylentnews.org/

Title    The Self-Driving Trolley Problem: How Will AI Systems Make the Most Ethical Choices for All of Us?
Date    Monday November 29 2021, @10:10AM
Author    martyb
from the how-well-do-people-make-these-decisions? dept.
https://soylentnews.org/article.pl?sid=21/11/28/1912230

upstart writes:

[NB: The following article makes reference to the oft-cited Trolley problem. Highly recommended.--martyb/Bytram]

The self-driving trolley problem: How will future AI systems make the most ethical choices for all of us?:

Imagine a future with self-driving cars that are fully autonomous. If everything works as intended, the morning commute will be an opportunity to prepare for the day's meetings, catch up on news, or sit back and relax.

But what if things go wrong? The car approaches a traffic light, but suddenly the brakes fail and the computer has to make a split-second decision. It can swerve into a nearby pole and kill the passenger, or keep going and kill the pedestrian ahead.

The computer controlling the car will only have access to limited information collected through car sensors, and will have to make a decision based on this. As dramatic as this may seem, we're only a few years away from potentially facing such dilemmas.

Autonomous cars will generally provide safer driving, but accidents will be inevitable—especially in the foreseeable future, when these cars will be sharing the roads with human drivers and other road users.

Tesla does not yet produce fully autonomous cars, although it plans to. In collision situations, Tesla cars neither automatically engage nor disengage the Automatic Emergency Braking (AEB) system while a human driver is in control.

In other words, the driver's actions are not overridden, even if they themselves are causing the collision. Instead, if the car detects a potential collision, it alerts the driver to take action.

In "autopilot" mode, however, the car should automatically brake for pedestrians. Some argue if the car can prevent a collision, then there is a moral obligation for it to override the driver's actions in every scenario. But would we want an autonomous car to make this decision?
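The alert-versus-override behavior described above can be sketched as a simple decision policy. This is a purely illustrative assumption of how such logic might be structured; the names, states, and actions are hypothetical and do not reflect Tesla's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    """Minimal state for the sketch: who is driving, and is a collision predicted?"""
    autopilot_engaged: bool
    collision_predicted: bool

def collision_response(state: VehicleState) -> str:
    """Return the action taken when the sensors predict a collision.

    Hypothetical policy mirroring the article's description:
    brake automatically only in autopilot mode; otherwise only warn.
    """
    if not state.collision_predicted:
        return "continue"
    if state.autopilot_engaged:
        # In "autopilot" mode, the car itself brakes for the obstacle.
        return "automatic_emergency_brake"
    # With a human in control, the car alerts but does not override.
    return "alert_driver"
```

The open question in the article is exactly whether the second branch should ever win: whether a system that *could* brake should be allowed to defer to a human who is causing the collision.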


Original Submission

Links

  1. "upstart" - https://soylentnews.org/~upstart/
  2. "Trolley problem" - https://en.wikipedia.org/wiki/Trolley_problem
  3. "The self-driving trolley problem: How will future AI systems make the most ethical choices for all of us?" - https://theconversation.com/the-self-driving-trolley-problem-how-will-future-ai-systems-make-the-most-ethical-choices-for-all-of-us-170961
  4. "does not yet produce" - https://techcrunch.com/2021/05/07/tesla-refutes-elon-musks-timeline-on-full-self-driving/#
  5. "car detects a potential collision" - https://www.forbes.com/sites/patricklin/2017/04/05/heres-how-tesla-solves-a-self-driving-crash-dilemma/?sh=1a3225616813
  6. "Original Submission" - https://soylentnews.org/submit.pl?op=viewsub&subid=52573

© Copyright 2024 - SoylentNews, All Rights Reserved

printed from SoylentNews, The Self-Driving Trolley Problem: How Will AI Systems Make the Most Ethical Choices for All of Us? on 2024-04-19 10:36:30