[NB: The following article makes reference to the oft-cited trolley problem. Highly recommended.--martyb/Bytram]
Imagine a future with self-driving cars that are fully autonomous. If everything works as intended, the morning commute will be an opportunity to prepare for the day's meetings, catch up on news, or sit back and relax.
But what if things go wrong? The car approaches a traffic light, but suddenly the brakes fail and the computer has to make a split-second decision. It can swerve into a nearby pole and kill the passenger, or keep going and kill the pedestrian ahead.
The computer controlling the car will only have access to limited information collected through car sensors, and will have to make a decision based on this. As dramatic as this may seem, we're only a few years away from potentially facing such dilemmas.
Autonomous cars will generally provide safer driving, but accidents will be inevitable—especially in the foreseeable future, when these cars will be sharing the roads with human drivers and other road users.
Tesla does not yet produce fully autonomous cars, although it plans to. In collision situations, Tesla cars don't automatically activate or deactivate the Automatic Emergency Braking (AEB) system if a human driver is in control.
In other words, the driver's actions are not disrupted—even if they themselves are causing the collision. Instead, if the car detects a potential collision, it sends alerts to the driver to take action.
In "autopilot" mode, however, the car should automatically brake for pedestrians. Some argue that if the car can prevent a collision, then it has a moral obligation to override the driver's actions in every scenario. But would we want an autonomous car to make this decision?
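The alert-versus-override distinction described above could be sketched roughly as follows. This is a hypothetical illustration, not Tesla's actual control logic; the names `VehicleState` and `aeb_response` are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class VehicleState:
    collision_imminent: bool  # sensors predict a collision
    autopilot_engaged: bool   # True if the car, not a human, is in control

def aeb_response(state: VehicleState) -> str:
    """Hypothetical sketch of the policy described in the article:
    brake automatically in autopilot mode, but only alert (never
    override) when a human driver is in control."""
    if not state.collision_imminent:
        return "no action"
    if state.autopilot_engaged:
        # In autopilot mode the car should brake for pedestrians itself.
        return "automatic braking"
    # With a human in control, the system warns but does not intervene.
    return "alert driver"
```

The argument that the car is morally obliged to intervene amounts to deleting the final branch, so that an imminent collision always triggers automatic braking regardless of who is in control.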
printed from SoylentNews, The Self-Driving Trolley Problem: How Will AI Systems Make the Most Ethical Choices for All of Us? on 2024-04-19 10:36:30