A police officer directing traffic in an intersection sees a self-driving car barreling toward him, its occupant looking down at a smartphone. The officer gestures for the car to stop, and the self-driving vehicle rolls to a halt behind the crosswalk. "This seems like a pretty plausible interaction. Human drivers are required to pull over when a police officer gestures for them to do so. It’s reasonable to expect that self-driving cars would do the same." But Will Oremus writes that while it's clear that police officers should have some power over the movements of self-driving cars, "what’s less clear is where to draw the line." Should an officer be able to "do the same if he suspects the passenger of a crime? And what if the passenger doesn’t want the car to stop—can she override the command, or does the police officer have ultimate control?"
According to a RAND Corp. report on the future of technology and law enforcement, “the dark side to all of the emerging access and interconnectivity is the risk to the public’s civil rights, privacy rights, and security.” It added, “One can readily imagine abuses that might occur if, for example, capabilities to control automated vehicles and the disclosure of detailed personal information about their occupants were not tightly controlled and secured.”
(Score: 2) by mr_mischief on Thursday August 27 2015, @07:29PM
The problem then becomes what happens when some non-police schmuck in a reflective vest starts directing cars into dark alleys, or into one another. A human driver generally does fairly well at telling the difference among a cop, a legitimate construction worker, a crossing guard, some drunk idiot, and a shady crook who is none of those things (although it's much harder when the shady crook overlaps with one of the others). The point of the article is how much direct control is given to the schmuck, and what the passengers' options are to override it.