
SoylentNews is people

posted by martyb on Monday January 04 2016, @07:25AM   Printer-friendly
from the how-long-could-they-last-in-Boston? dept.


A little while back, I saw the following tweet:

I can print mostly. My wifi works often. The Xbox usually recognises me. Siri sometimes works. But my self driving car will be *perfect*.

The tweet has since been deleted, so I won't name the author, but it's a thought-provoking idea. At first, I agreed with it. I'm a programmer and know full well just how shoddy 99.9% of the code we all write is. The idea that I would put my life in the hands of a coder like myself is a bit worrying.

[...] The reality is that self-driving cars don't need to be perfect. They just need to be better than the alternative: human-driven cars. And that is a much lower bar, as human beings are remarkably bad at driving.

[...] Self-driving cars don't get tired. They don't get drunk. They don't get distracted by friends or a crying baby. They don't look away from the road to send a text message. They don't speed, tailgate, brake too late, forget to show a blinker, drive too fast in bad weather, run red lights, race other cars at red lights, or miss exits. Self-driving cars aren't going to be perfect, but they will be a hell of a lot better than you and me.

Related: The High-Stakes Race to Rid the World of Human Drivers


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by quadrox on Monday January 04 2016, @08:35AM

    by quadrox (315) on Monday January 04 2016, @08:35AM (#284383)

    Self driving cars don't need to be perfect, but saying they just need to be better than humans doesn't quite cut it either.

    Self-driving cars need to fulfil one criterion, and one criterion only: there must never be an incident where it would be reasonable to say that a human driver could and would have avoided it. Once we get to that point, everything else is moot.

    It doesn't matter if self-driving cars are much, much better than humans most of the time but worse in a few corner cases. Even if they are less accident-prone on average and save human lives, the moment someone can reasonably claim that a human driver would have avoided some accident, that's going to be a huge setback.

  • (Score: 0) by Anonymous Coward on Monday January 04 2016, @08:46AM

    by Anonymous Coward on Monday January 04 2016, @08:46AM (#284386)

    One thing automated cars have trouble with is looking ahead.

    The other week, I got confused by extraneous lines on the road. Instead of veering for the exit (as the Tesla has actually been seen to do), I held my line because I knew that the road continued ahead. (The week prior to that, I almost got into trouble for not following a random line: the road under construction did veer to the side.)

    That is one reason you are advised to take the same route every day too: you learn what the route looks like. I am too paranoid for that, but it means I often have to drive slower.

    • (Score: 1, Touché) by Anonymous Coward on Monday January 04 2016, @12:56PM

      by Anonymous Coward on Monday January 04 2016, @12:56PM (#284465)

      That is one reason you are advised to take the same route every day too: you learn what the route looks like.

      That's a double-edged sword. There have been several accidents where the driver later explained "but the train doesn't usually pass at that time".

      Changes like replaced road signs, especially, go unnoticed by people driving the same route every day.

  • (Score: 3, Insightful) by TheLink on Monday January 04 2016, @08:58AM

    by TheLink (332) on Monday January 04 2016, @08:58AM (#284390) Journal
    They need to be much better. Not just a bit better.

    If I'm driving and I screw up, I'm liable. If the car drives itself and it screws up, the car manufacturer is liable. And there may be two parties with claims against the manufacturer: the owner of the self-driving car, or his insurer, could sue too.

    And there'd probably be people trying to find flaws in your AIs that are legally exploitable - e.g. they drive or walk or decorate/paint their cars/walls in a certain legal way and that causes your self-driving car to make mistakes. Then people sue you.

    So if you were a car manufacturer making self-driving cars, how much better would you want your cars to be before you would take responsibility for their mistakes? Just a bit better than the average human driver? I doubt it.
    • (Score: 4, Funny) by theluggage on Monday January 04 2016, @11:14AM

      by theluggage (1797) on Monday January 04 2016, @11:14AM (#284429)

      e.g. they drive or walk or decorate/paint their cars/walls in a certain legal way and that causes your self-driving car to make mistakes. Then people sue you.

      The prosecuting attorney in the long-running Acme vs. Tesla trial died today during a jury visit to Acme's headquarters. He was pointing out the damage caused to the life-size mural of a road tunnel entrance decorating the building's front wall when a heavy vehicle suddenly emerged from the painting and failed to stop. Attempts to re-inflate the attorney with a tyre pump failed. Police are looking for a large flat-bed truck carrying anvils, weights marked '1000 tons' and grand pianos, possibly driven by a rabbit or a duck.

    • (Score: 2) by VLM on Monday January 04 2016, @12:54PM

      by VLM (445) on Monday January 04 2016, @12:54PM (#284463)

      And there'd probably be people ... legally exploitable ... Then people sue you.

      The closest analogy to self driving cars is general aviation. Maybe even closer would be general aviation autopilots.

      The way it works is some doctor with a severe case of get-home-itis tries to fly through a thunderstorm, and naturally dies. Then the family sues everyone with deep pockets even tangentially related to the crash. Do you mow the lawn at the destination airport he never reached? Got money? You're going to be sued. Then everyone pays an out-of-court settlement and raises their rates. That's why a metal structure that "should cost" $50K ends up costing $1M, or a crate engine that "should cost" $5K if it were a truck or generator engine ends up selling for $25K.

      The problem with self-driving cars is that the liability insurance is going to maybe quintuple the cost of the car.

      We don't have a functioning legal system, in that very few people understand it and it's too expensive for most of us to participate in anyway. Nonetheless, we do have a legal system, and I assure you, anyone can sue anyone for any reason, then essentially blackmail them into an out-of-court settlement for less than the cost of a typical legal defense, assuming they're not judgment-proof. That, basically, is our legal system.

      A self driving car is legally a non-starter.

      • (Score: 1) by legont on Monday January 04 2016, @06:19PM

        by legont (4179) on Monday January 04 2016, @06:19PM (#284633)

        Actually, commercial aeroplanes have been self-flying for a while already. Depending on airline policy, the pilot may or may not take control just before landing; the rest is always autopilot. Russians usually fly their planes by hand, Americans and Europeans sometimes, Asians almost never. Regardless, full legal responsibility rests with the pilots.

        This is the most likely way cars will go. Drivers would still go to prison for sitting drunk inside fully automated cars.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
    • (Score: 0) by Anonymous Coward on Monday January 04 2016, @08:04PM

      by Anonymous Coward on Monday January 04 2016, @08:04PM (#284688)

      I think being safer than the 90th percentile of human drivers would be adequate. That would make a huge reduction in traffic injuries and fatalities. I'm pretty sure that self driving cars can achieve that level of safety while increasing throughput, and reducing the variance in throughput, at the same time.

      • (Score: 0) by Anonymous Coward on Tuesday January 05 2016, @01:16PM

        by Anonymous Coward on Tuesday January 05 2016, @01:16PM (#285098)

        If you were the car manufacturer would that really be safe enough for you?

        If elevators were merely safer than 90% of all human stair climbers I think elevator manufacturers would be sued and shut down.
        http://www.livescience.com/17504-fatal-nyc-accident-elevators-safer-stairs.html [livescience.com]

        But maybe car manufacturers should study how escalator manufacturers do it...

  • (Score: 5, Insightful) by khchung on Monday January 04 2016, @09:09AM

    by khchung (457) on Monday January 04 2016, @09:09AM (#284394)

    Self-driving cars need to fulfil one criterion, and one criterion only: there must never be an incident where it would be reasonable to say that a human driver could and would have avoided it. Once we get to that point, everything else is moot.

    Name one other tool that replaced a human and has this property.

    By this criterion, we would still be living in the pre-industrial world. Almost every industrial accident involving a machine would not have happened if the machine had been a human wielding a tool. (Though a different set of accidents might have happened instead, that was not the criterion above.)

    We don't need autonomous cars to be better than humans in *every way*, we just need them to be significantly better *on average*. E.g., if passengers in autonomous cars have injury rates of only 10% of those in human-driven cars (i.e. a 90% reduction), it would be stupid not to use one, even though those 10% might have been better off with a human driver at the time of the accident.
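    In numbers, the trade-off looks like this (a toy sketch; the trip count and injury rates below are entirely hypothetical, chosen only to match the 90%-reduction example above):

```python
# Toy expected-injury comparison (all rates are hypothetical).
def expected_injuries(trips: int, injury_rate: float) -> float:
    """Expected number of injuries over a given number of trips."""
    return trips * injury_rate

TRIPS = 1_000_000
HUMAN_RATE = 1e-4               # hypothetical: 1 injury per 10,000 trips
AUTO_RATE = HUMAN_RATE * 0.10   # autonomous cars at 10% of the human rate

print(expected_injuries(TRIPS, HUMAN_RATE))  # ~100 injuries
print(expected_injuries(TRIPS, AUTO_RATE))   # ~10 injuries
```

    Even if some of the remaining injuries would have been avoided by a human driver, the argument rests on the aggregate count being an order of magnitude lower.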

    This is the same non-reason given for avoiding vaccines: you cannot prove that every individual getting a vaccine will be better off, only that *on the whole* the population will be.

    • (Score: 3, Insightful) by Jiro on Monday January 04 2016, @10:03AM

      by Jiro (3176) on Monday January 04 2016, @10:03AM (#284408)

      Asking to name another tool with this property is meaningless, because most tools don't replace the human's ability to make decisions. The closest you get is a tool that changes the job in many ways besides replacing the human's decision-making, with the loss of human decision-making incidental to the change in the job. And occasionally you get a tool that augments the human's senses but still requires a human to interpret the new information.

      • (Score: 3, Insightful) by khchung on Monday January 04 2016, @03:11PM

        by khchung (457) on Monday January 04 2016, @03:11PM (#284530)

        most tools don't replace the human's ability to make decisions.

        There are plenty of tools that replace your ability to make decisions; you are just so used to them that you don't notice.

        Both the escalator and elevator removed your ability to stop moving in the middle of climbing a flight of stairs. Both will cause accidents that would not have happened if you walked up the stairs instead.

        Meat grinders (or any automated cutting tool) replaced your ability to decide to stop/alter your cut, and there had been plenty of accidents involving lost fingers with those tools.

        The printing press removed a scribe's ability to decide what to write mid-page, and I am sure there have been accidents with the printing press that would not have happened if the pages were hand-copied.

        Email's reply-all/send-to-distribution-list removed your ability to make an individual decision about each recipient, and there have been plenty of fiascos due to email mistakenly sent to unintended recipients, which would have been avoided if you had to hand-write each address on an envelope.

        Yet humanity accepted these risks for progress, and the same will happen with autonomous cars. A big red "Emergency STOP" button on the dashboard gives you as much control as your elevator does.

        • (Score: 0) by Anonymous Coward on Monday January 04 2016, @05:47PM

          by Anonymous Coward on Monday January 04 2016, @05:47PM (#284615)

          Wow, what a lovely set of examples.

          Escalator goes up, escalator goes down. Escalator goes up, escalator goes down.

          Meat grinder goes on, meat grinder goes off, meat grinder goes on, meat grinder goes off. Let's take a ride in the meat grinder.

          A modern printing press is much more complex, but per your example the human buying or using the printing press already made the decision they want all pages to be identical well in advance.

          E-mail distribution lists? But you don't want the message to go to a few people on that list? Then you are using the tool wrong. No decisions there other than the choice to avoid learning how to use it.

          So the most complex thing there is the printing press. A modern one has sensors to monitor ink levels, rotation speeds, paper inputs, paper outputs, and piles of sensors to make sure computer-controlled parts move where they should, etc. But these are still very narrow and well-defined use cases. If a rat climbs into the system and breaks something, you accept that you have already made the decision that everything should stop so you can call a repairman.

          But out on a road all kinds of wild unexpected shit can happen.

          How would a computer handle this? Oh, the car in front is slowing down a bit (detectable). I wonder why (it won't). Oh, they just ran over something that was on the road (probably not detectable unless the cars are talking to each other and the one in front sends out an alert). Oh, shit! There is a ladder in the middle of the highway! (How fast is your object recognition? Do you even have object recognition?) Decision time! Slam on the brakes and hit it anyway, coming to a complete stop in the middle of a busy highway; keep going exactly straight and run over the ladder, probably damaging a tire and possibly spinning out of control, killing people; swerve left or right and smash into another car; or (clever idea) move over just enough so the ladder goes directly under the car and hopefully only damages the undercarriage.

          That is the kind of decision a self-driving car will have to make. Yes, we may have to accept it will default to a more blunt solution (probably coming to a complete stop), but it will be a much more important decision because human lives and large amounts of repair money are on the line.
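          The choice described above is essentially expected-harm minimization over a handful of candidate maneuvers. A minimal sketch, with entirely made-up probabilities and costs (nothing here reflects any real vehicle's logic):

```python
# Toy sketch: pick the maneuver with the lowest expected harm.
# All actions, probabilities, and dollar costs are hypothetical.
ACTIONS = {
    # action: (probability of a bad outcome, cost if it happens)
    "brake_hard":      (0.30, 20_000),  # rear-ended while stopping
    "run_it_over":     (0.50, 5_000),   # blown tire, possible spin
    "swerve":          (0.40, 50_000),  # collision with adjacent car
    "straddle_debris": (0.25, 8_000),   # undercarriage damage only
}

def best_action(actions: dict[str, tuple[float, float]]) -> str:
    """Return the action minimizing expected cost = probability * cost."""
    return min(actions, key=lambda a: actions[a][0] * actions[a][1])

print(best_action(ACTIONS))  # "straddle_debris" with these made-up numbers
```

          With these numbers the "clever idea" of straddling the debris wins (expected cost 0.25 x 8,000 = 2,000), but the real difficulty is estimating those probabilities in the fraction of a second available, not the arithmetic.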

          Or we will have to re-think what a "car" is and what we should expect from it. After all, a modern printing press is not a robot arm moving around a feather quill.

          • (Score: 3, Interesting) by vux984 on Monday January 04 2016, @09:31PM

            by vux984 (5045) on Monday January 04 2016, @09:31PM (#284748)

            How would a computer handle this

            How would a human handle it?

            They can't see past the car in front of them, so their first glimpse of the ladder will be after the other car has run over it and sent it skittering at them. So what does a human do?

            Slam on the brakes? Just ram it full speed? Swerve into other lanes and hope they are empty?

            Move over just enough so the ladder goes directly under the car? Because humans are so practiced at lining up to run over moving debris at high speed? Seriously: a truck in front of me once sent a rock the size of a melon bouncing along the road at me at highway speed. I slammed on the brakes, but a collision was inevitable; I tried to line up so that it would at least go under the car, but I still hit it with one of the wheels and blew out a tire anyway. Suffice it to say, succeeding at that would have been more luck than brains, for man or machine.

            Really, the best a human is going to do is slam on the brakes and try to minimize the speed of impact with the object, and if there is time to check mirrors etc., perhaps try to change lanes around it (while still braking in case hitting it is inevitable). The car really isn't going to do any worse than a human here, and it may be better equipped to deal with the blowout/skidding after impact if it comes to that.

            If the automated car comes to a safe stop great. That's no worse than a human would do.

            The real challenge is for the automated car BEHIND the big rig that just hit the ladder, blew out, and spun to a stop. Because what is it going to do AFTER it comes to its safe stop? The rig in front of it is sideways, blocking both lanes, and it's going to be there for an hour or three until a tow truck arrives.

            Or do it and the other automated cars just sit there for hours until the tow truck comes to remove the obstacle in front of them?

            Meanwhile, the human drivers all drove onto the shoulder or into the ditch to get around it. Or maybe the police closed one of the oncoming lanes and are directing traffic through some service crossings to detour around the wreck. Of course, your automated car needs to reverse half a mile (on a freeway) to get to the service entrance, which it would normally be illegal even to use, where it would cross into an oncoming traffic lane... on a freeway. (Sure, it's closed, and the police are directing traffic, and it's a proper detour route... but can the automated car figure that out?)

    • (Score: 3, Insightful) by quadrox on Monday January 04 2016, @10:19AM

      by quadrox (315) on Monday January 04 2016, @10:19AM (#284413)

      As another poster pointed out, this requirement is only relevant for tools/machines that are fully automated. Humans like to be in control, as long as there is a human involved in the process people think everything is fine. But if you take away the last bit of human control, people will worry about safety, a lot more than is perhaps reasonable. Therefore the requirement for complete automation of anything is that it never performs worse than a human would. If you can show that, then people will be ok with it.

    • (Score: 0) by Anonymous Coward on Monday January 04 2016, @01:04PM

      by Anonymous Coward on Monday January 04 2016, @01:04PM (#284471)

      As others have mentioned, the machines you are talking about still have humans pressing the buttons (even if only the big red one). Even auto-pilots have real pilots on watch at all times. And unlike the "driver still needs to be able to take over" idea for self driving cars, they have A) several seconds longer to do so, due to traffic separation, and B) two of them, so that they won't both be reading the newspaper at the same time.

      The ones that don't have people pressing the buttons tend to have big fences with locking gates that ensure that the whole thing will shut down if anyone tries to enter the working area.

      That's not going to work with self-driving cars.

  • (Score: 3, Interesting) by wonkey_monkey on Monday January 04 2016, @04:34PM

    by wonkey_monkey (279) on Monday January 04 2016, @04:34PM (#284570) Homepage

    Self-driving cars need to fulfil one criterion, and one criterion only: there must never be an incident where it would be reasonable to say that a human driver could and would have avoided it. Once we get to that point, everything else is moot.

    I'd take "one person killed by an auto-car doing something a human could have avoided" over "100 people killed by humans doing something stupid."

    --
    systemd is Roko's Basilisk