posted by cmn32480 on Monday March 27 2017, @01:51PM
from the it-is-everybody-else-you-have-to-watch-out-for dept.

More bad news for Uber: one of the ride-hailing giant's self-driving Volvo SUVs has been involved in a crash in Arizona — apparently leaving the vehicle flipped onto its side, and with damage to at least two other human-driven cars in the vicinity.

The aftermath of the accident is pictured in photos and a video posted to Twitter by a user of @FrescoNews, a service for selling content to news outlets. According to the company's tweets, the collision happened in Tempe, Arizona, and no injuries have yet been reported.

Uber has also confirmed the accident and the veracity of the photos to Bloomberg. We've reached out to the company with questions and will update this story with any response. Update: Uber has now provided us with the following statement: "We are continuing to look into this incident and can confirm we had no backseat passengers in the vehicle."

TechCrunch understands Uber's self-driving fleet in Arizona has been grounded following the incident while an investigation is undertaken. The company has confirmed the vehicle involved was in self-driving mode. We're told no one was seriously injured.

Local newspaper reports suggest another car failed to yield to Uber's SUV, hitting it and resulting in the autonomous vehicle flipping onto its side. Presumably the Uber driver was unable to take over the controls in time to prevent the accident.

Source: TechCrunch


Original Submission

 
  • (Score: 5, Insightful) by Justin Case on Monday March 27 2017, @02:17PM (50 children)

    by Justin Case (4239) on Monday March 27 2017, @02:17PM (#484614) Journal

    If this were a common collision between two human drivers, we would conclude it was the other driver's fault for failing to yield.

    But we may never know if a human driver in full control of the Uber would have been able to observe "that guy is an asshole who isn't paying attention" and practice defensive driving to avoid getting hit.

    The Uber, you see, expected other drivers would play by the rules, and all you need is one cheater to ruin it for everyone.

    Making snap assessments of the personality and competence of other drivers is part of road survival. True, humans don't always get it right. But you can't convince me that computers understand driving at that level. A self driving car is still just an algorithm: a GPS in control of a steering wheel. What could possibly go wrong?

    • (Score: 3, Informative) by sjames on Monday March 27 2017, @02:38PM (6 children)

      by sjames (2882) on Monday March 27 2017, @02:38PM (#484620) Journal

      The Uber car had a human driver on board. He wasn't able to avoid the collision.

      • (Score: 4, Insightful) by Justin Case on Monday March 27 2017, @02:55PM (5 children)

        by Justin Case (4239) on Monday March 27 2017, @02:55PM (#484626) Journal

        He wasn't driving.

        • (Score: 2) by weeds on Monday March 27 2017, @03:10PM (1 child)

          by weeds (611) on Monday March 27 2017, @03:10PM (#484636) Journal

          Indeed, but it's not like he was asleep in the back seat. It seems he was behind the wheel and could not react in time. Not having your hands on the wheel does add reaction time, and maybe you aren't as engaged in the driving as you would be if you were actually driving. I have to agree from my experience (not a statistic, just my experience) that it is possible to determine that some other driver is a dingbat and likely to cause an accident.
          As I drove down main street in very icy conditions, each time I came up to a light I carefully applied the brakes with a gentle pumping action in plenty of time to stop (yeah, this was before those fancy antilock systems), and I watched the hoser next to me slam on the brakes and slide halfway into the intersection -- three times. I lost track of him, and as I came to a stop at the next light I was slammed from behind. It destroyed the car, sent glass flying everywhere, and pushed me through the intersection. Guess who it was?
          I can imagine it would take a lot of computer power to assess the actions of all of the nearby cars and determine who was not a really good driver. I didn't give that guy a high enough priority myself. If all the cars were self-driving, we could agree upon a set of rules and then only have accidents when failures happened.

          • (Score: 2) by sjames on Monday March 27 2017, @07:52PM

            by sjames (2882) on Monday March 27 2017, @07:52PM (#484826) Journal

            Yeah, there are times when you can recognize a bad driver and hopefully avoid them, but in other cases (like this one), where someone simply fails to yield, you don't get a chance to size up their driving before the accident.

        • (Score: 5, Insightful) by AthanasiusKircher on Monday March 27 2017, @03:14PM (2 children)

          by AthanasiusKircher (5291) on Monday March 27 2017, @03:14PM (#484641) Journal

          Exactly.

          I feel like one of the biggest misconceptions about self-driving cars is that it's reasonable for them to be "partially autonomous," with the expectation that some sort of alarm could ring in a bad situation and the human driver could take over to handle things. To my mind, that's why people seem disturbed when propositions for autonomous cars without steering wheels, etc. are mentioned... everyone thinks, "Well, **I** could take over and avoid an accident."

          But it's just not reasonable to expect humans to take over in a split-second and avoid an accident. Google's drivers (from what I understand) have had lots of practice and basically know the kinds of situations where it's useful to disengage the self-driving mechanism in advance (e.g., unusual traffic patterns with ad hoc changes like novel construction zones, police directing traffic, bad weather, etc.). If you look at Google's reports, you'll see that the supposed "self-driving" cars actually spend roughly 1/3 of their mileage in manual mode with a human driver.

          This may or may not have anything to do with this specific situation, but the reality is that expecting a human driver -- even one tasked with paying attention -- to take over controls suddenly and intervene to prevent a split-second collision is just not feasible in many situations.

          • (Score: 2) by Spamalope on Monday March 27 2017, @03:40PM

            by Spamalope (5233) on Monday March 27 2017, @03:40PM (#484653) Homepage

            "partially autonomous" is reasonable, but a bit differently. Fully able to drive on a well lit freeway during the day, only capable of providing driver aids (lane change warnings, assisted braking, collision warnings etc) raining at night on a country lane, with full driving areas increasing each generation is reasonable if the tech becomes safe for some conditions first.

            Automated parking would be spectacular too. Robo Valet is perfect! It'd be wonderful especially in the rain.

          • (Score: 4, Interesting) by bradley13 on Monday March 27 2017, @05:17PM

            by bradley13 (3053) on Monday March 27 2017, @05:17PM (#484714) Homepage Journal

            I'd just like to add that driver attention span is a problem. If you are actually driving, you are involved in the process, and paying attention is easy. If you are not driving, but just observing, it is psychologically much more difficult to keep your attention focused on the (entirely potential) task. This is well-known, for example, from airline pilots and autopilot - aircraft designers know not to expect split-second reactions from pilots.

            --
            Everyone is somebody else's weirdo.
    • (Score: 0) by Anonymous Coward on Monday March 27 2017, @03:02PM (1 child)

      by Anonymous Coward on Monday March 27 2017, @03:02PM (#484630)

      If I were a Master of the Universe, a two-pronged strategy would be best to take down Uber. First there is the sexual harassment stuff, which works wonders. Now that Uber's been softened up by that, which is guaranteed to mute the response of misogynerds who might point out the flaws with the next part, here's the second prong. Hire desperate people to cause accidents with Uber self-driving vehicles.

      Why would a misogynerd defend Uber? Is he also a sexually harassing rape monster? No? Well, he'd better not point out the obvious flaw lest we conclude that he's pro-rape and is defending known sexual harassers.

      If you're a Master of the Universe, you can just grab women by the pussy. You misogynerds, though! Ha. Why would a woman ever want to be with one of you dorks? She'd rather I were grabbing her pussy.

      I might be a leader at Ford or Google or any of the other major companies working on this technology. I might even be a taxi company kingpin. Human lives mean nothing to me. Getting to market with self-driving cars first so that I can lock customers into my company is what I care about.

      • (Score: 3, Informative) by DannyB on Monday March 27 2017, @04:29PM

        by DannyB (5839) Subscriber Badge on Monday March 27 2017, @04:29PM (#484677) Journal

        There is possibly a third prong to your two pronged tragedy to take down Uber. Google's lawsuit over stolen self crashing car tech.

        --
        When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 3, Insightful) by Rivenaleem on Monday March 27 2017, @03:10PM (23 children)

      by Rivenaleem (3400) on Monday March 27 2017, @03:10PM (#484635)

      The other car was piloted by a 100% human driver who was unable to avoid the collision -- in fact, he may have been the cause of the collision in the first place! Had the other car been computer-controlled, would the accident have occurred?

      You are completely correct, all you need is one cheater to ruin it for everyone. The obvious solution is to remove the cheaters from the equation. The sooner 100% of cars are computer controlled, the sooner we see a drop in collisions caused by human error.

      The only reason we need defensive driving is that there are people out there who shouldn't be allowed to drive. Can you imagine a world where every vehicle is autonomous? Do you fully comprehend just how fast they will be capable of driving? Speed limits will be a thing of the past, limited only by A) the quality of the vehicle and road surface, and B) the desire of the occupant to have a comfortable ride.

      I'd much sooner trust a validated algorithm in control of a steering wheel than 90% of human road users.

      • (Score: 3, Insightful) by AthanasiusKircher on Monday March 27 2017, @03:23PM (1 child)

        by AthanasiusKircher (5291) on Monday March 27 2017, @03:23PM (#484644) Journal

        I'd much sooner trust a validated algorithm in control of a steering wheel than 90% of human road users.

        Yes, but the problem is the accuracy of the algorithm. Computers can do really well within expected parameters, which is kind of the point here. But even without the hazards of other drivers, roads can still present all sorts of other hazards that aren't "well-behaved" -- pedestrians, cyclists, children, animals, random debris, weather-related hazards, unexpected shifts due to construction zones, disabled vehicles, failure in traffic signals...

        Etc., etc., etc., etc., etc., etc. (Yes, it really needs THAT many et ceteras.)

        That's my question about AIs for autonomous vehicles. I have no doubt that computers already can do better than the average human on a stretch of clear highway, because computers don't get bored and stop paying attention during "boring driving conditions." I also think AI cars would likely significantly improve conditions in heavy traffic by behaving more rationally and consistently. The issue, however, is when things get "interesting," in that 1% of time on the road.

        Having a computer that doesn't get bored or drunk or even nod off or whatever will undoubtedly prevent many accidents. But those ideal driving conditions aren't the situations where human judgment is generally useful.

        • (Score: 2) by darkfeline on Tuesday March 28 2017, @03:17AM

          by darkfeline (1030) on Tuesday March 28 2017, @03:17AM (#485015) Homepage

          >Computers can do really well within expected parameters

          Humans do really well within expected parameters. Outside of those parameters, not so much. I'm confident that:

          1. There exist less than ideal situations where a computer drives better than a human.
          2. There exist less than ideal situations where a human drives better than a computer.
          3. Cases of type 1 vastly outnumber cases of type 2.

          Sure, in some contrived situation that may only have happened due to a sequence of errors on the driver's part to begin with, a human driver could avoid a collision that a computer could not, but if that human then goes on to get into one collision per year on average (or whatever the current rate is), I'd rather have the computer crash the car just this one time, thanks.

          --
          Join the SDF Public Access UNIX System today!
      • (Score: 5, Insightful) by Justin Case on Monday March 27 2017, @03:26PM (19 children)

        by Justin Case (4239) on Monday March 27 2017, @03:26PM (#484648) Journal

        The obvious impossible solution is to remove the cheaters from the equation.

        Yeah and if everyone on the Internet would just behave, it would be a safe place too.

        Even if we had a world of 100% self-driving cars (we won't), if self-driving cars don't crash, it will be the first software ever that doesn't.

        Nobody knows how to write reliable software. It is an unsolved problem. Hell it is hard enough verifying whether the software you are getting from someone else is trustworthy: do you trust the writer, did they really write it, has anyone tampered with it since?

        I'm not happy to trust careless market-share and deadline-driven software development teams with my life. Yes you may say I am forced to. I'm still not happy about it.

        • (Score: 2) by DannyB on Monday March 27 2017, @04:12PM (6 children)

          by DannyB (5839) Subscriber Badge on Monday March 27 2017, @04:12PM (#484668) Journal

          I think we will have 100% self driving cars eventually. There was probably a time when people said we would never have 100% of these new fangled auto mobile thingies. After all, autos are unreliable. Difficult to start. You can even break your arm if it backfires while you are cranking. They are noisy. Smelly. And worst of all, they frighten the horses. Nope, automobiles are not the future.

          As for reliable software, I think self-driving software might use practices more like avionics software. People routinely trust their lives to that. How many times have you been on a plane where software did the landing rather than the pilot? Pilots are even losing skill due to automation.

          Back in the '90s I read an article about the development of the space shuttle flight software. It was a team of ordinary middle-aged folks. No young rock stars. Everything was carefully planned, right down to which lines of code you were going to write when you actually got to the part of writing the code. Their process was quite different from deadline-driven, lowest-possible-cost, just-ship-it software development. They made only one product, the shuttle flight software. It had only 17 bugs in the entire life of the program (at the time the article was written). When you calculated the cost per line of code, it worked out to $35,000 per line. I don't think most web development shops can afford that. But when your goal is life-and-death reliability, your process is different. The software only has to be written once. Not once per vehicle.

          --
          When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
          • (Score: 2) by bob_super on Monday March 27 2017, @04:37PM (2 children)

            by bob_super (1357) on Monday March 27 2017, @04:37PM (#484686)

            But... Gubmint Bad Inefficient Wasteful!
            Let them crash and the market will short the survivors!

            It actually cracks me up (sadly) that Uber, and many similar Silicon Valley shops, treat regulation with such contempt in the rush for profit that they become a poster child for the need for regulation. Evil Anti-regulation Libruls!

            • (Score: 0) by Anonymous Coward on Monday March 27 2017, @05:20PM

              by Anonymous Coward on Monday March 27 2017, @05:20PM (#484718)

              Heh yup! But they're so smart, no need to regulate people who totally know what they're doing ;)

              This coming from a west coast liberal.

            • (Score: 2) by bootsy on Monday March 27 2017, @05:36PM

              by bootsy (3440) on Monday March 27 2017, @05:36PM (#484728)

              A lot of these tech firms are really just finding ways to avoid existing laws and regulation. Uber pretends not to be a taxi service and also avoids employment laws.
              AirBnB avoids rental laws and hotel laws.
              Amazon in the US did well because you could order out of state and avoid sales tax. Other businesses use this as well, but Amazon has done particularly well out of it.

          • (Score: 2) by AthanasiusKircher on Monday March 27 2017, @04:43PM (2 children)

            by AthanasiusKircher (5291) on Monday March 27 2017, @04:43PM (#484687) Journal

            I think we will have 100% self driving cars eventually. There was probably a time when people said we would never have 100% of these new fangled auto mobile thingies.

            Except we DON'T have 100% of those "new fangled automobile thingies." It's still legal to walk, ride bicycles, scooters, etc. on most public roads (generally excepting only major highways). It's frequently still legal to ride horses or use horse-pulled buggies, carriages, etc. on many public roads, particularly in rural areas.

            The OP in this thread was suggesting something that sounded more like actively banning human-driven vehicles from public roads. Maybe there will be significant attrition of human-driven vehicles over time, but it's not going to happen overnight. I expect it will be at least a couple of decades during which autonomous cars will have to share space with human-driven vehicles, and in that long transition, AI cars will need to deal with bad human drivers as a matter of course.

            The alternative would be to create AI-only roads, like we have highways that prohibit pedestrians, bicycles, horses, etc. That would be expensive. But those highways were generally built as new alternatives for faster transport, not created by banning horses on existing roads.

            So your historical analogy seems to fail a bit -- this is a different situation if we really want to segregate AI cars on public roads.

            • (Score: 2) by AthanasiusKircher on Monday March 27 2017, @05:04PM (1 child)

              by AthanasiusKircher (5291) on Monday March 27 2017, @05:04PM (#484709) Journal

              By the way, one thing that might help in this transition would be more clearly marked AI vs. human-driven vehicles... somewhat akin to the rules for horses, e.g., buggies generally need to have markings indicating a slow-moving vehicle, or my personal favorite -- this New Jersey statute [uvm.edu]:

              No person shall drive a horse attached to a sleigh or sled on a highway unless there are a sufficient number of bells attached to the horse's harness to give warning of its approach.

              Which raises the question: How many bells are "sufficient"?

              Anyhow, I expect within a decade that human-driven vehicles will be distinguished in some manner from AI vehicles, so the AI vehicles know to "LOOK OUT!" when around them.

              • (Score: 1, Touché) by Anonymous Coward on Tuesday March 28 2017, @01:37AM

                by Anonymous Coward on Tuesday March 28 2017, @01:37AM (#484989)

                Warning: insufficient bells detected. Needs more cowbell.

        • (Score: 3, Funny) by DannyB on Monday March 27 2017, @04:19PM

          by DannyB (5839) Subscriber Badge on Monday March 27 2017, @04:19PM (#484672) Journal

          Oh, one more thing. To have a safe internet, get rid of human participants. 100% chat bots will make the internet safe.

          --
          When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
        • (Score: 0) by Anonymous Coward on Monday March 27 2017, @04:23PM

          by Anonymous Coward on Monday March 27 2017, @04:23PM (#484674)

          Just wait till a company offers you 5 bucks to turn off a needed safety feature, so your car can give them a nice benefit and leave you with a more easily hacked car.

        • (Score: 5, Informative) by Arik on Monday March 27 2017, @04:31PM (9 children)

          by Arik (4543) on Monday March 27 2017, @04:31PM (#484679) Journal
          "Nobody knows how to write reliable software. It is an unsolved problem."

          Eh, the basics are definitely known; the problem is that the market rarely appreciates or rewards that sort of development, so it gets less funding, less press, less exposure, etc. The reason we've become accustomed to the idea that software is buggy is that the market is oriented towards producing something marketable quickly, and then patching bugs as necessary later. This seems to be mostly for social, rather than technical or even properly economic, factors, but that's another subject really.

          The fact is that we *do* know how to make highly reliable and secure software: start by defining specifications, functions and structures very, very carefully; get as much technical criticism as you can at every point, especially at the design stage; lock marketing out of the room; use something like Ada rather than whatever is currently fashionable; and test the fsck out of it at every stage. It's not impossible, it's just slow and expensive, and in danger of becoming a lost art.
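
          As a minimal sketch of that recipe's flavor -- plain Python assertions standing in for real design-by-contract tooling (actual safety-critical work would use something like Ada/SPARK, formal review, and far heavier testing), with every name and limit here invented:

            # Hypothetical spec-first style: state the contract, then check it
            # on every call. An illustration only, not production code.

            def clamp_steering_angle(requested_deg: float, max_deg: float = 35.0) -> float:
                """Spec: the result is always within [-max_deg, +max_deg], and
                equals the request whenever the request is already in range."""
                # Preconditions: reject nonsense inputs loudly.
                assert max_deg > 0, "max_deg must be positive"
                assert requested_deg == requested_deg, "NaN request rejected"  # NaN != NaN

                result = max(-max_deg, min(max_deg, requested_deg))

                # Postconditions: verify the spec before trusting the result.
                assert -max_deg <= result <= max_deg
                assert result == requested_deg or abs(requested_deg) > max_deg
                return result

            # "Test the fsck out of it": exercise the edges as well as the middle.
            for req in (-1000.0, -35.0, -1.5, 0.0, 1.5, 35.0, 1000.0):
                clamp_steering_angle(req)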

          There are far deeper issues with the autonomous car idea than simply writing reliable software however. An autonomous car needs to be able to do some things that would appear to be impossible in this universe, no matter how reliable their software could be made. But, again, another subject really.

          --
          If laughter is the best medicine, who are the best doctors?
          • (Score: 2) by AthanasiusKircher on Monday March 27 2017, @04:53PM (7 children)

            by AthanasiusKircher (5291) on Monday March 27 2017, @04:53PM (#484701) Journal

            The reason that we've become accustomed to the idea that software is buggy is because the market is oriented towards producing something marketable quickly, and then patching bugs as necessary later. This seems to be mostly for social, rather than technical or even properly economic, factors, but that's another subject really.

            I'm not really sure what you mean by "social factors" as opposed to "properly economic factors," but my impression is that a lot of these decisions are driven by profit calculations. The longer you wait to release software, the longer it takes for it to start making money. If you have competitors (or potential competitors), that's a longer time for them to make money and cement market share (or perhaps get a competing product or new features out).

            The balancing act always seems to be -- how soon can we release our product to start making money off of it without it being SO buggy as to annoy customers? If you have a huge market share, you seem to stop caring about annoying customers as much and just go for maximum profit (see Microsoft).

            So, it seems it's mostly about economics, no?

            • (Score: 3, Informative) by Scruffy Beard 2 on Monday March 27 2017, @05:53PM (5 children)

              by Scruffy Beard 2 (6030) on Monday March 27 2017, @05:53PM (#484741)

              I suspect that software is now complex enough that carefully designed and validated software will get to market faster than software developed ad hoc.

              The reason is abstraction leakage. Nobody knows modern systems top-to-bottom. Coding to an interface is hard enough, but when you run into undefined behaviour because you did something the original programmer did not expect, suddenly you are chasing rabbits.

              • (Score: 3, Insightful) by Justin Case on Monday March 27 2017, @06:47PM (4 children)

                by Justin Case (4239) on Monday March 27 2017, @06:47PM (#484773) Journal

                Nobody knows modern systems top-to-bottom.

                Another reason a "modern system" cannot be reliable. You can't even predict what it is going to do under all permutations of inputs.

                • (Score: 2) by Arik on Monday March 27 2017, @07:12PM (3 children)

                  by Arik (4543) on Monday March 27 2017, @07:12PM (#484794) Journal
                  "Another reason a "modern system" cannot be reliable."

                  So just to be clear, are you saying z/OS isn't "modern" or isn't "reliable?"
                  --
                  If laughter is the best medicine, who are the best doctors?
                  • (Score: 2) by Justin Case on Monday March 27 2017, @07:30PM

                    by Justin Case (4239) on Monday March 27 2017, @07:30PM (#484815) Journal

                    From Wikipedia [wikipedia.org]:

                    IBM releases ... corrections ... (a.k.a. PTFs) for z/OS.... IBM labels critical PTFs as "HIPER" (High Impact PERvasive). IBM also "rolls up" multiple patches into a Recommended Service Update (RSU).

                    So yeah, maybe like Microsoft, today's version is perfect, but all those previous versions apparently were not reliable.

                  • (Score: 2) by Justin Case on Tuesday March 28 2017, @12:08AM (1 child)

                    by Justin Case (4239) on Tuesday March 28 2017, @12:08AM (#484942) Journal

                    Really Arik? You made me a foe because I looked up your precious z/OS in Wikipedia, and discovered it wasn't perfect after all?

                    • (Score: 2, Funny) by Arik on Tuesday March 28 2017, @12:42AM

                      by Arik (4543) on Tuesday March 28 2017, @12:42AM (#484961) Journal
                      I don't appreciate your trolling, I was trying to have a serious conversation.
                      --
                      If laughter is the best medicine, who are the best doctors?
            • (Score: 2) by Arik on Monday March 27 2017, @05:53PM

              by Arik (4543) on Monday March 27 2017, @05:53PM (#484742) Journal
              I see the logic and it makes sense on the surface.

              However, in case after case I have seen this backfire. A properly economic evaluation would, I strongly suspect, result in a different balance: yes, some things are obviously going to be done cheap and dirty, but not nearly as much as now, and not everything, which seems to be where we're going.

              So I suspect there are social factors confounding it. The decision makers don't tend to be technical, and I suspect, as others have suggested, that there is a systematic bias among them against paying technical types 'too much.' Even when their productivity justifies it economically, there's psychological opposition to it. Most suits would rather hire 10 "coders" at $10/hour than 2 coders at $50/hour; it just instantaneously strikes them as the obvious best decision, and it's really swimming upstream to convince them to even look at the other possibility. They think of us as commodities; that's the 'management' mindset that's drilled into them in school, and that creates a prejudice that distorts the market.
              --
              If laughter is the best medicine, who are the best doctors?
          • (Score: 2) by Justin Case on Monday March 27 2017, @06:43PM

            by Justin Case (4239) on Monday March 27 2017, @06:43PM (#484770) Journal

            we *do* know how to make highly reliable and secure software - start by defining specifications

            How do you know that the specifications are correct?

      • (Score: 0) by Anonymous Coward on Tuesday March 28 2017, @08:50PM

        by Anonymous Coward on Tuesday March 28 2017, @08:50PM (#485467)

        > Can you imagine a world where every vehicle is autonomous?

        It's easier to imagine a world in which many vehicles are centrally monitored.

    • (Score: 1, Interesting) by Anonymous Coward on Monday March 27 2017, @03:18PM (8 children)

      by Anonymous Coward on Monday March 27 2017, @03:18PM (#484643)

      > The Uber, you see, expected other drivers would play by the rules, ...

      Are you certain about this? I have no inside information about Uber, but if I were designing self-driving software, I would certainly scan cross-street traffic before entering intersections, even when the light was green for me.

      I was able to try a demo of the Mercedes driver assist (hands-off the wheel for up to ~5 seconds) at a major car show. It was constantly checking for threats (and pedestrians) that looked like they might enter my lane from the side. They set up a full sized S-class car parked behind a semi-circular screen. The active suspension on the car was actuated to give a sense of motion to the driving simulator and there were multiple scenarios that demoed different aspects of the software.
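
      For illustration only, a toy version of that "scan before entering" idea -- certainly not Uber's or Mercedes' actual logic, with every name and threshold invented:

        # Even on a green light, hold back until predicted cross traffic clears.
        from dataclasses import dataclass

        @dataclass
        class Track:
            distance_m: float   # distance from the intersection along its path
            speed_mps: float    # closing speed toward the intersection

        def safe_to_enter(cross_tracks: list[Track], time_to_clear_s: float = 3.0) -> bool:
            """True only if every tracked cross-street vehicle would take longer
            to reach the intersection than we need to clear it."""
            for t in cross_tracks:
                if t.speed_mps <= 0:          # stopped or moving away
                    continue
                eta_s = t.distance_m / t.speed_mps
                if eta_s < time_to_clear_s:   # it could arrive while we're inside
                    return False
            return True

        # A car 20 m away closing at 15 m/s arrives in ~1.3 s: wait.
        print(safe_to_enter([Track(20.0, 15.0)]))  # False
        print(safe_to_enter([Track(90.0, 15.0)]))  # True (ETA 6 s)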

      • (Score: 2) by Justin Case on Monday March 27 2017, @03:34PM (7 children)

        by Justin Case (4239) on Monday March 27 2017, @03:34PM (#484650) Journal

        if I was designing self-driving software, I would certainly scan cross street traffic

        I think we have a pretty clear example of how that worked out in the present situation.

        But imagine that SDCs were programmed to yield in all circumstances. Humans would soon learn that and come to expect that they could run red lights with impunity, because the SDC will flinch.

        Like it or not, driving is an exercise in betting my life against yours. We both put something on the table when we decide whether to obey traffic laws or not. It is like the game of chicken played out thousands of times every second. Most of the time people know the downside is harsh and the "win" if any is meager. That keeps them in line, not perfectly, but well enough that we still brave the highways.

        A computer has no fear or aggression. How can it ever share the road with people who do?

        • (Score: 2) by DannyB on Monday March 27 2017, @04:01PM (6 children)

          by DannyB (5839) Subscriber Badge on Monday March 27 2017, @04:01PM (#484665) Journal

          Humans would soon learn that and come to expect that they could run red lights with impunity, because the SDC will flinch.

          Idea: self driving cars should recognize other cars' serious traffic violations and automatically report them, along with supporting telemetry, to the local police and news media.

          Drat! I shouldn't have said that here. Now I can't patent it. With rounded corners.
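
          A minimal sketch of what such a report might look like; the fields, the upstream violation detector, and any submission endpoint are all hypothetical:

            # Bundle a detected violation with the telemetry window around it.
            import json, time

            def build_violation_report(plate: str, violation: str,
                                       telemetry_window: list) -> str:
                """Serialize a detected violation plus supporting telemetry."""
                report = {
                    "reported_at": time.time(),
                    "violation": violation,          # e.g. "ran red light"
                    "suspect_plate": plate,          # as read by camera OCR
                    "telemetry": telemetry_window,   # positions/speeds around the event
                }
                return json.dumps(report)

            # Example: two telemetry samples bracketing the event.
            print(build_violation_report(
                "ABC123", "ran red light",
                [{"t": 0.0, "x": 10.2, "v": 19.4}, {"t": 0.5, "x": 19.9, "v": 19.6}]))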

          --
          When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
          • (Score: 2) by Scruffy Beard 2 on Monday March 27 2017, @05:56PM (5 children)

            by Scruffy Beard 2 (6030) on Monday March 27 2017, @05:56PM (#484744)

            And people would never obscure their license plates: that would be illegal!

            • (Score: 2) by DannyB on Monday March 27 2017, @07:12PM (4 children)

              by DannyB (5839) Subscriber Badge on Monday March 27 2017, @07:12PM (#484793) Journal

              If humans obscure their license plates so that they can continue to drive and deliberately violate traffic rules, especially serious ones like running red lights, then that provides huge ammunition to accelerate the process of removing all human-driven vehicles from public roads.

              The humans who want to keep the privilege of manual driving should be aiming to be the safest drivers possible, not treating the fact that self-driving cars will flinch, even when they have the right of way, as an excuse to run red lights.

              The more I think this through the more I like the idea of getting rid of any human driven vehicles -- eventually. Just as at one point it seemed improbable that automobiles would replace horse drawn carts because autos were far from perfected technology.

              --
              When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
              • (Score: 2) by Scruffy Beard 2 on Monday March 27 2017, @07:30PM (3 children)

                by Scruffy Beard 2 (6030) on Monday March 27 2017, @07:30PM (#484816)

                My current vehicle (bicycle) is pointless if retrofitted to remove the driver.

                • (Score: 2) by DannyB on Monday March 27 2017, @09:29PM (1 child)

                  by DannyB (5839) Subscriber Badge on Monday March 27 2017, @09:29PM (#484892) Journal

                  In a post self driving car world, things could get a lot better for pedestrians and bicyclists.

                  In a 100% self driving car world we don't need stop signs or traffic signals. Cars going multiple directions could use the intersection at the same time if they can coordinate their efforts.

                  But wait! Pedestrians and bicyclists. First thought: the "walk" button at crosswalks would remain for pedestrians and bicyclists. Otherwise, the traffic lights could be green in multiple directions at once. But what about stop signs? It would be a pain for cars to have to needlessly stop at stop signs.

                  What if crosswalks could recognize the approach of pedestrians, bicycles, strollers, etc., and adjust the traffic of the self-driving vehicles accordingly?
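
                  As a toy sketch of that kind of coordination (in the spirit of academic "autonomous intersection management" proposals, not any deployed protocol), vehicles and pedestrians alike could reserve the path cells they will occupy for a time window:

                    # Conflicting reservations wait; a crosswalk press is just
                    # another reservation request. Everything here is invented.
                    class IntersectionManager:
                        def __init__(self):
                            self.reservations = []  # (start_s, end_s, path_cells)

                        def request(self, start_s, end_s, cells):
                            """Grant the window unless it overlaps a reservation
                            sharing a conflicting path cell."""
                            for (s, e, c) in self.reservations:
                                if start_s < e and s < end_s and cells & c:
                                    return False  # conflict: retry later
                            self.reservations.append((start_s, end_s, cells))
                            return True

                    mgr = IntersectionManager()
                    print(mgr.request(0.0, 2.0, {"NS-lane", "crosswalk-E"}))  # True
                    print(mgr.request(1.0, 3.0, {"EW-right-turn"}))           # True: no shared cell
                    print(mgr.request(1.5, 2.5, {"crosswalk-E"}))             # False: must wait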

                  --
                  When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
                  • (Score: 2) by Scruffy Beard 2 on Monday March 27 2017, @09:36PM

                    by Scruffy Beard 2 (6030) on Monday March 27 2017, @09:36PM (#484898)

                    There is a school of thought that bicycles should emulate (slow moving) vehicles as closely as practical.

                    I have occasionally treated a red light that fails to trigger as a 4-way stop (inoperative signal), rather than lean over, push the button, and wait up to 124 seconds.

                    (You are only supposed to have to wait up to 120 seconds, but the light cycle is another 4 seconds -- timed it once when sceptical of a sign claiming the wait was only 120s)

                • (Score: 0) by Anonymous Coward on Tuesday March 28 2017, @01:50AM

                  by Anonymous Coward on Tuesday March 28 2017, @01:50AM (#484996)

                  > My current vehicle (bicycle) is pointless if retrofitted to remove the driver.

                  Well, it might count as art? https://figshare.com/articles/Towards_a_maximally-robust_self-balancing_robotic_bicycle_without_reaction-moment_gyroscopes_nor_reaction_wheels/3976596 [figshare.com] Full pdf for download. Describes an autonomous electric bicycle that may eventually roll around the Cornell University campus...just because it can!

    • (Score: 2) by PiMuNu on Monday March 27 2017, @03:39PM (3 children)

      by PiMuNu (3823) on Monday March 27 2017, @03:39PM (#484652)

      The good news is, self-driving cars have excellent telemetry so they can replay the accident and decide whose fault it was. Uber might even be able to publish the telemetry (not sure legal status there) in order to show that it was the other car's fault, which doubtless it was.

      • (Score: 2) by DannyB on Monday March 27 2017, @04:23PM (2 children)

        by DannyB (5839) Subscriber Badge on Monday March 27 2017, @04:23PM (#484675) Journal

        Legal status of publishing telemetry will be no obstacle if a corporation can make profit from it. Just start a new reality tv show: Beware of Human Drivers!

        --
        When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
        • (Score: 2) by bob_super on Monday March 27 2017, @04:30PM (1 child)

          by bob_super (1357) on Monday March 27 2017, @04:30PM (#484678)

          > Just start a new reality tv show: Beware of Human Drivers!

          YouTube's got that cornered, under the heading "Russian Dashcam".

          • (Score: 2) by DannyB on Monday March 27 2017, @04:47PM

            by DannyB (5839) Subscriber Badge on Monday March 27 2017, @04:47PM (#484694) Journal

            They are quite something to watch. But there are far fewer American Dashcam videos. That fact alone says a lot. People invest in the dashcams because they believe they need the evidence.

            Imagine if everyone had a dashcam++ built in to their car.

            Oh, imagine if video were captured by every self-driving car, even in a traffic stop. There would be more telemetry-cam footage than body-worn-camera footage. Never mind why a self-driving car would be pulled over: "because you were leaving a known drug area".

            --
            When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 2) by DannyB on Monday March 27 2017, @03:57PM

      by DannyB (5839) Subscriber Badge on Monday March 27 2017, @03:57PM (#484662) Journal

      A self driving car is still just an algorithm: a GPS in control of a steering wheel. What could possibly go wrong?

      It is also statistical classifiers that decide: right now, I think that blob is highly likely to be a pedestrian.

      What happens when a bus stop gets misclassified as a sheltered parking spot?
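
      A minimal sketch of that failure mode: the classifier only ever emits scores, and a thresholding rule quietly turns the top score into a "fact." The labels and numbers below are invented:

        def classify(scores: dict, threshold: float = 0.8) -> str:
            """Pick the top label if it clears the threshold, else punt."""
            label, score = max(scores.items(), key=lambda kv: kv[1])
            return label if score >= threshold else "unknown-slow-down"

        # An ambiguous blob that *looks* 82% like a sheltered parking spot
        # clears the threshold, and the wrong answer becomes "the truth."
        print(classify({"sheltered parking": 0.82, "bus stop": 0.18}))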

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 2) by driven on Monday March 27 2017, @04:37PM (1 child)

      by driven (6295) on Monday March 27 2017, @04:37PM (#484684)

      Some scenarios that I doubt an automated car would handle in time:

      - avalanche/mud slide
      - falling rocks
      - emergency landing of a small aircraft on the road

      All of these scenarios are potentially something you can see happening in the distance, giving you enough time to react.
      By the time a self-driving car notices, would it be too late?

      How about situations where you have to get through something bad to avoid something worse? Would the car just stop and do nothing?

      Seems that paying attention to the road even with a fully self-driving car is a good idea and that manual controls will always be necessary. At some point, however, I suspect that pressure for cost savings will result in cars without manual controls.

      • (Score: 2, Funny) by Scruffy Beard 2 on Monday March 27 2017, @05:59PM

        by Scruffy Beard 2 (6030) on Monday March 27 2017, @05:59PM (#484746)

        Or maybe all of those classic racing simulations will be correct in that you can drive with the arrow keys :)

    • (Score: 1) by khallow on Monday March 27 2017, @06:52PM

      by khallow (3766) Subscriber Badge on Monday March 27 2017, @06:52PM (#484775) Journal

      The Uber, you see, expected other drivers would play by the rules, and all you need is one cheater to ruin it for everyone.

      And your evidence for this is? While Uber does plenty of things wrong (one of which probably is a very high self-driving error rate compared to a competitor like Google), one can't magically compensate via algorithm or skilled driver for all the possible ways other drivers can kill. Just because there is an accident doesn't mean it's some glaring oversight on the part of Uber.

      Further, "cheating" in this case is its own cure. There's only so much cheating a driver can do before one is dead or without a license and vehicle.

  • (Score: 0) by Anonymous Coward on Monday March 27 2017, @03:44PM (1 child)

    by Anonymous Coward on Monday March 27 2017, @03:44PM (#484654)

    https://techcrunch.com/2017/03/16/uber-recode-leaked-autonomous-vehicle-data/ [techcrunch.com]

    This report on Uber's progress isn't too encouraging. It seems like the different kinds of failures are varying somewhat at random, not coming down steadily as one would hope for software under development --

    Uber’s self driving fleet, spread across Pennsylvania, California and Arizona, is driving more miles than ever, but its vehicles aren’t improving in a steady way on measurements of rider experience. Uber breaks this variable down into a few different data streams: how many miles a car makes it before a human takes over for any reason which it calls “miles per intervention,” how many miles a car goes before a “critical” driver takeover (to avoid harm or damage) and how many miles a car goes before a “bad experience,” a measure of overall ride smoothness that is less focused on safety.

    By the miles per intervention measure, Uber’s fleet isn’t doing so hot. In January, an Uber autonomous vehicle could drive .9 miles before a driver takeover. By February, that number had inched up to one full mile before dropping down again to .71 miles. As of last week it was .8 miles.

    When it comes to measures of critical interventions — the scary, accident-avoiding ones — Uber’s metrics are trending upward, albeit erratically. At the start of February, an autonomous vehicle could make it 125 miles without a critical intervention, but the following week that number dipped down to 50 miles. By the third week in February it shot back up to 160 miles before dipping to 115 again the following week. At the last measure, taken the week of March 8, it was up to 196 miles.

    By measures of “bad experiences” like hard stops and jerky driving, the fleet is getting worse. In mid-January, Uber self-driving cars averaged 4.5 miles before a bad experience, but by the next month that had dropped down to 2 miles, where the number remained into the first week of March.

    While I'm not trained in statistics, this kind of erratic performance looks a lot like the sample is too small, or the sampling period is too short. As has been posted here before, the required mileage to truly "prove out" an AI driver is enormous -- many times greater than combined efforts of all the various companies engaged in this research.
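
    As a back-of-envelope sketch of why the weekly numbers could bounce so much, treat critical interventions as roughly Poisson; the counts below are invented but of the same order as the quoted figures:

      import math

      def miles_per_intervention_range(events: int, miles: float):
          """Approximate 95% bounds on miles-per-intervention, from the
          usual k +/- 1.96*sqrt(k) interval on a Poisson count."""
          low_count = max(events - 1.96 * math.sqrt(events), 1e-9)
          high_count = events + 1.96 * math.sqrt(events)
          return miles / high_count, miles / low_count  # bounds flip

      # Say 2000 fleet-miles in a week with 16 critical interventions:
      # the point estimate is 125 mi/intervention, but the plausible
      # range is roughly 84-245, so week-to-week swings are expected.
      lo, hi = miles_per_intervention_range(16, 2000.0)
      print(f"{2000/16:.0f} mi/intervention, 95% range ~{lo:.0f}-{hi:.0f}")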

    • (Score: 1, Interesting) by Anonymous Coward on Monday March 27 2017, @04:46PM

      by Anonymous Coward on Monday March 27 2017, @04:46PM (#484692)

      As has been posted here before, the required mileage to truly "prove out" an AI driver is enormous -- many times greater than combined efforts of all the various companies engaged in this research.

      I wonder if they could mitigate that by putting telemetry on a whole bunch of human-driven cars and then just running the data through the AI's trainer. Obviously it's not the same as having the AI in control, and you might end up training the AI with drivers' bad habits too. But maybe human-assisted classification to excise the worst instances would help, although that would be a boring AF job unless they figured out how to make it a video game or something.
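
      A hypothetical sketch of that filtering step, with invented features and limits (a real pipeline would be far more involved):

        # Drop telemetry segments showing "bad habits" before they reach
        # the trainer; keep only clean driving as imitation targets.
        def is_clean_segment(seg: dict) -> bool:
            return (seg["max_decel_g"] < 0.4          # no hard braking
                    and seg["max_lateral_g"] < 0.3    # no jerky swerves
                    and seg["mph_over_limit"] <= 0)   # no speeding

        segments = [
            {"max_decel_g": 0.2, "max_lateral_g": 0.1, "mph_over_limit": 0},
            {"max_decel_g": 0.9, "max_lateral_g": 0.2, "mph_over_limit": 12},
        ]
        training_set = [s for s in segments if is_clean_segment(s)]
        print(len(training_set))  # 1: the hard-braking speeder was excised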

  • (Score: 2) by DannyB on Monday March 27 2017, @03:53PM (2 children)

    by DannyB (5839) Subscriber Badge on Monday March 27 2017, @03:53PM (#484659) Journal

    Since Uber does not use aircraft, does that mean that nothing has really changed?

    Well, I suppose one thing. Uber vehicles should remain firmly on the ground. No crashing with such force to result in a vehicle flying through the air.

    --
    When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
    • (Score: 2) by tfried on Monday March 27 2017, @06:30PM (1 child)

      by tfried (5534) on Monday March 27 2017, @06:30PM (#484763)

      We're talking about Uber. Should they ever come down to earth, it'll be big news.

      • (Score: 2) by DannyB on Monday March 27 2017, @07:14PM

        by DannyB (5839) Subscriber Badge on Monday March 27 2017, @07:14PM (#484799) Journal

        Depends on where it comes back down to earth. Uber vehicle crashes with enough force to go airborne. It lands on a crowded bus shelter.

        --
        When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
  • (Score: 3, Informative) by DannyB on Monday March 27 2017, @06:04PM

    by DannyB (5839) Subscriber Badge on Monday March 27 2017, @06:04PM (#484748) Journal

    Uber resumes self-driving car tests following crash [engadget.com]

    The ridesharing firm tells Engadget that it's "resuming our development operations" in San Francisco as of this morning -- you should see test cars back on the streets very shortly. The Arizona and Pittsburgh cars are still idle as of this writing, but they're expected to go back into service soon.

    and . . .

    The company hasn't said exactly why it's resuming so quickly.

    A P5 telepath said that company executives were thinking: Money

    --
    When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
  • (Score: 3, Insightful) by mendax on Monday March 27 2017, @06:57PM (6 children)

    by mendax (2840) on Monday March 27 2017, @06:57PM (#484781)

    I have a CDL although I don't use it. One thing I learned while getting it is that when there is an accident, the commercial driver is considered almost always at fault, even if the truck was where it was supposed to be, even if it wasn't moving. The reason for this is because the commercial driver is held to a higher standard than the other guy. My question is should we not hold the AI to a higher standard in self-driving cars? This means that even though the human driver failed to yield, the AI should have anticipated the situation and avoided it. It's a good question, one I hope the idiot driver who hit the Uber car will ask his insurance company.

    --
    It's really quite a simple choice: Life, Death, or Los Angeles.
    • (Score: 2) by DannyB on Monday March 27 2017, @07:20PM (1 child)

      by DannyB (5839) Subscriber Badge on Monday March 27 2017, @07:20PM (#484805) Journal

      Statistically, self-driving cars will be safer than human drivers overall. So they already will be held to a higher standard. Once they are a reality.

      Thinking that self-driving cars should be automatically responsible for an accident is wrong. You can't build self-driving cars that can avoid every possible accident that a human driver can deliberately try to cause. It's like asking to auto-filter trolls from SN. Blaming the self-driving car seems especially wrong when the self-driver will probably have a ton of telemetry that proves the human driver at fault. It's like saying, I can prove X, but the legislators think they know better. Oh, wait.

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
      • (Score: 2) by c0lo on Monday March 27 2017, @11:17PM

        by c0lo (156) Subscriber Badge on Monday March 27 2017, @11:17PM (#484934) Journal

        Statistically, self driving cars will be safer than human drivers.
        ...
        Thinking that self driving cars should be automatically responsible for an accident is wrong.

        Really?
        If this isn't the most obvious "beg the question" example I don't know what is.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 2) by Scruffy Beard 2 on Monday March 27 2017, @09:47PM (3 children)

      by Scruffy Beard 2 (6030) on Monday March 27 2017, @09:47PM (#484901)

      One thing I learned while getting it is that when there is an accident, the commercial driver is considered almost always at fault, even if the truck was where it was supposed to be, even if it wasn't moving. The reason for this is because the commercial driver is held to a higher standard than the other guy.

      Is this actually true, and not an exaggeration of the physical realities when applying "4 wheeler" logic to big trucks?

      For example: if you rear-end somebody, you are automatically considered at fault (for following too close). In a standard passenger car, that makes sense. However, if a 4-wheeler cuts off a fully loaded semi at a stop-light, there is not much the driver of the truck can do.

      • (Score: 2) by Nobuddy on Monday March 27 2017, @10:52PM (1 child)

        by Nobuddy (1626) on Monday March 27 2017, @10:52PM (#484926)

        if you rear-end somebody, you are automatically considered at fault (for following too close).

        Common misconception. In most states, this is not at all automatic. Most times the one who rear-ended the car in front is held at fault -- but that is because they were at fault. If the lead car was determined to be the cause (such as a sudden lane-change and stop), they are cited.

        • (Score: 2) by Arik on Tuesday March 28 2017, @12:52AM

          by Arik (4543) on Tuesday March 28 2017, @12:52AM (#484968) Journal
          No, it's not really a misconception, or no more so than your correction. If you rear-end somebody, you are at fault; that's a pretty solid statement, and I'm not aware of any exceptions to it at all. It's your responsibility as a driver to keep sufficient following distance and engage your brakes when required, and it doesn't matter what the car in front of you does: if you hit it, you're at fault in the collision.

          The thing is, it's not necessarily ALL your fault. It's quite possible that the car you hit did something wrong, and dangerous, and you would not have hit them otherwise. YOU ARE STILL AT FAULT. But you might be ruled e.g. 70% at fault rather than 100%. This is normally only important if you are being sued, but it's very very important then. Depending on the state, showing the other driver was ALSO at fault to some degree can either (in a few states) prevent them from suing you for the accident at all or (in more states) limit them to suing only for a portion of the amount they otherwise would. For instance, if there were $10k in damages, and you were 70% at fault, the other driver could only sue for $7k in expenses, rather than $10k.
          --
          If laughter is the best medicine, who are the best doctors?
      • (Score: 2) by mendax on Monday March 27 2017, @11:23PM

        by mendax (2840) on Monday March 27 2017, @11:23PM (#484937)

        Is this actually true, and not an exaggeration of the physical realities when applying "4 wheeler" logic to big trucks?

        This is what I learned while getting my CDL, although it is not on the DMV exam or in the vehicle code. And from my own experience with a pickup truck vs. big rig accident in which I was at least partially at fault, there was no question whose insurance company was going to pay to fix my pickup: the big rig's.

        Incidentally, this does not mean that the big rig driver will automatically get a ticket just because he's considered at fault for insurance purposes; he has to be factually at fault to be cited.

        Another bit of information about truckers. If the cops are called, the trucker is going to be given a drug test ASAP. If there is anything in his blood stream, even if it is under the legal limit, he can be charged with a DUI. There is no tolerance for drugs or alcohol among professional drivers. None at all.

        --
        It's really quite a simple choice: Life, Death, or Los Angeles.