
SoylentNews is people

posted by cmn32480 on Monday March 27 2017, @01:51PM   Printer-friendly
from the it-is-everybody-else-you-have-to-watch-out-for dept.

More bad news for Uber: one of the ride-hailing giant's self-driving Volvo SUVs has been involved in a crash in Arizona — apparently leaving the vehicle flipped onto its side, and with damage to at least two other human-driven cars in the vicinity.

The aftermath of the accident is pictured in photos and a video posted to Twitter by a user of @FrescoNews, a service for selling content to news outlets. According to the company's tweets, the collision happened in Tempe, Arizona, and no injuries have yet been reported.

Uber has also confirmed the accident and the veracity of the photos to Bloomberg. We've reached out to the company with questions and will update this story with any response. Update: Uber has now provided us with the following statement: "We are continuing to look into this incident and can confirm we had no backseat passengers in the vehicle."

TechCrunch understands Uber's self-driving fleet in Arizona has been grounded, following the incident, while an investigation is undertaken. The company has confirmed the vehicle involved in the incident was in self-driving mode. We're told no one was seriously injured.

Local newspaper reports suggest another car failed to yield to Uber's SUV, hitting it and resulting in the autonomous vehicle flipping onto its side. Presumably the Uber driver was unable to take over the controls in time to prevent the accident.

Source: TechCrunch


Original Submission

 
  • (Score: 3, Insightful) by Rivenaleem on Monday March 27 2017, @03:10PM (23 children)

    by Rivenaleem (3400) on Monday March 27 2017, @03:10PM (#484635)

    The other car was piloted by a 100% human driver who was unable to avoid the collision and, in fact, may have been the cause of it in the first place! Had the other car been computer-controlled, would the accident have occurred at all?

    You are completely correct, all you need is one cheater to ruin it for everyone. The obvious solution is to remove the cheaters from the equation. The sooner 100% of cars are computer controlled, the sooner we see a drop in collisions caused by human error.

    The only reason we need defensive driving is because there are people out there who shouldn't be allowed to drive. Can you imagine a world where every vehicle is autonomous? Do you fully comprehend just how fast they will be capable of driving? Speed limits will be a thing of the past, limited only by A) the quality of the vehicle and road surface, and B) the desire of the occupant for a comfortable ride.

    I'd much sooner trust a validated algorithm in control of a steering wheel than 90% of human road users.

  • (Score: 3, Insightful) by AthanasiusKircher on Monday March 27 2017, @03:23PM (1 child)

    by AthanasiusKircher (5291) on Monday March 27 2017, @03:23PM (#484644) Journal

    I'd much sooner trust a validated algorithm in control of a steering wheel than 90% of human road users.

    Yes, but the problem is the accuracy of the algorithm. Computers can do really well within expected parameters, which is kind of the point here. But even without the hazards of other drivers, roads can still present all sorts of other hazards that aren't "well-behaved" -- pedestrians, cyclists, children, animals, random debris, weather-related hazards, unexpected shifts due to construction zones, disabled vehicles, failure in traffic signals...

    Etc., etc., etc., etc., etc., etc. (Yes, it really needs THAT many et ceteras.)

    That's my question about AIs for autonomous vehicles. I have no doubt that computers already can do better than the average human on a stretch of clear highway, because computers don't get bored and stop paying attention during "boring driving conditions." I also think AI cars likely also would significantly improve conditions in heavy traffic by behaving more rationally and consistently. The issue, however, is when things get "interesting," in that 1% of time on the road.

    Having a computer that doesn't get bored or drunk or even nod off or whatever will undoubtedly prevent many accidents. But those ideal driving conditions aren't the situations where human judgment is generally useful.

    • (Score: 2) by darkfeline on Tuesday March 28 2017, @03:17AM

      by darkfeline (1030) on Tuesday March 28 2017, @03:17AM (#485015) Homepage

      >Computers can do really well within expected parameters

      Humans do really well within expected parameters. Outside of those parameters, not so much. I'm confident that:

      1. There exist less-than-ideal situations where a computer drives better than a human.
      2. There exist less-than-ideal situations where a human drives better than a computer.
      3. The situations in (1) vastly outnumber those in (2).

      Sure, in some contrived situation that may only have happened due to a sequence of errors on the driver's part to begin with, a human driver could avoid a collision that a computer could not, but if that human then goes on to get into one collision per year on average (or whatever the current rate is), I'd rather have the computer crash the car just this one time, thanks.

      --
      Join the SDF Public Access UNIX System today!
  • (Score: 5, Insightful) by Justin Case on Monday March 27 2017, @03:26PM (19 children)

    by Justin Case (4239) on Monday March 27 2017, @03:26PM (#484648) Journal

    The obvious impossible solution is to remove the cheaters from the equation.

    Yeah and if everyone on the Internet would just behave, it would be a safe place too.

    Even if we had a world of 100% self-driving cars (we won't), self-driving software that never crashes would be the first software in history that doesn't.

    Nobody knows how to write reliable software. It is an unsolved problem. Hell, it is hard enough verifying whether the software you are getting from someone else is trustworthy: do you trust the writer, did they really write it, and has anyone tampered with it since?
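
    To make the tamper-check concern concrete, here is a minimal sketch of the one part of that list that is actually mechanizable: checking that software you received matches a checksum the author published (it assumes a trustworthy channel for the published SHA-256 digest, which is itself the hard part):

    ```python
    import hashlib

    def verify_sha256(path: str, expected_hex: str) -> bool:
        """Return True if the file's SHA-256 digest matches the published checksum."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            # Read in chunks so large binaries don't need to fit in memory
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest() == expected_hex.lower()
    ```

    Note this only proves the bytes are the bytes the author shipped; it says nothing about whether the author's code is trustworthy in the first place, which is the poster's real point.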

    I'm not happy to trust careless market-share and deadline-driven software development teams with my life. Yes you may say I am forced to. I'm still not happy about it.

    • (Score: 2) by DannyB on Monday March 27 2017, @04:12PM (6 children)

      by DannyB (5839) Subscriber Badge on Monday March 27 2017, @04:12PM (#484668) Journal

      I think we will have 100% self driving cars eventually. There was probably a time when people said we would never have 100% of these new fangled auto mobile thingies. After all, autos are unreliable. Difficult to start. You can even break your arm if it backfires while you are cranking. They are noisy. Smelly. And worst of all, they frighten the horses. Nope, automobiles are not the future.

      As for reliable software, I think self-driving software might use practices more like avionics software. People routinely trust their lives to that. How many times have you been on a plane where software did the landing rather than the pilot? Pilots are even losing skills due to automation.

      Back in the 90's I read an article about the development of the space shuttle flight software. It was a team of ordinary middle-aged folks. No young rock stars. Everything was carefully planned, right down to what lines of code you were going to write before you actually sat down to write them. Their process was quite different from deadline-driven, lowest-possible-cost, just-ship-it software development. They made only one product, the shuttle flight software. It had only 17 bugs in the entire life of the program (as of when the article was written). When you calculated the cost per line of code, it worked out to $35,000 per line. I don't think most web development shops can afford that. But when your goal is life-and-death reliability, your process is different. The software only has to be written once. Not once per vehicle.
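
      For a feel of how a per-line figure like that arises, here is the arithmetic with round, purely hypothetical numbers (the real shuttle budget and line counts are not in the comment, so these inputs are illustrative only):

      ```python
      # Hypothetical inputs, chosen only to illustrate the cost-per-line arithmetic
      annual_budget = 35_000_000   # dollars per year of careful, verified development
      years = 5                    # duration of the effort
      lines_shipped = 5_000        # new lines of flight-critical code delivered

      cost_per_line = annual_budget * years / lines_shipped
      print(f"${cost_per_line:,.0f} per line")  # → $35,000 per line
      ```

      The point of the exercise: the denominator is tiny because almost all the money goes into planning, review, and verification rather than typing, which is exactly the trade-off the comment describes.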

      --
      The lower I set my standards the more accomplishments I have.
      • (Score: 2) by bob_super on Monday March 27 2017, @04:37PM (2 children)

        by bob_super (1357) on Monday March 27 2017, @04:37PM (#484686)

        But... Gubmint Bad Inefficient Wasteful!
        Let them crash and the market will short the survivors!

        It actually cracks me up (sadly) that Uber, and many similar Silicon Valley shops, treat regulation with such contempt in the rush for profit, that they become a poster child for the need for regulation. Evil Anti-regulation Libruls!

        • (Score: 0) by Anonymous Coward on Monday March 27 2017, @05:20PM

          by Anonymous Coward on Monday March 27 2017, @05:20PM (#484718)

          Heh yup! But they're so smart, no need to regulate people who totally know what they're doing ;)

          This coming from a west coast liberal.

        • (Score: 2) by bootsy on Monday March 27 2017, @05:36PM

          by bootsy (3440) on Monday March 27 2017, @05:36PM (#484728)

          A lot of these tech firms are really just finding ways to avoid existing laws and regulation. Uber pretends not to be a taxi service and also avoids employment laws.
          AirBnB avoids rental laws and hotel laws.
          Amazon in the US did well because you could order out of state and avoid sales tax. Although other businesses use this as well, Amazon has done well out of it.

      • (Score: 2) by AthanasiusKircher on Monday March 27 2017, @04:43PM (2 children)

        by AthanasiusKircher (5291) on Monday March 27 2017, @04:43PM (#484687) Journal

        I think we will have 100% self driving cars eventually. There was probably a time when people said we would never have 100% of these new fangled auto mobile thingies.

        Except we DON'T have 100% of those "new fangled automobile thingies." It's still legal to walk, ride bicycles, scooters, etc. on most public roads (generally excepting only major highways). It's frequently still legal to ride horses or use horse-pulled buggies, carriages, etc. on many public roads, particularly in rural areas.

        The OP in this thread was suggesting something that sounded more like actively banning human-driven vehicles from public roads. Maybe there will be significant attrition of human-driven vehicles over time, but it's not going to happen overnight. I expect it will be at least a couple of decades during which autonomous cars will have to share space with human-driven vehicles, and in that long transition, AI cars will need to deal with bad human drivers as a matter of course.

        The alternative would be to create AI-only roads, like we have highways that prohibit pedestrians, bicycles, horses, etc. That would be expensive. But those highways were generally built as new alternatives for faster transport, not created by banning horses on existing roads.

        So your historical analogy seems to fail a bit -- this is a different situation if we really want to segregate AI cars on public roads.

        • (Score: 2) by AthanasiusKircher on Monday March 27 2017, @05:04PM (1 child)

          by AthanasiusKircher (5291) on Monday March 27 2017, @05:04PM (#484709) Journal

          By the way, one thing that might help in this transition would be more clearly marked AI vs. human-driven vehicles... somewhat akin to the rules for horses, e.g., buggies generally need to have markings indicating a slow-moving vehicle, or my personal favorite -- this New Jersey statute [uvm.edu]:

          No person shall drive a horse attached to a sleigh or sled on a highway unless there are a sufficient number of bells attached to the horse's harness to give warning of its approach.

          Which raises the question: How many bells are "sufficient"?

          Anyhow, I expect that within a decade, human-driven vehicles will be similarly distinguished in some manner from AI vehicles, so the AI cars know to "LOOK OUT!" when around them.

          • (Score: 1, Touché) by Anonymous Coward on Tuesday March 28 2017, @01:37AM

            by Anonymous Coward on Tuesday March 28 2017, @01:37AM (#484989)

            Warning: insufficient bells detected. Needs more cowbell.

    • (Score: 3, Funny) by DannyB on Monday March 27 2017, @04:19PM

      by DannyB (5839) Subscriber Badge on Monday March 27 2017, @04:19PM (#484672) Journal

      Oh, one more thing. To have a safe internet, get rid of human participants. 100% chat bots will make the internet safe.

      --
      The lower I set my standards the more accomplishments I have.
    • (Score: 0) by Anonymous Coward on Monday March 27 2017, @04:23PM

      by Anonymous Coward on Monday March 27 2017, @04:23PM (#484674)

      Just wait until a company offers you 5 bucks to turn off a needed safety feature, giving them a nice benefit and you a more easily hacked car.

    • (Score: 5, Informative) by Arik on Monday March 27 2017, @04:31PM (9 children)

      by Arik (4543) on Monday March 27 2017, @04:31PM (#484679) Journal
      "Nobody knows how to write reliable software. It is an unsolved problem."

      Eh, the basics are definitely known; the problem is the market rarely rewards that sort of development, so it gets less funding, less press, less exposure, etc. The reason we've become accustomed to the idea that software is buggy is that the market is oriented towards producing something marketable quickly and then patching bugs as necessary later. This seems to be mostly due to social, rather than technical or even properly economic, factors, but that's another subject really.

      The fact is that we *do* know how to make highly reliable and secure software: start by defining specifications, functions, and structures very, very carefully; get as much technical criticism as you can at every point, especially at the design stage; lock marketing out of the room; use something like Ada rather than whatever is currently fashionable; and test the fsck out of it at every stage. It's not impossible, it's just slow and expensive, and in danger of becoming a lost art.
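
      The "define everything carefully" discipline the comment points to is roughly design-by-contract, which Ada 2012 supports natively via `Pre`/`Post` aspects. As a rough sketch of the same idea (using Python asserts in place of Ada contracts, and an invented `set_speed` controller purely for illustration):

      ```python
      MAX_DELTA = 5.0  # maximum permitted speed change per control tick, in km/h

      def set_speed(current_kph: float, target_kph: float) -> float:
          """Contract-style controller: inputs outside the spec are rejected
          outright rather than handled by guesswork."""
          # Preconditions: the specification says what inputs are legal
          assert 0.0 <= current_kph <= 300.0, "current speed outside specified range"
          assert 0.0 <= target_kph <= 130.0, "target speed exceeds specified limit"

          # Specified behaviour: change speed by at most MAX_DELTA per tick
          delta = max(-MAX_DELTA, min(MAX_DELTA, target_kph - current_kph))
          new_speed = current_kph + delta

          # Postcondition: the implementation provably stayed within its spec
          assert abs(new_speed - current_kph) <= MAX_DELTA
          return new_speed
      ```

      The contracts don't make the code correct by themselves; they make violations of the written specification fail loudly during the heavy testing phase instead of surfacing as undefined behaviour in the field.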

      There are far deeper issues with the autonomous car idea than simply writing reliable software however. An autonomous car needs to be able to do some things that would appear to be impossible in this universe, no matter how reliable their software could be made. But, again, another subject really.

      --
      If laughter is the best medicine, who are the best doctors?
      • (Score: 2) by AthanasiusKircher on Monday March 27 2017, @04:53PM (7 children)

        by AthanasiusKircher (5291) on Monday March 27 2017, @04:53PM (#484701) Journal

        The reason that we've become accustomed to the idea that software is buggy is because the market is oriented towards producing something marketable quickly, and then patching bugs as necessary later. This seems to be mostly for social, rather than technical or even properly economic, factors, but that's another subject really.

        I'm not really sure what you mean by "social factors" as opposed to "properly economic factors," but my impression is that a lot of these decisions are driven by profit calculations. The longer you wait to release software, the longer it takes for it to start making money. If you have competitors (or potential competitors), that's a longer time for them to make money and cement market share (or perhaps get a competing product or new features out).

        The balancing act always seems to be -- how soon can we release our product to start making money off of it without it being SO buggy as to annoy customers? If you have a huge market share, you seem to stop caring about annoying customers as much and just go for maximum profit (see Microsoft).

        So, it seems it's mostly about economics, no?

        • (Score: 3, Informative) by Scruffy Beard 2 on Monday March 27 2017, @05:53PM (5 children)

          by Scruffy Beard 2 (6030) on Monday March 27 2017, @05:53PM (#484741)

          I suspect that software is now complex enough that carefully designed and validated software will get to market faster than software developed ad hoc.

          The reason is abstraction leakage. Nobody knows modern systems top to bottom. Coding to an interface is hard enough, but when you run into undefined behaviour because you did something the original programmer did not expect, suddenly you are chasing rabbits.

          • (Score: 3, Insightful) by Justin Case on Monday March 27 2017, @06:47PM (4 children)

            by Justin Case (4239) on Monday March 27 2017, @06:47PM (#484773) Journal

            Nobody knows modern systems top-to-bottom.

            Another reason a "modern system" cannot be reliable. You can't even predict what it is going to do under all permutations of inputs.
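
            Back-of-the-envelope arithmetic shows why exhaustive prediction is hopeless even for a trivially small interface, never mind a "modern system":

            ```python
            # A function taking just two 32-bit integers has this many input combinations:
            states = (2 ** 32) ** 2          # = 2**64

            # Even testing a billion combinations per second...
            seconds = states / 1e9
            years = seconds / (60 * 60 * 24 * 365)
            print(f"{years:.0f} years")      # → 585 years of exhaustive testing
            ```

            And that's two integers with no internal state; a real system's state space is astronomically larger, which is why verification has to argue about whole classes of inputs rather than enumerate them.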

            • (Score: 2) by Arik on Monday March 27 2017, @07:12PM (3 children)

              by Arik (4543) on Monday March 27 2017, @07:12PM (#484794) Journal
              "Another reason a "modern system" cannot be reliable."

              So just to be clear, are you saying z/OS isn't "modern" or isn't "reliable?"
              --
              If laughter is the best medicine, who are the best doctors?
              • (Score: 2) by Justin Case on Monday March 27 2017, @07:30PM

                by Justin Case (4239) on Monday March 27 2017, @07:30PM (#484815) Journal

                From Wikipedia [wikipedia.org]:

                IBM releases ... corrections ... (a.k.a. PTFs) for z/OS.... IBM labels critical PTFs as "HIPER" (High Impact PERvasive). IBM also "rolls up" multiple patches into a Recommended Service Update (RSU).

                So yeah, maybe like Microsoft, today's version is perfect, but all those previous versions apparently were not reliable.

              • (Score: 2) by Justin Case on Tuesday March 28 2017, @12:08AM (1 child)

                by Justin Case (4239) on Tuesday March 28 2017, @12:08AM (#484942) Journal

                Really Arik? You made me a foe because I looked up your precious z/OS in Wikipedia, and discovered it wasn't perfect after all?

                • (Score: 2, Funny) by Arik on Tuesday March 28 2017, @12:42AM

                  by Arik (4543) on Tuesday March 28 2017, @12:42AM (#484961) Journal
                  I don't appreciate your trolling, I was trying to have a serious conversation.
                  --
                  If laughter is the best medicine, who are the best doctors?
        • (Score: 2) by Arik on Monday March 27 2017, @05:53PM

          by Arik (4543) on Monday March 27 2017, @05:53PM (#484742) Journal
          I see the logic and it makes sense on the surface.

          However, in case after case I have seen this backfire. A properly economic evaluation would, I strongly suspect, result in a different balance: yes, some things are obviously going to be done cheap and dirty, but not nearly so much as now, and not everything, which seems to be where we're going.

          So I suspect there are social factors confounding it. The decision makers don't tend to be technical, and I suspect as others have suggested that there is a systematic bias among them for paying technical types 'too much.' Even when their productivity justifies it economically, there's psychological opposition to it. Most suits would rather hire 10 "coders" at $10/hour than 2 coders at $50/hour, it just instantaneously strikes them as the obvious best decision, and it's really swimming upstream to convince them to even look at the other possibility. They think of us as commodities, that's the 'management' mindset that's drilled into them in school, and that creates a prejudice that distorts the market.
          --
          If laughter is the best medicine, who are the best doctors?
      • (Score: 2) by Justin Case on Monday March 27 2017, @06:43PM

        by Justin Case (4239) on Monday March 27 2017, @06:43PM (#484770) Journal

        we *do* know how to make highly reliable and secure software - start by defining specifications

        How do you know that the specifications are correct?

  • (Score: 0) by Anonymous Coward on Tuesday March 28 2017, @08:50PM

    by Anonymous Coward on Tuesday March 28 2017, @08:50PM (#485467)

    > Can you imagine a world where every vehicle is autonomous?

    It's easier to imagine a world in which many vehicles are centrally monitored.