
posted by martyb on Monday November 29 2021, @10:10AM
from the how-well-do-people-make-these-decisions? dept.

[NB: The following article makes reference to the oft-cited Trolley Problem. Highly recommended. --martyb/Bytram]

The self-driving trolley problem: How will future AI systems make the most ethical choices for all of us?:

Imagine a future with self-driving cars that are fully autonomous. If everything works as intended, the morning commute will be an opportunity to prepare for the day's meetings, catch up on news, or sit back and relax.

But what if things go wrong? The car approaches a traffic light, but suddenly the brakes fail and the computer has to make a split-second decision. It can swerve into a nearby pole and kill the passenger, or keep going and kill the pedestrian ahead.

The computer controlling the car will only have access to limited information collected through car sensors, and will have to make a decision based on this. As dramatic as this may seem, we're only a few years away from potentially facing such dilemmas.

Autonomous cars will generally provide safer driving, but accidents will be inevitable—especially in the foreseeable future, when these cars will be sharing the roads with human drivers and other road users.

Tesla does not yet produce fully autonomous cars, although it plans to. In collision situations, Tesla cars don't automatically operate or deactivate the Automatic Emergency Braking (AEB) system if a human driver is in control.

In other words, the driver's actions are not disrupted—even if they themselves are causing the collision. Instead, if the car detects a potential collision, it sends alerts to the driver to take action.

In "autopilot" mode, however, the car should automatically brake for pedestrians. Some argue if the car can prevent a collision, then there is a moral obligation for it to override the driver's actions in every scenario. But would we want an autonomous car to make this decision?


Original Submission

  • (Score: 4, Insightful) by bradley13 on Monday November 29 2021, @10:24AM (27 children)

    by bradley13 (3053) on Monday November 29 2021, @10:24AM (#1200465) Homepage Journal

    This will be a genuine problem, and the liability question (who gets sued) will make it more exciting.

    At the same time, a step back: people increasingly feel they have the right to live in a risk-free society. Meanwhile, known risks have faded into the background.

    Suppose you live in a country where 10k people/year die in traffic accidents. That's background noise - no one talks about it, no one gets upset about it, because that's just the way it is. If autonomous cars come out, and kill 5k people per year, the populace will completely flip out. Computers killing people! OMG, Eleventy!! Even though the number of traffic deaths would have been cut in half.

    Just a random thought for a Monday morning, but: how do we deal with this?

    --
    Everyone is somebody else's weirdo.
    • (Score: 1) by shrewdsheep on Monday November 29 2021, @11:18AM

      by shrewdsheep (5215) on Monday November 29 2021, @11:18AM (#1200469)

      To the contrary, this is a non-problem. When starting your car, you will enter the ethical PIN which identifies your choices (or you use the manufacturer's default). However you choose, you are responsible. The liability question is also a non-problem: you are the owner and will be liable. For truly autonomous cars, insurance will be almost free.
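
      For illustration only, a minimal sketch of how such an "ethical PIN" might select a decision profile; the profile names, weights, and PINs below are all invented:

          # Hypothetical sketch: an "ethical PIN" selects the owner's decision profile.
          # All PINs, profile names, and weights are invented for illustration.
          MANUFACTURER_DEFAULT = {"occupant_weight": 1.0, "pedestrian_weight": 1.0}

          PROFILES = {
              "1234": {"occupant_weight": 2.0, "pedestrian_weight": 1.0},  # self-preserving
              "5678": {"occupant_weight": 1.0, "pedestrian_weight": 2.0},  # self-sacrificing
          }

          def profile_for(pin: str) -> dict:
              """Return the owner's chosen profile, or the manufacturer's default."""
              return PROFILES.get(pin, MANUFACTURER_DEFAULT)

      Whichever profile the PIN selects, the owner has made the choice, which is the parent's point about responsibility.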

    • (Score: 1, Troll) by Mockingbird on Monday November 29 2021, @11:52AM (2 children)

      by Mockingbird (15239) on Monday November 29 2021, @11:52AM (#1200476) Journal

      Guns, Bradley, you forgot about guns! They don't kill people, people with guns kill people. AIs don't kill people, either, but AIs with 3000# of steel, well, that is another matter.

      • (Score: 0) by Anonymous Coward on Monday November 29 2021, @01:12PM (1 child)

        by Anonymous Coward on Monday November 29 2021, @01:12PM (#1200495)

        A well regulated militia must include self-driving cars. How are you supposed to keep the militia well-regulated if you can't assign them self-driving vehicles? I think The 2nd covers this just as sufficiently as it covers bear's arms.

    • (Score: 2) by coolgopher on Monday November 29 2021, @12:29PM

      by coolgopher (1157) on Monday November 29 2021, @12:29PM (#1200483)

      Let computers kill people until the problem goes away? 🤔

    • (Score: 5, Insightful) by Thexalon on Monday November 29 2021, @12:36PM (2 children)

      by Thexalon (636) on Monday November 29 2021, @12:36PM (#1200487)

      Suppose you live in a country where 10k people/year die in traffic accidents. That's background noise - no one talks about it, no one gets upset about it, because that's just the way it is.

      So I have to say, I for one find that, shall we say, morally odd. There are other approaches: the Netherlands, for example, investigates traffic accidents the way the US investigates plane crashes, and as a result death and injury by car crash are about 1/5 as common there as in the US.

      The US approach to dealing with this problem is more stringent safety rules on cars. Which definitely helps (I've seen people survive accidents with minor injuries that would have killed them 20 years ago), but only if you're in a car. If you're walking down the street, on a bicycle, or on a motorcycle, this does nothing at best, and arguably gives drivers false confidence that puts you at greater risk. But what they won't do is address unsafe road designs, because that's expensive and impedes idiots' freedom to travel 85 mph down a country road.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 1) by khallow on Monday November 29 2021, @05:35PM (1 child)

        by khallow (3766) Subscriber Badge on Monday November 29 2021, @05:35PM (#1200600) Journal
        It depends on the state; they investigate with varying degrees of diligence. Some states are no fault [wikipedia.org], meaning law enforcement doesn't assign blame in a traffic accident (though you can still be ticketed, sued, jailed, etc.), and insurance can't penalize you for being at fault in such an accident.

        But what they won't do is address unsafe road designs, because that's expensive and impedes idiots' freedom to travel 85 mph down a country road.

        And road damage. Maintenance is much less sexy an issue than new road and bridge construction and it shows in the funding.

        • (Score: 2) by Thexalon on Monday November 29 2021, @07:54PM

          by Thexalon (636) on Monday November 29 2021, @07:54PM (#1200655)

          And road damage. Maintenance is much less sexy an issue than new road and bridge construction and it shows in the funding.

          No argument there.

          I'll also note that road maintenance, at least in my area, has a long history of public corruption. It's the kind of work where it's very easy to hire a politically-connected company that half-asses it at best and kicks some of the money back to whichever public officials it needs to, including the people responsible for hiring it, to ensure nobody gets caught. And the public won't even notice there's a problem at first, because most road problems take a while to develop; if they do notice one, the politician can always claim it's on the job list, but somewhere else is always higher up on the list. And no, this phenomenon isn't limited to any particular political party.

          --
          The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 4, Informative) by helel on Monday November 29 2021, @03:13PM (7 children)

      by helel (2949) on Monday November 29 2021, @03:13PM (#1200533)

      This will be a genuine problem

      No, this is not a genuine problem. Most drivers never find themselves facing a "save myself or save another" scenario*, ever, and in the rare instance when they do, it's because humans take so long to move from information to processing to decision to action. When circumstances change, such as a child jumping into the street, the time it takes an AI to detect the obstacle and apply the brake is an order of magnitude faster than the best human drivers.

      * "Unintentional Injury" accounts for about 5% of deaths, overall. If we assume every one of those occurs in traffic and that every single person chooses to sacrifice themselves then we can assume that all 5% are people who have faced a "save myself or save another" scenario and therefore that 95% have not faced such a scenario.

      • (Score: 0) by Anonymous Coward on Monday November 29 2021, @10:05PM (2 children)

        by Anonymous Coward on Monday November 29 2021, @10:05PM (#1200712)

        > the time it takes an AI to detect the obstacle and apply the brake is an order of magnitude faster than the best human drivers.

        Citation please? I haven't seen anything like this reported. At least at the current state of development I don't think this is true.

        • (Score: 2) by helel on Tuesday November 30 2021, @03:04AM (1 child)

          by helel (2949) on Tuesday November 30 2021, @03:04AM (#1200772)

          Well, it seems I was off for the current crop of experimental AI. "...the reaction time for Pony.ai’s autonomous driving system is around 0.1 second, whereas human reaction time clocks in at between 0.4 to 1.1 seconds" [qz.com], so they'll need to cut that in half for the production model, I guess?

          AI has issues that need to be addressed, but the trolley problem isn't one of them.
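
          Back-of-the-envelope, that reaction-time gap translates directly into stopping distance. A rough sketch; the ~7 m/s^2 dry-road deceleration is my assumption, and all numbers are illustrative:

              # Stopping distance = distance covered during reaction + braking distance.
              # The ~7 m/s^2 deceleration is a rough dry-road assumption.
              def stopping_distance_m(speed_kmh: float, reaction_s: float,
                                      decel: float = 7.0) -> float:
                  v = speed_kmh / 3.6                    # km/h -> m/s
                  return v * reaction_s + v ** 2 / (2 * decel)

              for label, t in [("autonomous (~0.1 s)", 0.1), ("human (~1.0 s)", 1.0)]:
                  print(f"{label}: {stopping_distance_m(50, t):.1f} m at 50 km/h")
              # ~15 m vs ~28 m: the faster reaction alone saves about 12 m at city speed.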

          • (Score: 0) by Anonymous Coward on Tuesday November 30 2021, @09:51PM

            by Anonymous Coward on Tuesday November 30 2021, @09:51PM (#1200990)

            People seem to focus inappropriately on the negative sometimes without considering the likelihood. In this case, there's little to be gained other than being first to market for being irresponsible and a lot to be lost in terms of reputation. There may be a slight uptick in this sort of thing when the systems are put into the hands of normal drivers, but probably not enough to worry about.

            Over the long term, this will likely go the way of airliner crashes, where fewer and fewer have happened over time; we'll probably get to the point, in the relatively near future, where they basically don't happen at all for years at a time.

      • (Score: 2) by legont on Tuesday November 30 2021, @05:53AM (1 child)

        by legont (4179) on Tuesday November 30 2021, @05:53AM (#1200790)

        A few months ago I posted my dash cam video of hard braking for somebody running across the street. I managed to react fast enough and avoid the guy. A friend of mine replied that when his child is in the back seat, he never hard-brakes, period. That's because he could be hit from behind, and that's dangerous for the child. As for the law-breaking pedestrian kids running across, tough luck to them. The cam has all the proof he needs.
        I am sure he will never use a self-driving car.
        Once again, he is ready to kill on the off-chance his child is in danger. This decision is made up front. Nothing to think about. Gentle braking no matter what. I am sure most soccer moms do the same.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
        • (Score: 1, Informative) by Anonymous Coward on Tuesday November 30 2021, @02:20PM

          by Anonymous Coward on Tuesday November 30 2021, @02:20PM (#1200854)

          Your friend is an idiot. Kids are supposed to be in car seats that will protect them if he's hit from behind. What's more, there are times where hard braking might be required such as to prevent a T-bone collision.

      • (Score: 2) by sjames on Tuesday November 30 2021, @05:06PM

        by sjames (2882) on Tuesday November 30 2021, @05:06PM (#1200899) Journal

        Part of it is that we rarely question whether a driver took the optimal approach or simply the immediately obvious one, as long as there are signs they did something in reaction to the sudden obstruction. That's probably for the best: it's a lot easier to come up with the perfect solution sitting at a desk, reviewing the relevant facts with a handy spreadsheet, than it is to do the same in under a second after being surprised. We don't hold a human driver responsible for making the wrong "trolley problem" decision because it isn't clear they even had time to be aware of the option.

        The question is whether an AI needs that much slack, and whether we are willing to cut it any. It may have to make decisions we don't expect of a human driver, simply because it potentially CAN.

      • (Score: 0) by Anonymous Coward on Wednesday December 01 2021, @01:13AM

        by Anonymous Coward on Wednesday December 01 2021, @01:13AM (#1201041)

        I live in Vancouver, Canada.

        I have personally had to make "save myself and risk another's death, or not" decisions while driving twice, in two decades of driving. I'm a very occasional driver though, maybe once a week. Both times I survived, and by chance nobody else was maimed or killed, but I definitely had the "oh shit, well I hope they move" thought both times as I self-preserved.

        We don't get much snow/ice and when that happens drivers are terrible, so I don't include near-collisions in those conditions.

        I expect the numbers are lower rurally.

    • (Score: 3, Interesting) by JoeMerchant on Monday November 29 2021, @03:26PM (2 children)

      by JoeMerchant (3937) on Monday November 29 2021, @03:26PM (#1200542)

      The perpetual conundrum for me is:

      1) We have the world the way it is, with established risk and liability profiles.

      2) We can improve the world by changing these profiles to have lower risk and lower total liability.

      3) What do we do for the people who get screwed in the "Brave new world" where, overall, society is better off, but the new system has either randomly or systematically screwed you in a way that wouldn't have happened in the older, more dangerous world?

      4) Furthermore: how do we stop the beneficiaries of the new system from suppressing all evidence that some (perhaps millions of) people are in fact getting screwed by the new system?

      5) Given the virtual inevitability of 4), what is our threshold of demonstrated benefit (in 2) before we are willing to make the change rather than sticking with the status quo?

      In an oligarchy (such as some "first world" countries are either currently operating or fast becoming), the threshold for change is well below 50% of the population - the change only has to benefit the majority of the people in power, the rest are basically along for the ride.

      --
      🌻🌻 [google.com]
      • (Score: 2) by legont on Tuesday November 30 2021, @05:59AM (1 child)

        by legont (4179) on Tuesday November 30 2021, @05:59AM (#1200791)

        Yep. The AI will kill its passengers so the VIP car can go through smoothly. Even if by some miracle that's not the case, there is no way the population can be assured it's not happening.

        --
        "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
        • (Score: 2) by JoeMerchant on Tuesday November 30 2021, @12:56PM

          by JoeMerchant (3937) on Tuesday November 30 2021, @12:56PM (#1200835)

          Traffic lights in Miami were screwing the commuters to make way for Presidential motorcades in the 1980s... I'm sure they're even more sophisticated now, and I would not be shocked at all if there is some pay-for-access channel to get your VIPs through major cities without waiting for red lights.

          --
          🌻🌻 [google.com]
    • (Score: 3, Interesting) by choose another one on Monday November 29 2021, @03:33PM

      by choose another one (515) Subscriber Badge on Monday November 29 2021, @03:33PM (#1200546)

      It won't really be more exciting - it'll just be accounting.

      Having drivers liable when there is no driver is pretty much a non-starter, so the liability will sit with mfr or mfr's insurer (owners may well have to pay a subscription to fund this long term).

      The car will identify all potential victims using facial recognition and/or cell phone tracking, do the maths as to who will cost least in court, and act as per the result. In the unlikely case that all possible victim sets have the same cost, it doesn't matter, so the car can have free choice.

      In short: it'll kill the poorer people. Expect the cars to eventually acquire self destruct systems to render vehicle and contents into non-lethal sized pieces in the event that the passengers are the poor ones.

    • (Score: 2) by DannyB on Monday November 29 2021, @04:27PM (5 children)

      by DannyB (5839) Subscriber Badge on Monday November 29 2021, @04:27PM (#1200571) Journal

      Suppose you live in a country where 10k people/year die in traffic accidents.

      In the US, there were 36,120 traffic deaths in 2019. [wikipedia.org] A number which seems to be decreasing year after year according to the linked source.

      In the US, there were 345,323 COVID-19 deaths in 2020. [jamanetwork.com]


      Americans dead of COVID-19 this year number approximately 353,000 so far [ny1.com], for a total of about 770k across both years, or more than 1 in 500 Americans dead of COVID-19. You'll note the deaths so far this year are higher than last year's total. (So it MUST clearly be Biden's fault!)

      Relatively speaking, how much societal resources should be put into concerns about number of deaths from these (and other) separate causes?

      but: how do we deal with this?

      How many people will conclude: as long as the self driving AI doesn't pester me about how it makes the ethical decision, I don't care which choice it makes.

      Should self driving vehicles be able to make an ethical decision that it would be better to spare all others at the cost of the vehicle occupant's life?

      --
      To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
      • (Score: 2) by maxwell demon on Monday November 29 2021, @07:18PM (4 children)

        by maxwell demon (1608) on Monday November 29 2021, @07:18PM (#1200636) Journal

        How many people will conclude: as long as the self driving AI doesn't pester me about how it makes the ethical decision, I don't care which choice it makes.

        Those who are driven by self-driving cars will certainly prefer the pedestrian being killed to themselves being killed.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by DannyB on Monday November 29 2021, @07:42PM (2 children)

          by DannyB (5839) Subscriber Badge on Monday November 29 2021, @07:42PM (#1200652) Journal

          If you read those two sentences together, you get (in summary):
          1. some vehicle occupants care nothing for anyone else's life
          2. vehicles should (especially given sentence 1) be able to value the occupant's life less than the lives of those the vehicle might strike

          --
          To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
          • (Score: 2) by maxwell demon on Monday November 29 2021, @11:56PM (1 child)

            by maxwell demon (1608) on Monday November 29 2021, @11:56PM (#1200740) Journal

            Your point 1 doesn't follow. That point would mean those occupants would see no difference between the pedestrian dying and nobody dying. All you can derive from the sentences is that some vehicle occupants value their own life more than anybody else's life. Or more accurately, they value their own life more than the life of a random person.

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 2) by DannyB on Tuesday November 30 2021, @02:42PM

              by DannyB (5839) Subscriber Badge on Tuesday November 30 2021, @02:42PM (#1200861) Journal

              In addition to those you listed, the vehicle occupant may value no lives at all, including their own. The No Lives Matter movement.

              --
              To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
        • (Score: 0) by Anonymous Coward on Tuesday November 30 2021, @09:56PM

          by Anonymous Coward on Tuesday November 30 2021, @09:56PM (#1200991)

          The pedestrian is also the most likely to be killed, as there are essentially no pedestrians on the freeway and relatively few on highways in most areas. The result is that if there is a legit decision of passengers versus pedestrian, it's far more likely that the pedestrians would be the ones to get crushed. The kind of head-on collision that's most likely to result from an AI driver deciding to avoid the pedestrian is also among the most survivable crashes you can have. You'd likely be looking at a head-on collision with a fixed object at relatively low speed. I don't recommend running into a fixed object at 30-40 mph, but the odds of surviving it are pretty good in a modern car.

          As a result, in most cases, the correct decision would be for the car to crash into something rather than to run over the pedestrian, as that would typically maximize the likelihood of everybody involved surviving in good condition.

    • (Score: 0) by Anonymous Coward on Monday November 29 2021, @07:14PM

      by Anonymous Coward on Monday November 29 2021, @07:14PM (#1200634)

      I agree. The self driving trolley problem comes up all the time, but I just don't care at all.

      Self driving cars are going to save tens of thousands of lives every year. They are also going to kill some handful of people. But overall, they should kill far fewer people than the current situation with human drivers. Consequently, we could program the cars to intentionally run over some children every year as a blood sacrifice to the Machine Gods, and we'd still come out way ahead on the number of people being killed in car crashes.

      And when humans do get into a crash, we aren't any good at thinking through the effects of what's happening in a few milliseconds. It's usually just a moment of panic and confusion and then somebody is dead. But we don't expect drivers to make a complex ethical calculus in that moment. Even if we did, we don't expect the driver to be able to reliably control the vehicle during a crash well enough to reach the "optimal" death. Likewise, when a crash happens with an AI car, shit is already going wrong, and I think it's unreasonable to expect that the AI is going to successfully understand the situation, calculate an optimal outcome, and control the car well enough to reach that outcome. If the AI were so good at doing all of those things, we wouldn't expect it to be getting into that crash in the first place! It's broken at that point. It already failed. Put the engineering resources into preventing such failures -- don't waste time and resources focusing on trying to have the most ethical failure mode. That's just like trying to find the cleanest way to mud wrestle a pig. The most ethical solution is just to try to reduce the number of cases where you have to make that ethical calculus on the fly.

      Anything else is wanking by philosophers.

  • (Score: 2, Insightful) by end rant on Monday November 29 2021, @11:01AM (5 children)

    by end rant (15943) on Monday November 29 2021, @11:01AM (#1200468)

    ...and the computer has to make a split-second decision.

    Who says?

    The trolleys and trains that I've seen just plow into everything in their way.

    • (Score: 2) by Runaway1956 on Monday November 29 2021, @02:47PM (3 children)

      by Runaway1956 (2926) Subscriber Badge on Monday November 29 2021, @02:47PM (#1200525) Journal

      The computer probably decided before it left the garage how many people it would kill today.

      • (Score: 2) by DannyB on Monday November 29 2021, @04:34PM (2 children)

        by DannyB (5839) Subscriber Badge on Monday November 29 2021, @04:34PM (#1200574) Journal

        The computer cannot know before the journey how many targets there will be along the way.

        Pedestrians also have various difficulty levels to achieve a successful collision. A point system could be devised where vehicles cooperate to vote each pedestrian a different point value. eg, that person is 5 points, that other one 8 points, etc.

        --
        To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
        • (Score: 2) by Runaway1956 on Monday November 29 2021, @08:28PM (1 child)

          by Runaway1956 (2926) Subscriber Badge on Monday November 29 2021, @08:28PM (#1200673) Journal

          To continue my sarcasm from above: What if artificial intelligences are omniscient? Then they could easily decide which people to run over!

          • (Score: 3, Funny) by DannyB on Monday November 29 2021, @08:58PM

            by DannyB (5839) Subscriber Badge on Monday November 29 2021, @08:58PM (#1200688) Journal

            If you can develop an omniscient artificial intelligence, there is probably a huge market for it. Especially in the burgeoning field of study about what the Kardashians are up to.

            --
            To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
    • (Score: 2) by Thexalon on Monday November 29 2021, @08:05PM

      by Thexalon (636) on Monday November 29 2021, @08:05PM (#1200658)

      The trolleys and trains that I've seen just plow into everything in their way.

      In the case of trains, the problem is physics: they can't turn, because they're trains. With all the brakes applied at maximum, even a relatively slow train can take a couple of kilometers to stop. Which means that, not infrequently, by the time the train engineer can see that there's a problem, it's too late to do anything about it.

      This is why something that should be consistently on the list of major infrastructure projects is the elimination of level crossings between rail and road and replacing them with either rail bridges over the road, road bridges over the rails, or no crossing at all for the road if there's a nearby bridge available. And maybe some sort of camera-feed system that shows the train driver what's going on in upcoming intersections until that work can happen.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
  • (Score: 4, Touché) by Ingar on Monday November 29 2021, @11:24AM (11 children)

    by Ingar (801) on Monday November 29 2021, @11:24AM (#1200470) Homepage

    THE AI IS LICENSED “AS IS.” YOU BEAR THE RISK OF USING IT. COMPANY GIVES NO EXPRESS WARRANTIES, GUARANTEES, OR CONDITIONS. TO THE EXTENT PERMITTED UNDER APPLICABLE LAWS, COMPANY EXCLUDES ALL IMPLIED WARRANTIES, INCLUDING MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, AND NON-INFRINGEMENT.

    • (Score: 5, Informative) by Zinho on Monday November 29 2021, @11:47AM (7 children)

      by Zinho (759) on Monday November 29 2021, @11:47AM (#1200474)

      And this is the difference between programmers and proper Engineers.

      If Computer Scientists want to legitimately take on the mantle of "Software Engineer" then they have to start shouldering the burden of responsibility for public safety and accept accountability for risks caused by the products they create and put in the public space.

      --
      "Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
      • (Score: 4, Interesting) by Ingar on Monday November 29 2021, @12:51PM

        by Ingar (801) on Monday November 29 2021, @12:51PM (#1200489) Homepage

        Once, I was testing a release candidate and found a show-stopping bug. I asked the manager if we should ship the release with the bug, or delay the release and fix it. The answer was "your code shouldn't have bugs". So I shipped it.

        On top of that, in my native language, "engineers" are the guys who build bridges and do rocket science. I had the pleasure of working with some of them, and their software solutions usually come down to "more concrete". Software design should be done by proper computer scientists.

      • (Score: 2) by choose another one on Monday November 29 2021, @03:22PM

        by choose another one (515) Subscriber Badge on Monday November 29 2021, @03:22PM (#1200536)

        I get it now - Software Engineers don't use GPL...

      • (Score: 4, Insightful) by Thexalon on Monday November 29 2021, @05:03PM

        by Thexalon (636) on Monday November 29 2021, @05:03PM (#1200582)

        Forget pinning responsibility just on the individual software engineers: so long as the kind of language GP described is part of the EULA, conveniently releasing companies from any legal liability for anything that goes wrong, they have no incentive to create proper quality control.

        And yes, software is complicated, but so is most other stuff engineers do, so we should in fact be sucking it up and putting it through its paces before it handles anything life-or-death. And we know for a fact it's possible to do this on an organizational level, because NASA has done it [fastcompany.com].

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 2) by DannyB on Monday November 29 2021, @05:22PM (3 children)

        by DannyB (5839) Subscriber Badge on Monday November 29 2021, @05:22PM (#1200589) Journal

        If Computer Scientists want to legitimately take on the mantle of "Software Engineer" then they have to start shouldering the burden of responsibility for public safety and accept accountability for risks caused by the products they create and put in the public space.

        As a matter of engineering, one could calculate the odds of various kinds of failure modes that might cause injury or death either to occupants or pedestrians or cause property damage.

        What this discussion seems largely about is ethics. Not engineering. The question is whether, and how, software should be designed to react in the actual event of certain failure modes.

        --
        To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
        • (Score: 1, Insightful) by Anonymous Coward on Monday November 29 2021, @05:44PM

          by Anonymous Coward on Monday November 29 2021, @05:44PM (#1200606)

          And why is the problem handed back to programmers? This is a moral, philosophical question. You fucking solve it then come and ask us to program it.

        • (Score: 4, Informative) by Zinho on Monday November 29 2021, @05:50PM

          by Zinho (759) on Monday November 29 2021, @05:50PM (#1200609)

          What this discussion seems largely about is ethics. Not engineering.

          As a licensed Professional Engineer, I don't have the luxury of separating those two parts of the job. If I build a structure that will be used by the public and it fails because I shirked my due diligence, I am legally responsible for damages caused. I have a fancy stamp and everything; I call it my "double-check my work while contemplating poverty" stamp.

          Real Engineers have a responsibility to ensure their products do not cause unnecessary harm to the public. Failure modes for engineered products (cars, bridges) have an easily calculable cost in human lives, and the only weasel words we get are "when operating within specified conditions" and "within warranty period".

          --
          "Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
        • (Score: 0) by Anonymous Coward on Wednesday December 01 2021, @01:25AM

          by Anonymous Coward on Wednesday December 01 2021, @01:25AM (#1201043)

          certain failure modes

          A pedestrian crossing unexpectedly is not a failure mode; almost no accidents are due to failure modes.

          Brakes stop working? Power steering cuts out? THOSE are failure modes leading to accidents.

          Self-driving AI needs to deal with operating conditions in which poor performance could have human life costs.

    • (Score: 2, Insightful) by Anonymous Coward on Monday November 29 2021, @03:07PM (1 child)

      by Anonymous Coward on Monday November 29 2021, @03:07PM (#1200531)

      Indeed!

      "By entering this vehicle you indicate that you accept our terms of service [link to TOS TLDR here]...."

      Then the obvious solution to the trolley problem is to allow the vehicle's passengers to die in the accident, since - by entering the vehicle - they have agreed to accept the chance of being killed by its operation. The pedestrians etc outside the vehicle have _not_ agreed to any contract governing the operation of the vehicle, and hence may not be killed by the AI's operation.

      • (Score: 2) by DannyB on Monday November 29 2021, @05:22PM

        by DannyB (5839) Subscriber Badge on Monday November 29 2021, @05:22PM (#1200590) Journal

        By entering this vehicle, you accept the EULA, which you will be provided at the end of the ride.

        --
        To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
    • (Score: 0) by Anonymous Coward on Thursday December 02 2021, @03:38AM

      by Anonymous Coward on Thursday December 02 2021, @03:38AM (#1201350)

      Too bad the person you ran into didn't sign it. The car maker cannot waive liability they owe to third parties in an agreement you sign.

  • (Score: 2, Insightful) by Anonymous Coward on Monday November 29 2021, @11:46AM (21 children)

    by Anonymous Coward on Monday November 29 2021, @11:46AM (#1200473)

    This is an absolute no-brainer. If the car is programmed to kill the passengers, no one will use it, and more people will die. You would kill thousands of people by discouraging them from using the system, to save maybe a dozen people in silly corner cases.

    • (Score: 2) by Snotnose on Monday November 29 2021, @11:55AM (5 children)

      by Snotnose (1623) on Monday November 29 2021, @11:55AM (#1200477)

      Not even that. I don't have much sympathy for pedestrians who don't keep an eye on oncoming traffic. I don't care if it's failed brakes or mom yelling at the kids in the backseat. Stuff happens and you need to be aware of your surroundings, not your social media.

      --
      When the dust settled America realized it was saved by a porn star.
      • (Score: 2) by Runaway1956 on Monday November 29 2021, @02:52PM (4 children)

        by Runaway1956 (2926) Subscriber Badge on Monday November 29 2021, @02:52PM (#1200527) Journal

        While I tend to agree with you that pedestrians should remain alert, if the car leaves the roadway and comes barreling down the sidewalk, you can't blame the old couple who got in the way. Or the little kids. Healthy adults will probably fare better than the old and the young, but it's still not their fault when the AI runs them over. Someone should be liable, and those someones include the driver as well as the AI vendors.

        • (Score: 2) by DannyB on Monday November 29 2021, @05:25PM (3 children)

          by DannyB (5839) Subscriber Badge on Monday November 29 2021, @05:25PM (#1200591) Journal

          The situation here is an accident. Not a deliberate act. There was a failure leading to an important decision having to be made. It seems like it should be an insurance matter rather than thinking of it as being a deliberate act causing injury.

          Sometimes bad things happen.

          You don't get your brakes serviced. Ever. They eventually fail. Your car AI must make a difficult decision.

          --
          To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
          • (Score: 2) by maxwell demon on Monday November 29 2021, @07:25PM (1 child)

            by maxwell demon (1608) on Monday November 29 2021, @07:25PM (#1200639) Journal

            You don't get your brakes serviced. Ever.

            If you don't ever get your brakes serviced, it's easy to decide who's liable: You, for neglecting proper care of the brakes.

            --
            The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 2) by DannyB on Monday November 29 2021, @07:44PM

              by DannyB (5839) Subscriber Badge on Monday November 29 2021, @07:44PM (#1200653) Journal

              Yes. That is the intended obvious conclusion.

              --
              To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
          • (Score: 2) by Runaway1956 on Monday November 29 2021, @08:26PM

            by Runaway1956 (2926) Subscriber Badge on Monday November 29 2021, @08:26PM (#1200672) Journal

            I was thinking in terms of accidents. Something really weird and strange happens, your car gets confused and drives down the sidewalk. It may seem that I was referring to recent events with a crazed killer at the wheel, but I wasn't.

    • (Score: 1, Interesting) by Anonymous Coward on Monday November 29 2021, @01:27PM (9 children)

      by Anonymous Coward on Monday November 29 2021, @01:27PM (#1200499)

      In the long term, I'm not even sure why people expect this will come up. Yes, perhaps rarely, but freak accidents currently happen too. The main situations where people are currently killed are ones where people failed to make wise decisions, and AI shouldn't suffer from the same issues. This is more about how to handle it when the AI craps out and dumps things back to the meat sack behind the wheel.

      • (Score: 1) by khallow on Monday November 29 2021, @02:29PM (1 child)

        by khallow (3766) Subscriber Badge on Monday November 29 2021, @02:29PM (#1200519) Journal

        In the long term, I'm not even sure why people expect this will come up.

        In the courts. Over and over. If you don't expect that, then you don't recognize a large part of the problem.

        It's not just the technology that needs to be relatively "ethical" it's the technology's interactions with legal systems.

        • (Score: 0) by Anonymous Coward on Tuesday November 30 2021, @05:16AM

          by Anonymous Coward on Tuesday November 30 2021, @05:16AM (#1200787)

          Virtually all crashes are the result of people doing stupid stuff: either they don't drive responsibly, or they don't maintain their stuff. The whole idea that these crashes just happen is ridiculous. It's why so much driver education these days is focused on defensive driving. Situations where you couldn't see it coming are rare, and with the sensors that are being integrated into self-driving cars, a lot less likely. What's more, a car is more likely to be willing to slow down when conditions call for it than a person is.

          A typical crash will be the result of too many risk factors being pushed too far until things go bad; an AI car is far less likely to push things that far, so you'd be foolish to think this would come up with any meaningful frequency. Just look at airplanes: we talk about the crashes for years, in large part because they are such a rare occurrence. The US now goes entire years without a commercial passenger jet crash. Road cars are a bit harder to design to that level, but it will eventually be achieved; the problem isn't that hard to solve. The issue is solving it with the computing power and technology that we have right now. But, given time, that will change.

      • (Score: 2) by Runaway1956 on Monday November 29 2021, @02:53PM (6 children)

        by Runaway1956 (2926) Subscriber Badge on Monday November 29 2021, @02:53PM (#1200528) Journal

        I believe an AI decided to drive underneath a tractor trailer. And another decided to ram into a fire truck. AIs will remain as fallible as programmers, for a long, long time.

        • (Score: 0) by Anonymous Coward on Monday November 29 2021, @05:49PM (5 children)

          by Anonymous Coward on Monday November 29 2021, @05:49PM (#1200608)

          You can't just hand programmers an insoluble problem and say "fix it". I know you grew up watching Star Trek and Scotty could always reverse the dilithium polarity and make time go backwards for a few tense minutes, but no.

          • (Score: 2) by Runaway1956 on Monday November 29 2021, @08:20PM (4 children)

            by Runaway1956 (2926) Subscriber Badge on Monday November 29 2021, @08:20PM (#1200669) Journal

            Errrr, uhhhhh, missing a tractor trailer and/or a fire truck is an insoluble problem? I miss them all the time!

            Granted, it's not only the programmers at fault here. I've dumped on Musk several times for insisting that his cars need only visible-light cameras. The number and placement of sensors isn't up to programmers, either. My "solutions" might add a thousand or three dollars to the cost of the car, but my ideas on the subject of sensors would go a long way toward solving some of the obvious problems. Give that car all the visible-light optical sensors you want - but add radar and lidar. Throw in a laser sensor or two. Don't restrict the computerized driver to the same senses that humans have - give it the advantages that we missed out on. Infrared sensors couldn't hurt anything either - if every other sensor onboard the vehicle missed that kid stepping behind the car as you're about to back up, the infrared should notice him. I don't know how ultraviolet sensors could help, but if anyone can see a use, I'll get behind that too.
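
            As a toy illustration of that multi-sensor idea (the sensor names, confidence values, and thresholds are all invented, not from any real stack):

                # Toy sensor-fusion sketch: trust an obstacle if two independent
                # sensor types agree, or if any single sensor is highly confident.
                def obstacle_confirmed(detections: dict) -> bool:
                    """detections maps a sensor name ('camera', 'radar', 'lidar',
                    'ir') to a detection confidence in [0, 1]."""
                    firm = [name for name, conf in detections.items() if conf >= 0.5]
                    return len(firm) >= 2 or any(c >= 0.9 for c in detections.values())

                # Dark clothing at night: the camera barely sees it, but IR and radar agree.
                print(obstacle_confirmed({"camera": 0.2, "ir": 0.8, "radar": 0.6}))  # True

            The value of redundancy is exactly the point made above: each sensor type fails in different conditions, so requiring agreement between any two is far more robust than relying on cameras alone.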

            • (Score: 2) by maxwell demon on Tuesday November 30 2021, @12:08AM (3 children)

              by maxwell demon (1608) on Tuesday November 30 2021, @12:08AM (#1200743) Journal

              Infrared is a very good point. A person walking at night in dark clothes on the side of the road is hard to recognize both for humans and for computers analysing camera images in visible light. Under most conditions, that same person should be shining brightly over a dark background in infrared.

              While we're at it, why not also add ultrasound sensing to the mix? If it works for bats, it should also work for self-driving cars, shouldn't it?

              --
              The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 0) by Anonymous Coward on Tuesday November 30 2021, @02:51AM (1 child)

                by Anonymous Coward on Tuesday November 30 2021, @02:51AM (#1200765)

                Ultrasound, or echolocation?

                • (Score: 2) by maxwell demon on Wednesday December 01 2021, @09:07AM

                  by maxwell demon (1608) on Wednesday December 01 2021, @09:07AM (#1201118) Journal

                  Echolocation using ultrasound.

                  --
                  The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 0) by Anonymous Coward on Tuesday November 30 2021, @09:59PM

                by Anonymous Coward on Tuesday November 30 2021, @09:59PM (#1200994)

                One of the advantages of AI cars is that you can string together many different sensors. In all likelihood though, in that situation, the car would drive more slowly than normal to match the conditions.

    • (Score: 2) by dwilson on Monday November 29 2021, @03:26PM (4 children)

      by dwilson (2599) Subscriber Badge on Monday November 29 2021, @03:26PM (#1200543) Journal

      A no-brainer, indeed. The first and foremost responsibility of the computer driving the car is to protect the car and its passengers, in much the same way that a lawyer representing a client has a duty and responsibility to that client.

      Even if there's a non-passenger's life on the line in the event of a brake failure, and even if the client is an axe murderer with a long history of conviction and re-offense.

      Anything else would not be logically consistent with the precedents already present in our society's long history.

      --
      - D
      • (Score: 2) by VanessaE on Monday November 29 2021, @05:39PM

        by VanessaE (3396) <vanessa.e.dannenberg@gmail.com> on Monday November 29 2021, @05:39PM (#1200603) Journal

        But wouldn't the AI also be programmed to obey all laws pertaining to pedestrians? In every locale I've ever lived in, a pedestrian up ahead has right of way over cars/drivers (assuming he/she is crossing legally and didn't just jump out in front of the cars).

        If the AI can't establish that the pedestrian is there illegally, then wouldn't it be required to aim for the utility pole, saving the pedestrian's life at the cost of the driver's?

        Isn't that what the law requires of a human driver (assuming enough time to make a decision)?

      • (Score: 2) by Runaway1956 on Monday November 29 2021, @08:23PM (2 children)

        by Runaway1956 (2926) Subscriber Badge on Monday November 29 2021, @08:23PM (#1200671) Journal

        protect the car and its passengers

        Some parsing of words might be in order here. The car is not obligated under any circumstances to protect itself. I can't imagine, at this moment, how a car might choose to preserve itself over preservation of a passenger. BUT, preservation of the passenger should always take precedence over that of the car. Ditto with pedestrians - given the choice of driving into a ditch, or hitting the pedestrian, the car goes into the ditch, suffering whatever damage that may entail. Human life always trumps the preservation of the vehicle itself.

        • (Score: 2) by legont on Tuesday November 30 2021, @06:19AM (1 child)

          by legont (4179) on Tuesday November 30 2021, @06:19AM (#1200795)

          The technical issue is in the details. Deaths of passengers are probability events. There are also injuries, and they are very different for men, women, and children. The latter two's necks could be broken by hard braking alone; even a man's can be, if he's asleep. Should the car consider whether its passengers are asleep and/or relaxed, and adjust the probabilities accordingly? Or should it use a zero-tolerance policy and not brake at all? What about weather adjustments?
          All those decisions are made by humans now, and most of them are rather good to other humans. Which way do we program the AI?

          --
          "Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
          • (Score: 0) by Anonymous Coward on Tuesday November 30 2021, @10:03PM

            by Anonymous Coward on Tuesday November 30 2021, @10:03PM (#1200996)

            The point is that the AI car wouldn't need to consider those things; the only situation where it could be an issue is if somebody steps out from behind something directly into its line of motion, and even that shouldn't happen a lot, as the car should be driving more slowly if it can't sense far enough ahead to worry about that.

  • (Score: 1, Insightful) by Anonymous Coward on Monday November 29 2021, @11:50AM (1 child)

    by Anonymous Coward on Monday November 29 2021, @11:50AM (#1200475)

    Obviously the passenger in the exceedingly expensive self-driven car is more necessary to society than someone who doesn't own one and has to walk.

    • (Score: 2) by maxwell demon on Monday November 29 2021, @07:30PM

      by maxwell demon (1608) on Monday November 29 2021, @07:30PM (#1200643) Journal

      The self-driving car is a cheap taxi (cheap because you don't have to pay a human driver), and the pedestrian is a wealthy tourist.

      --
      The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 0) by Anonymous Coward on Monday November 29 2021, @12:10PM (3 children)

    by Anonymous Coward on Monday November 29 2021, @12:10PM (#1200480)

    The car must be programmed, in the case of ethical conflict, to kill as many people as possible. In the example from the article, the car must determine if there's a way to hit the pole, killing the passenger, but hit it in such a way as to bounce off and kill the pedestrian as well. If so, this is the action it will take. The car should also burst its batteries and quickly ignite, the flames engulfing both the passenger and pedestrian just to be sure.

    • (Score: -1, Troll) by Anonymous Coward on Monday November 29 2021, @01:13PM (1 child)

      by Anonymous Coward on Monday November 29 2021, @01:13PM (#1200496)

      Not needed. In Equitable America, we now release violent felons from prison to do that.

      • (Score: 1) by khallow on Monday November 29 2021, @02:31PM

        by khallow (3766) Subscriber Badge on Monday November 29 2021, @02:31PM (#1200520) Journal
        Sounds like you might need to compare the numbers of violent US ex-felons with the number of vehicles on US roads to understand the original recommendation better.
    • (Score: 1, Insightful) by Anonymous Coward on Monday November 29 2021, @03:40PM

      by Anonymous Coward on Monday November 29 2021, @03:40PM (#1200551)

      So if the AI kills anyone by accident, it should quickly eliminate all witnesses to the event. Got it.

  • (Score: 5, Insightful) by Anonymous Coward on Monday November 29 2021, @12:22PM (1 child)

    by Anonymous Coward on Monday November 29 2021, @12:22PM (#1200482)

    "We applied the brakes as hard as we could, but it just wasn't enough."

    AI is just object detection and classification. There's scarcely any more to it than that, anywhere. Self-driving vehicles -- can't. They just can't. There's no intelligence to be had, they can only even occasionally avoid obstacles. In almost all cases, they stop before avoiding obstacles -- that's what they do: they stop, and return control to an I. AI just can't.

    All that is required of *anyone* right now, much less AI, is *STOP*! You can't reasonably know the effect of doing otherwise. You can't reasonably be expected to know what any other course of action might entail, nor can you reasonably be expected to take into account everything all at once during an emergency -- the default action is "Apply the brakes." It will be dozens of years before vehicle-based AI can consider anything more than this in the vast majority of scenarios, and that's only if the vehicle manufacturer has enough AI training time and computational power available in each vehicle to consider such events.

    For now, and until it is legislated otherwise, the *ONLY* viable/practical/available mode of handling an emergency is *STOP WHERE YOU ARE*. Not a single vehicle will steer to avoid an obstacle in its path as opposed to impacting the object. All objects are equal -- and all objects are simply "objects" (or "crossing the road, wait" or "moving in bike lane, wait for object to remove itself"). Apply brakes, impact, deploy airbag. When it is legislated otherwise, you will have manufacturers claiming an inability to train for such scenarios, impossible-to-predict (and thus train for) scenarios, barrier-to-entry for start-ups, and so on. I don't believe there will ever be a requirement for computer-driven vehicles that they do anything more than apply the brakes should an object be in or veer into their path.
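
    A minimal sketch of that brake-by-default policy (the object list and the returned commands are placeholders, not any real vehicle API):

        # Minimal brake-by-default emergency policy, as argued above: everything
        # in the path is just an "object", and the only response is to stop.
        def emergency_policy(objects_in_path: list) -> str:
            if objects_in_path:         # no ranking of who or what the object is
                return "FULL_BRAKE"     # stop where you are; airbags if needed
            return "CONTINUE"

        print(emergency_policy(["object"]))   # FULL_BRAKE
        print(emergency_policy([]))           # CONTINUE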

    • (Score: 2) by Common Joe on Tuesday November 30 2021, @05:07PM

      by Common Joe (33) <common.joe.0101NO@SPAMgmail.com> on Tuesday November 30 2021, @05:07PM (#1200900) Journal

      Before I play devil's advocate, I need to say that I fully agree with you and you deserve the 5 insightful points.

      However, I can easily see this being abused. Once a trick is found to make the AI car believe it needs to stop (like pointing lasers at the car's sensors while it's traveling down the Interstate), it will brake hard. This will force the cars behind it to brake hard as well. The pranks and traffic jams caused by this will be popcorn-worthy.

  • (Score: 3, Interesting) by oumuamua on Monday November 29 2021, @02:33PM (8 children)

    by oumuamua (8401) on Monday November 29 2021, @02:33PM (#1200522)

    Traffic fatalities kill more people each year than all cancer deaths. We are really worried about curing cancer but give little attention to all those traffic fatalities because ... what can you do, that's life.
    https://www.cdc.gov/injury/wisqars/animated-leading-causes.html [cdc.gov]
    After self-driving vehicles come online en masse, the statistics will show it is human drivers causing all the accidents, we will know how to reduce traffic fatalities, and human drivers will be banned.

    • (Score: 2) by Runaway1956 on Monday November 29 2021, @03:04PM

      by Runaway1956 (2926) Subscriber Badge on Monday November 29 2021, @03:04PM (#1200529) Journal

      Maybe true, but it's a long way off. Give it twenty years, and maybe.

    • (Score: 0) by Anonymous Coward on Monday November 29 2021, @03:21PM

      by Anonymous Coward on Monday November 29 2021, @03:21PM (#1200535)

      We turned the entire world upside down to deal with a virus that kills fewer people than either of those things.

    • (Score: 3, Informative) by khallow on Monday November 29 2021, @03:48PM (3 children)

      by khallow (3766) Subscriber Badge on Monday November 29 2021, @03:48PM (#1200555) Journal

      Traffic fatalities kill more people each year than all cancer deaths.

      You miss the important detail "for Ages 1-44". When you include old people, you find that cancer (~600k [nih.gov] in 2020) kills about 15 times as many people as traffic accidents (38k [nhtsa.gov]) in the US in 2020.

      • (Score: 0) by Anonymous Coward on Monday November 29 2021, @07:04PM (2 children)

        by Anonymous Coward on Monday November 29 2021, @07:04PM (#1200632)

        It's good to have an age cutoff, because otherwise you're recording deaths from what is basically "old age".

        • (Score: 0) by Anonymous Coward on Monday November 29 2021, @08:31PM

          by Anonymous Coward on Monday November 29 2021, @08:31PM (#1200676)

          Not sure you can say a 45 year old has died from old age. Or a 65 year old, for that matter. At 85, ok. But cancer starts to become a real risk in the 40s, and a really serious one in the 60s.

        • (Score: 1) by khallow on Monday November 29 2021, @09:36PM

          by khallow (3766) Subscriber Badge on Monday November 29 2021, @09:36PM (#1200705) Journal
          Eliminate deaths from cancer, heart disease, and a few other causes, and people would be dying at considerably older ages. Old age isn't an inevitable thing that we will forever need to accept.
    • (Score: 0) by Anonymous Coward on Monday November 29 2021, @06:09PM

      by Anonymous Coward on Monday November 29 2021, @06:09PM (#1200618)

      Modded troll because you omitted the 1-44 caveat; it might have been by accident, in which case sorry.

    • (Score: 1) by khallow on Tuesday November 30 2021, @12:30AM

      by khallow (3766) Subscriber Badge on Tuesday November 30 2021, @12:30AM (#1200746) Journal

      After self-driving vehicles come online en masse, the statistics will show it is human drivers causing all the accidents, we will know how to reduce traffic fatalities, and human drivers will be banned.

      Maybe. But I think it'll only come as a result of a much lower tolerance for such risk than at present. At the least, I'd expect a significantly longer human lifespan to be required.

  • (Score: 0) by Anonymous Coward on Monday November 29 2021, @03:28PM (1 child)

    by Anonymous Coward on Monday November 29 2021, @03:28PM (#1200544)

    Oh god, I hate these "ethical dilemma" scenarios.
    How about this: "If we implement total A.I.-only cars worldwide, exactly one human life will be saved in comparison to keeping human drivers in charge. Should we hand over control?"
    See, I can come up with bullshit constraints too.

    As for the scenario presented, the A.I. will do a nanosecond database background check to see which of the potential victims is older, has more debt (more likely to be killed, mu-hahaha, inflation is bad), has cancer in the family history, and maybe a criminal record ... in short, an A.I. having to decide who to kill, given no other options, can probably make a lot more calculations than a human, eh?

    • (Score: 0) by Anonymous Coward on Monday November 29 2021, @04:00PM

      by Anonymous Coward on Monday November 29 2021, @04:00PM (#1200559)

      Of course we're going to see a rise in black-market modifications where, say, an A.I. trolleybus operator seeks a mod so that, when the trolley is forced to make such a decision, it speeds up, to be sure of a guaranteed one-time severance "victim" rather than a "forever payment" invalid, maimed ... patient.

  • (Score: 2, Interesting) by UncleBen on Monday November 29 2021, @03:38PM (4 children)

    by UncleBen (8563) on Monday November 29 2021, @03:38PM (#1200549)

    First off, they're not "accidents," they're "crashes."
    Secondly, the current standard of investigation starts from there and uses the rule that crashes are always caused by someone going too fast for conditions. The investigation works to find out who, and to detail the conditions. (Age, I can tell ya, is a "condition.")

    With those in mind, a meshed traffic stream will be damned hard to "crash." Even a sparse traffic mesh will have each member imbued with a really recent state of the road. The autonomous car--as part of a hive-mind--will have bigly insights. A suicidal pedestrian walking onto an interstate might find the flow moves around them like a stream around waders. This is all in the future of 100% automation, naturally. The transitional period will be...exciting. I, for one, can't wait for our automated automotive overlords to take over.
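
    A minimal sketch of what that meshed state-sharing might look like. Everything here is invented for illustration (the message fields, the planner hook); it reflects no real V2X standard:

        # Hypothetical mesh state-sharing between autonomous cars.
        # Fields and API are invented for this sketch; real V2X
        # protocols are far richer and far messier.
        from dataclasses import dataclass

        @dataclass
        class RoadState:
            car_id: str
            x: float            # position on a shared map, metres
            y: float
            speed: float        # m/s
            hazard: str = ""    # e.g. "pedestrian", "debris"; empty if none

        class MeshPlanner:
            """Each member keeps a really recent picture of the whole road."""
            def __init__(self):
                self.peers = {}

            def receive(self, state):
                # Newer broadcasts from a peer overwrite its older ones.
                self.peers[state.car_id] = state

            def hazards_near(self, x, y, horizon):
                # Ask the hive-mind: who has reported a hazard within
                # `horizon` metres? The flow can then route around it.
                return [s for s in self.peers.values() if s.hazard
                        and ((s.x - x) ** 2 + (s.y - y) ** 2) ** 0.5 < horizon]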

    The "Trolley Problem" is an interesting edge-case that must be analyzed and planned for (see other comments about lawsuits) but it also remains a sloganeered piece of news bullshit that too much civilian lifespan is being wasted on while experts try to move the actual boundaries of progress forward. I suspect they've gone so far past this little game that we wouldn't understand the language they're using.

    Let it go, let it go, let it gooooo....

    • (Score: 3, Insightful) by PiMuNu on Monday November 29 2021, @06:14PM

      by PiMuNu (3823) on Monday November 29 2021, @06:14PM (#1200620)

      > a meshed traffic stream

      Nb: This is the only scenario in which I can ever see autonomous vehicles working.

      As someone who does multivariate analysis for a living, I think that the current system of individual cars trying to guess road conditions based on crappy data analysis routines is *never* going to work. The number of scenarios that a car driver has to deal with is unmanageable for a neural network. Without controlling the inputs in some manageable way it just doesn't work.

    • (Score: 4, Insightful) by Thexalon on Monday November 29 2021, @07:39PM

      by Thexalon (636) on Monday November 29 2021, @07:39PM (#1200648)

      But ... if we start calling them "crashes" again, we won't be able to comfortably ignore how common they are right now with human-driven vehicles.

      Secondly, the current standard of investigation starts from there and uses the rule that crashes are always caused by someone going too fast for conditions. The investigation works to find out who, and to detail the conditions. (Age, I can tell ya, is a "condition.")

      The current standard of investigation is an absolute joke, by every stretch of the imagination. For instance:
      - In a rear-end collision, it is the car in the back that is at fault. Even if the car in front of them pulled into their lane with far too little space and then slammed on the brakes. Insurance scammers definitely take advantage of this.
      - If one of the cars is driven by somebody who is broke and uninsured, then the insurance of the other vehicle(s) ends up paying the cost, even if the broke and uninsured person is at fault, so there's basically no investigation of the cause. The broke and uninsured person will probably lose their license for a while and have to pay some fines, but that will probably be far less costly to them than the damage to the other vehicle.
      - If a driver who was definitely at fault creates enough of a headache and lies loudly and persistently enough, odds are decent that they'll convince any responding police officer that letting them off the hook is less of a hassle than their other options.
      - If it wasn't serious enough that the police or insurance companies got involved, odds are basically 100% that there will be no investigation at all, and no consequences for anybody involved other than having to fix the damage.

      The situation I'll always remember: I grew up living on a corner between streets that weren't even remotely busy most of the time. One spring, there were suddenly multiple crashes all involving cars coming into the intersection from the same direction without stopping despite a stop sign. Not a single cop responding to any of the accidents noticed or cared about this pattern, so I, approximately aged 15, decided to look at what the drivers were seeing coming from that direction, and discovered that the stop sign had become invisible due to tree growth. We explained it to our neighbor, who trimmed back the trees, and the crashes stopped. But if we're relying on teenagers to even take notice of this sort of problem, we're doing a lousy job of this.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 2) by DrkShadow on Monday November 29 2021, @10:53PM

      by DrkShadow (1404) on Monday November 29 2021, @10:53PM (#1200725)

      The autonomous car--as part of a hive-mind--

      Having recently come from an autonomous vehicle manufacturer, I can tell you this will not happen. "But what if someone sends false data?! That could cause a crash!" The vehicles will not trust anything but what their own sensors tell them. No smart infrastructure. No hive mind. No mesh network. No communications.

      Because liability. Because of a one-in-100-million risk. Perhaps someone breached the smart infrastructure (road construction sign: "Zombies Ahead", "Bridge Out", two directions showing green lights). Perhaps a car was mis-reprogrammed by a tinkerer. The manufacturers can't be sure, so the car trusts only its own sensors, and if in doubt -- stops and awaits instructions.
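
      A tiny sketch of that policy's shape. The names and the 0.9 threshold are invented here; this is a guess at the logic, not any manufacturer's actual code:

          # "Trust only the car's own sensors" as a decision rule.
          # SensorView, the threshold, and the fallback are all invented.
          from dataclasses import dataclass

          @dataclass
          class SensorView:
              confidence: float   # 0..1, fused confidence in own perception
              best_action: str    # the planner's preferred manoeuvre

          def plan(own, external_messages):
              # External inputs (smart signs, other cars) may be spoofed or
              # stale, so they never drive control decisions.
              del external_messages        # deliberately unused
              if own.confidence >= 0.9:
                  return own.best_action
              # If the car's own picture of the world is unclear, it does
              # not fall back on untrusted data: stop and phone home.
              return "stop_and_await_instructions"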

    • (Score: 3, Insightful) by Entropy on Tuesday November 30 2021, @02:27AM

      by Entropy (4228) on Tuesday November 30 2021, @02:27AM (#1200762)

      So if someone is blackout drunk, are they going too fast for conditions? If they're texting and not looking at the road at all, are they going too fast for conditions? What if their car isn't repaired well and the tires fall off? I suppose one could argue that zero is the proper speed for those conditions ... but more realistically, there are plenty of reasons for crashes other than speed, unless you use wacky justifications to make everything about speed.

  • (Score: 0) by Anonymous Coward on Monday November 29 2021, @10:51PM (2 children)

    by Anonymous Coward on Monday November 29 2021, @10:51PM (#1200724)

    The problem is written as if this is a problem for a self-driving car, when it is in fact a problem for a software developer building a self-driving car. And it assumes that software developers think like philosophers, which they do not. Software developers frame the problem differently. As a software developer myself, here is how I would think about this problem.
    1. Determine if it is a real problem.
          How likely is this scenario to actually occur? Expend development resources accordingly.
    2. Address the entire chain of causation.
          The philosopher wants to decide who to kill, if everything goes wrong. The software developer wants to break every link in the chain of causation.
          - Can brake failure be minimized?
              By buying better brakes?
              By maintaining the brakes more frequently?
              By compensating for unusual environmental conditions that cause corrosion of the brakes?
          - Can brake failure be detected in advance?
              If the computer detects that the brakes are behaving strangely, it can take the car off the road before they fail spectacularly (see the sketch at the end of this comment).
          - Is the vehicle driving defensively enough?
              If brake failure is common enough to worry about, the vehicle should allow itself more headway.

    If I had a software developer working for me, and they were trying to solve a safety problem by deciding who should die, I would fire them.
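
    For what it's worth, here is a minimal sketch of the "detect in advance" and "drive defensively" links above. The thresholds and units are invented for illustration; a real system would fuse far more signals:

        # Pre-emptive brake-health check: compare the deceleration the
        # controller commanded with what the accelerometer measured.
        DEGRADED = 0.8   # below 80% of commanded: schedule service (invented)
        FAILING = 0.5    # below 50% of commanded: leave the road now (invented)

        def check_brakes(commanded_decel, measured_decel):
            if commanded_decel <= 0:
                return "ok"              # nothing commanded, nothing to judge
            ratio = measured_decel / commanded_decel
            if ratio < FAILING:
                return "pull_over_now"   # break the causal chain before a crash
            if ratio < DEGRADED:
                return "schedule_service"
            return "ok"

        def headway_metres(speed_mps, brake_health):
            # Defensive driving: allow more following distance as
            # confidence in the brakes drops (crude two-second rule).
            return 2.0 * speed_mps / max(brake_health, 0.1)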

    • (Score: 0) by Anonymous Coward on Tuesday November 30 2021, @12:53PM

      by Anonymous Coward on Tuesday November 30 2021, @12:53PM (#1200831)

      "no dave, i cannot do that"

    • (Score: 0) by Anonymous Coward on Wednesday December 01 2021, @09:44AM

      by Anonymous Coward on Wednesday December 01 2021, @09:44AM (#1201121)

      "No Plan survives first contact with the enemy." And in the case of autonomous cars humans are the enemy, just like anything that comes into contact with humans. We are fallible without set algorithms for determining behavior. Some people see their brake light come on and don't drive the car until they can take it to a mechanic first; some people see their brake light come on and drive for another 3000 miles until the squealing gets too bad.

      When you are designing a system, you have to plan for every contingency, because sometimes the holes in the Swiss cheese [wikipedia.org] line up just right. With over 1.5 billion vehicles on the road, racking up trillions of hours and miles a year, failure is inevitable no matter how many layers you put in place to prevent it.
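
      To put rough numbers on that inevitability (both figures below are invented for illustration, not measurements):

          # Back-of-envelope: even a one-in-a-billion-miles failure rate
          # produces thousands of failures a year at fleet scale.
          per_mile_failure = 1e-9       # invented failure rate
          fleet_miles_per_year = 3e12   # invented: trillions of miles a year
          print(per_mile_failure * fleet_miles_per_year)   # -> 3000.0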

  • (Score: 3, Interesting) by srobert on Tuesday November 30 2021, @01:47AM

    by srobert (4803) on Tuesday November 30 2021, @01:47AM (#1200758)

    Imagine a future with self-driving cars that are fully autonomous. If everything works as intended, the morning commute ...

    Stop there. If the cars have the ability to drive themselves, precisely where do you think you're going in the morning? If we've replaced drivers with AI, then AI can probably do your job too.
