

posted by martyb on Wednesday May 22 2019, @12:21PM
from the Marvin dept.

No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and of the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead with the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?

Nervously awaiting learned opinions,
VT


  • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:55PM (#846195) (17 children)

    Tesla?
    Also... I got the invalid key thingy again. It seems to always happen when two people try to post at the same time.

  • (Score: 2) by The Mighty Buzzard on Wednesday May 22 2019, @02:29PM (16 children)

    Yeah, self-driving cars should be an absolute non-starter for anyone. Especially anyone who deals with tech regularly. It's not that it may fuck up and get someone killed. It absolutely will fuck up and get someone killed. How many times have Microsoft, Apple, and Google royally screwed the pooch on an update? How many times have AMD and Intel fucked up just as bad in hardware? Morons who buy cars with similar features had really better hope that when it does fuck up badly it's an edge case and not a bug that affects every instance of every model.

    --
    My rights don't end where your fear begins.
    • (Score: 4, Insightful) by Immerman (3985) on Wednesday May 22 2019, @02:52PM (#846230) (15 children)

      >It absolutely will fuck up and get someone killed.

      Absolutely. However - so will human drivers. Especially the masses of sleep-deprived, child-distracted, drug-addled, phone-using, or otherwise skill-compromised drivers currently plaguing the roads. AI doesn't have to be perfect to be valuable; it just has to be better than them.

      Eventually AI drivers will almost certainly get good enough that they will statistically be much safer than human drivers. At that point it becomes a personal decision - would you rather be objectively safer, with more free time on your hands, or more assured of not dying in a bone-headed AI error that will make for a really stupid-sounding obituary?

      Now, if you don't personally want to trust an AI driver, I certainly can't blame you; there's something to be said for at least dying under your own power, and I don't think we should be in any hurry to take away the option of manual driving. But I only have to pay attention to traffic to spot a whole lot of people that I'd rather have chauffeured by a competent AI, for their safety and my own.

      • (Score: 5, Informative) by The Mighty Buzzard on Wednesday May 22 2019, @03:23PM (14 children)

        Incorrect. Statistically safer could mean that it intentionally drives one in every hundred thousand cars into oncoming traffic while otherwise doing fine. That is not acceptable. AI does have to be perfect because it can be, while humans cannot. Releasing known-imperfect code to be used in life-or-death situations is gross negligence and will get the everloving shit sued out of you. And you will lose, because you absolutely will be at fault. Humans at fault in an auto accident fatality can go to prison for causing it. Who do you suggest we jail in AI-driven fatality crashes? Or should we just shrug and say "Oh well, it's still statistically safer," when it's our loved one who gets plowed over because the AI didn't recognize them as human?
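        To put rough numbers on that scenario (illustrative figures only - the human rate is roughly the oft-quoted one fatality per hundred million miles, and the lifetime mileage is a round guess):

          P(\text{deliberate fatal fault per car}) = \tfrac{1}{100{,}000} = 10^{-5}

          P(\text{human-caused fatality per car}) \approx \underbrace{10^{-8}}_{\text{deaths/mile}} \times \underbrace{1.5\times10^{5}}_{\text{lifetime miles}} = 1.5\times10^{-3}

        On those numbers the fleet comes out two orders of magnitude "statistically safer" even with the deliberate failure baked in - exactly how an aggregate statistic can hide an unacceptable mechanism.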

        Anyone accepting AI at the wheel instead of themselves is looking to ditch authority over their own life in exchange for comfort and a lessening of responsibility, without even demanding that someone else we trust assume said responsibility. That is beyond foolish; it is contemptible.

        --
        My rights don't end where your fear begins.
        • (Score: 2) by Runaway1956 (2926) on Wednesday May 22 2019, @03:40PM (#846261) (2 children)

          >That is not acceptable.

          I'm afraid I have to disagree with you on that. At some point, the Potabians are going to decide that computers kill fewer people than people kill people. The mandate will come down: "People cannot operate vehicles anymore!" Even more important, the computer-operated vehicles will cause less property damage and save the insurance companies lots of money.

          Millennials are probably the next-to-last generation that will operate privately owned vehicles.

          • (Score: 2) by The Mighty Buzzard on Wednesday May 22 2019, @03:54PM (1 child)

            Grouchy old bastards who can tell their ass from a hole in the ground aren't all born that way. Most of us have to learn to be that way the hard way, over many decades. Millennials will learn it as well, even if they are dumber than we ever were.

            --
            My rights don't end where your fear begins.
            • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @05:35PM (#846330)

              "even if they are dumber than we ever were."

              I guess we're going extinct then, womp womp

        • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 22 2019, @04:06PM (#846276) (7 children)

          >AI does have to be perfect because it can be

          No, it can't. There is no such thing as a bug-free program.

          • (Score: 2, Disagree) by The Mighty Buzzard on Wednesday May 22 2019, @04:21PM (6 children)

            Incorrect. Code absolutely can be bug-free. Code is math, and math can be proven to be without flaw. We just normally don't bother because it takes a lot of time and money.
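            For a taste of what that looks like in practice (a toy sketch, unrelated to any real vehicle code), a proof assistant like Lean can machine-check that a function meets its spec:

              -- Toy example: a speed limiter, with a machine-checked proof
              -- that its output can never exceed the limit.
              def limitSpeed (limit speed : Nat) : Nat :=
                if speed ≤ limit then speed else limit

              theorem limitSpeed_le (limit speed : Nat) :
                  limitSpeed limit speed ≤ limit := by
                unfold limitSpeed
                split
                · assumption              -- case: speed ≤ limit, we returned speed
                · exact Nat.le_refl limit -- case: we returned the limit itself

            Whether that scales past small kernels is exactly what the replies below dispute.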

            --
            My rights don't end where your fear begins.
            • (Score: 2, Insightful) by Anonymous Coward on Wednesday May 22 2019, @06:26PM (#846345)

              Nobody can predict all possible reasonably expectable inputs for an autonomous vehicle. Proving its outputs correct isn't even a question.

              Unless you have a planet-sized supercomputer hiding in your tesseract. You aren't holding out on us, are you?

            • (Score: 4, Insightful) by Immerman (3985) on Wednesday May 22 2019, @07:10PM (#846356) (1 child)

              Technically true - but it's all but impossible to formally prove correctness in anything much more complicated than "Hello World". Add in the necessity of dealing with real-world inputs from a chaotic, imperfect operating environment, and you've got no chance at all of achieving perfection, much less proving it.

              And things get far worse when we start talking about neural networks and other "grown" AI - the entire point of training a neural network is that we don't know how to accomplish the same thing algorithmically. The behavior of individual pseudoneurons may be provable, but all the really useful behavior emerges from the network as a whole, where our understanding still lags far behind our achievements.
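              A minimal sketch of that gap (all weights invented): each unit below is trivially analyzable on its own, yet nothing in the per-unit math tells you what a trained network of them means.

                def neuron(inputs, weights, bias):
                    # Provable in isolation: weighted sum, then a hard threshold.
                    return 1.0 if sum(i * w for i, w in zip(inputs, weights)) + bias > 0 else 0.0

                # The learned behavior lives entirely in the weights -- which, in a
                # real trained network, nobody wrote and nobody can read back out.
                hidden = [neuron([0.9, 0.2], w, b)
                          for w, b in [([1.2, -0.7], 0.1), ([-0.3, 0.8], -0.2)]]
                print(neuron(hidden, [0.5, -1.1], 0.05))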

            • (Score: 2) by maxwell demon (1608) on Wednesday May 22 2019, @08:22PM (#846366)

              >Code is math and math can be proven to be without flaw.

              Wrong. We can't even prove the consistency of the Peano axioms (i.e. natural-number arithmetic) from within the system itself. See Gödel's incompleteness theorems.
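              For reference, the second incompleteness theorem says, roughly:

                \text{If } T \text{ is consistent, effectively axiomatized, and } T \supseteq \mathrm{PA}, \text{ then } T \nvdash \mathrm{Con}(T)

              That is, no such theory can prove its own consistency sentence.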

              --
              The Tao of math: The numbers you can count are not the real numbers.
            • (Score: 2) by DeVilla (5354) on Friday May 24 2019, @04:40AM (#846951) (1 child)

              Thing is, it's not code. It's "Machine Learning". It's trained from some input data set. If you need to change the behavior (because a local municipality decided to use flashing yellow arrows instead of a solid green circle), you can't just tell it the new rule or expect it to read the sign next to the light. Teaching it is like teaching a horse. It learns by screwing up and being corrected.
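              That "screw up and get corrected" loop, in miniature (a hypothetical toy with invented features and data; real systems are vastly larger, but the principle is the same):

                def train(examples, weights, lr=0.1, epochs=50):
                    for _ in range(epochs):
                        for features, should_yield in examples:
                            guess = 1 if sum(f * w for f, w in zip(features, weights)) > 0 else 0
                            error = should_yield - guess  # the "correction"
                            weights = [w + lr * error * f for f, w in zip(features, weights)]
                    return weights

                # (flashing, yellow, arrow) -> yield?  Toy data, not a real rulebook:
                # the new rule is never stated, only demonstrated.
                examples = [((1, 1, 1), 1), ((0, 1, 0), 0), ((1, 0, 1), 1), ((0, 0, 0), 0)]
                print(train(examples, [0.0, 0.0, 0.0]))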

        • (Score: 4, Interesting) by Immerman (3985) on Wednesday May 22 2019, @08:33PM (#846370)

          Take a step back for a moment if you will, and let's ask a more fundamental question: How do you feel about letting other people drive? A ride with a friend, a taxi, bus, airplane - anything where someone else is ultimately in control of piloting the vehicle?

          Do you object to a loyal human chauffeur as strenuously as you do to an AI driver?

          Because I think that's the starting point to look at it from - would you like to have someone else do the driving for you? Done right, it's a stressful, time-consuming job, and most people are nowhere near as good as a professional driver.
          If not, then obviously an AI driver will be no better.
          But if you are okay with having a driver, then the proper comparison is not "myself or an AI", it's "a professional driver or an AI". And if the AI is (as it will almost certainly *eventually* become) a better, safer driver in every measurable way, then it becomes an aesthetic choice - do you pick the objectively more dangerous driver because they offer "the human touch"? If I were doing something unusual, more likely to confuse the AI, I'd strongly prefer that. But just getting from A to B through normal traffic and normal traffic problems? As long as the AI has the track record of proven reliability behind it, I see no reason to distrust it.

          Other than security - the destructive potential of a software driving system that can be remotely corrupted should not be underestimated. Not that I'm likely to be targeted personally, but painting an internet-facing bullseye on an inherently dangerous vehicle is just asking for trouble.

          As for who's liable in the case of an AI malfunction that kills someone? Seems a pretty clear-cut case of manufacturer liability for a product defect to me, and from what little I've heard the car companies mostly agree. As usual when someone rich is at fault, nobody goes to jail - but they've got deep enough pockets that generous payouts are reasonable to expect, if only for the PR impact. The current Tesla, etc. stories muddy the water precisely because Autopilot is *not* self-driving, only a driving-assist system, and thus, since it explicitly requires you in the loop as safety overseer, it can be reasonably argued that you are ultimately responsible for safe behavior. One of the many reasons I think "almost self-driving" systems should not be allowed. But once a system is warranted to be able to operate fully autonomously, with no oversight whatsoever, then liability for its actions when used for the purpose it was marketed resides squarely with the manufacturer.

        • (Score: 2) by Joe Desertrat (2454) on Wednesday May 22 2019, @10:32PM (#846404) (1 child)

          I think the biggest problem stems from mixing human and AI drivers. Humans do crazy things on the roads (humans do crazy things, period). There is no way to code for an AI to handle all the possibilities. Presumably, if AI drivers are the vast majority, they will be able to communicate their intentions to each other at any given moment, something humans seem to struggle to do. They should also be communicating hazards: should one AI driver run into something like a pothole, or a deer crossing the road, that warning should go out to all. There also has to be some sort of fail-safe built in: if, for instance, a satellite glitch tells all AI drivers to turn right off the road and over a cliff, there has to be some situational awareness built in that overrides those instructions and prevents that from happening. We should probably, initially at least, still require a passenger with a driver's license so that, should some level of operational parameters for the AI be exceeded, they can be alerted to take over control of the vehicle.
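          As a sketch of that hazard-sharing-plus-local-override idea (message fields, threshold, and names all invented for illustration):

            from dataclasses import dataclass

            @dataclass
            class HazardReport:
                kind: str   # e.g. "pothole", "deer"
                lat: float
                lon: float

            def broadcast(report, fleet):
                # Every nearby AI driver learns of the hazard at once -- the
                # coordination humans can only approximate with brake lights.
                for car in fleet:
                    car.known_hazards.append(report)

            def accept_remote_guidance(suggested_heading_deg, sensor_clear_heading_deg):
                # Fail-safe: local situational awareness outranks remote input,
                # so a glitched satellite can't steer the fleet off a cliff.
                return abs(suggested_heading_deg - sensor_clear_heading_deg) <= 15.0

            print(accept_remote_guidance(90.0, 0.0))  # False: refuse the cliff turn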

          • (Score: 2) by Immerman on Thursday May 23 2019, @01:40AM

            by Immerman (3985) on Thursday May 23 2019, @01:40AM (#846473)

            >We should probably, initially at least, still require a passenger with a driver's license in order that, should some level of operational parameters for the AI be exceeded, they can be alerted to take over control of the vehicle.

            If they have to do that while the car is moving, I think the systems will be irresponsibly dangerous. It's completely unreasonable to expect a thoroughly distracted passenger to start paying attention to the road, get their bearings in a difficult situation, and then take over driving faster than the vehicle can bring itself to a safe stop.

            Now, if instead of alerting the passenger to take over while driving, it can come to a safe stop, give the passenger time to wake up and figure out what's going on, and then let them drive away themselves, I'm totally on board. An AI doesn't need to be able to handle everything that comes at it, so long as it recognizes when it has a problem and can avoid making it worse until a human can decide what to do at their leisure. It's perfectly reasonable to require that a backup driver be available for the rare unexpected problem, so long as they don't have to actually be on duty through countless hours of problem-free driving.
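            That policy is essentially a tiny state machine (states and events invented for illustration):

              # On trouble: stop safely first, and only then involve the human.
              TRANSITIONS = {
                  ("DRIVING", "anomaly"): "PULLING_OVER",         # no alert yet
                  ("PULLING_OVER", "stopped"): "AWAITING_HUMAN",  # now wake the passenger
                  ("AWAITING_HUMAN", "human_ready"): "MANUAL",    # hand over at 0 mph
              }

              def next_state(state, event):
                  return TRANSITIONS.get((state, event), state)

              state = "DRIVING"
              for event in ("anomaly", "stopped", "human_ready"):
                  state = next_state(state, event)
                  print(state)  # never requests a takeover while moving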