
posted by martyb on Wednesday May 22 2019, @12:21PM
from the Marvin dept.

No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and the presumed will to survive? The answers to these questions have yet to emerge, but during the interim, is it a good idea to push ahead in the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a super human intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?

Nervously awaiting learned opinions,
VT


Original Submission

 
  • (Score: 4, Insightful) by The Mighty Buzzard on Wednesday May 22 2019, @01:29PM (33 children)

    Don't cede authority over anything important just so you can ditch the responsibility. Ever. AI as an assistant could be useful. AI as a boss is so bloody stupid you'd save time by just asking for a Darwin Award right from the start.

    --
    My rights don't end where your fear begins.
  • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:55PM (17 children)

    by Anonymous Coward on Wednesday May 22 2019, @01:55PM (#846195)

    Tesla?
    Also... I got the invalid key thingy again. It seems to always happen when two people try to post at the same time.

    • (Score: 2) by The Mighty Buzzard on Wednesday May 22 2019, @02:29PM (16 children)

      Yeah, self-driving cars should be an absolute non-starter for anyone. Especially anyone who deals with tech regularly. It's not that it may fuck up and get someone killed. It absolutely will fuck up and get someone killed. How many times have Microsoft, Apple, and Google royally screwed the pooch on an update? How many times have AMD and Intel fucked up just as bad in hardware? Morons who buy cars with similar features had really better hope that when it does fuck up badly it's an edge case and not a bug that affects every instance of every model.

      --
      My rights don't end where your fear begins.
      • (Score: 4, Insightful) by Immerman on Wednesday May 22 2019, @02:52PM (15 children)

        by Immerman (3985) on Wednesday May 22 2019, @02:52PM (#846230)

        >It absolutely will fuck up and get someone killed.

        Absolutely. However - so will human drivers. Especially the masses of sleep-deprived, child-distracted, drug-addled, phone-using, or otherwise skill-compromised drivers currently plaguing the roads. AI doesn't have to be perfect to be valuable, it just has to be better than them.

        Eventually AI drivers will almost certainly get good enough that they will statistically be much safer than human drivers. At that point it becomes a personal decision - would you rather be objectively safer, with more free time on your hands, or more assured of not dying in a bone-headed AI error that will make for a really stupid-sounding obituary?

        Now, if you don't personally want to trust an AI driver, I certainly can't blame you; there's something to be said for at least dying under your own power, and I don't think we should be in any hurry to take away the option of manual driving. But I only have to pay attention to traffic to spot a whole lot of people that I'd rather have chauffeured by a competent AI, for their safety and my own.

        • (Score: 5, Informative) by The Mighty Buzzard on Wednesday May 22 2019, @03:23PM (14 children)

          Incorrect. Statistically safer could mean that it intentionally drives one in every hundred thousand cars into oncoming traffic while otherwise doing fine. That is not acceptable. AI does have to be perfect because it can be, while humans cannot. Releasing known-imperfect code to be used in life-or-death situations is gross negligence and will get the everloving shit sued out of you. And you will lose, because you absolutely will be at fault. Humans at fault in an auto accident fatality can go to prison for causing it. Who do you suggest we jail in AI-driven fatality crashes? Or should we just shrug and say "Oh well, it's still statistically safer," when it's our loved one who gets plowed over because the AI didn't recognize them as human?

          Anyone accepting AI at the wheel instead of themselves is looking to ditch authority over their own life in exchange for comfort and a lessening of responsibility, without even demanding that someone else we trust assume said responsibility. That is beyond foolish; it is contemptible.

          --
          My rights don't end where your fear begins.
          • (Score: 2) by Runaway1956 on Wednesday May 22 2019, @03:40PM (2 children)

            by Runaway1956 (2926) Subscriber Badge on Wednesday May 22 2019, @03:40PM (#846261) Journal

            That is not acceptable.

            I'm afraid I have to disagree with you on that. At some point, the Potabians are going to decide that computers kill fewer people than people kill people. The mandate will come down: "People cannot operate vehicles anymore!" Even more important, the computer-operated vehicles will cause less property damage, and save the insurance companies lots of money.

            Millennials are probably the next-to-last generation that will operate privately owned vehicles.

            • (Score: 2) by The Mighty Buzzard on Wednesday May 22 2019, @03:54PM (1 child)

              Grouchy old bastards who can tell their ass from a hole in the ground aren't all born that way. Most of us have to learn it the hard way over many decades. Millennials will learn it as well, even if they are dumber than we ever were.

              --
              My rights don't end where your fear begins.
              • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @05:35PM

                by Anonymous Coward on Wednesday May 22 2019, @05:35PM (#846330)

                "even if they are dumber than we ever were."

                I guess we're going extinct then, womp womp

          • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 22 2019, @04:06PM (7 children)

            by Anonymous Coward on Wednesday May 22 2019, @04:06PM (#846276)

            AI does have to be perfect because it can be

            No, it can't. There is no such thing as a bug free program.

            • (Score: 2, Disagree) by The Mighty Buzzard on Wednesday May 22 2019, @04:21PM (6 children)

              Incorrect. Code absolutely can be bug free. Code is math and math can be proven to be without flaw. We just normally don't bother because it takes a lot of time and money.
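
              To illustrate rather than settle the argument, a brute-force sketch (toy function and spec chosen for illustration): for a small enough input space you can check code against its spec exhaustively, and real formal verification (seL4, CompCert) does the same job symbolically with proof assistants - which is exactly the "lot of time and money" part.

                # Toy sketch: exhaustively check clamp() against its spec for every
                # 8-bit input - a brute-force stand-in for the machine-checked
                # proofs that projects like seL4 and CompCert apply to real code.
                def clamp(x, lo, hi):
                    return max(lo, min(hi, x))

                for x in range(256):
                    y = clamp(x, 10, 20)
                    assert 10 <= y <= 20               # output always lands in range
                    assert y == x or x < 10 or x > 20  # untouched when x was already in range
                print("clamp() verified for all 8-bit inputs")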

              --
              My rights don't end where your fear begins.
              • (Score: 2, Insightful) by Anonymous Coward on Wednesday May 22 2019, @06:26PM

                by Anonymous Coward on Wednesday May 22 2019, @06:26PM (#846345)

                Nobody can predict all the inputs an autonomous vehicle can reasonably be expected to encounter. Proving its outputs correct isn't even a question.

                Unless you have a planet-sized supercomputer hiding in your tesseract. You aren't holding out on us, are you?
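
                To put rough numbers on that (assuming, say, a single 1-megapixel, 8-bit grayscale camera): one frame alone has 256^(1,000,000) = 2^(8,000,000) possible values, on the order of 10^2,400,000 distinct inputs, and a driving stack consumes dozens of frames per second plus radar, lidar, and map data. Exhaustively enumerating that input space, never mind proving properties over it, is not a computation anyone gets to run.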

              • (Score: 4, Insightful) by Immerman on Wednesday May 22 2019, @07:10PM (1 child)

                by Immerman (3985) on Wednesday May 22 2019, @07:10PM (#846356)

                Technically true - but it's all but impossible to formally prove correctness in anything much more complicated than "Hello World". Add in the necessity of dealing with real-world inputs from a chaotic, imperfect operating environment, and you've got no chance at all of achieving perfection, much less proving it.

                And things get far worse when we start talking about neural networks and other "grown" AI - the entire point of training a neural network is that we don't know how to accomplish the same thing algorithmically. The behavior of individual pseudoneurons may be provable, but all the really useful behavior is emerging from the network behavior, where our understanding still lags far behind our achievements.
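
                A hand-wired sketch of that gap (weights chosen by hand for illustration, not trained): each pseudoneuron below is a one-line function you could verify formally, yet the XOR behavior lives entirely in the weight values - retrain them and the same provable neurons compute something nobody wrote down.

                  # Each pseudoneuron is trivially analyzable: step(w . x + b).
                  def neuron(weights, bias, inputs):
                      s = sum(w * x for w, x in zip(weights, inputs)) + bias
                      return 1 if s > 0 else 0

                  def xor_net(a, b):
                      h1 = neuron([1, 1], -0.5, [a, b])       # fires if a OR b
                      h2 = neuron([1, 1], -1.5, [a, b])       # fires if a AND b
                      return neuron([1, -1], -0.5, [h1, h2])  # OR-but-not-AND = XOR

                  for a in (0, 1):
                      for b in (0, 1):
                          print(a, b, "->", xor_net(a, b))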

              • (Score: 2) by maxwell demon on Wednesday May 22 2019, @08:22PM

                by maxwell demon (1608) on Wednesday May 22 2019, @08:22PM (#846366) Journal

                Code is math and math can be proven to be without flaw.

                Wrong. We can't even prove the consistency of the Peano axioms (i.e. natural number arithmetic). See Gödel's incompleteness theorems.
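
                For reference, the precise statement is Gödel's second incompleteness theorem: for any consistent, recursively axiomatizable theory $T$ that interprets Peano arithmetic,

                    $T \nvdash \mathrm{Con}(T)$

                i.e. $T$ cannot prove its own consistency. Consistency proofs for PA do exist (Gentzen's, for example), but only within stronger systems whose own consistency is then equally in question.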

                --
                The Tao of math: The numbers you can count are not the real numbers.
              • (Score: 2) by DeVilla on Friday May 24 2019, @04:40AM (1 child)

                by DeVilla (5354) on Friday May 24 2019, @04:40AM (#846951)

                Thing is, it's not code. It's "Machine Learning". It's trained from some input data set. If you need to change the behavior (because a local municipality decided to use flashing yellow arrows instead of a solid green circle) you can't just tell it the new rule or expect it to read the sign next to the light. Teaching it is like teaching a horse. It learns by screwing up and being corrected.
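
                A bare-bones sketch of that learn-by-correction loop (a perceptron learning OR - vastly simpler than anything in a car, and purely illustrative): the rule is never written down anywhere, the weights just get nudged after each mistake.

                  # Perceptron learning OR by correction: weights are nudged only
                  # when the current behavior is wrong - like correcting a horse.
                  data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
                  w, b, lr = [0.0, 0.0], 0.0, 0.1

                  for epoch in range(20):
                      for (x1, x2), target in data:
                          out = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
                          err = target - out        # nonzero only on a mistake
                          w[0] += lr * err * x1
                          w[1] += lr * err * x2
                          b += lr * err

                  print(w, b)  # weights now implement OR; no one ever stated the rule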

          • (Score: 4, Interesting) by Immerman on Wednesday May 22 2019, @08:33PM

            by Immerman (3985) on Wednesday May 22 2019, @08:33PM (#846370)

            Take a step back for a moment if you will, and let's ask a more fundamental question: How do you feel about letting other people drive? A ride with a friend, a taxi, bus, airplane - anything where someone else is ultimately in control of piloting the vehicle?

            Do you object to a loyal human chauffeur as strenuously as you do to an AI driver?

            Because I think that's the starting point to look at it from - would you like to have someone else do the driving for you? Done right, it's a stressful, time-consuming job, and most people are nowhere near as good as a professional driver.
            If not, then obviously an AI driver will be no better.
            But if you are okay with having a driver, then the proper comparison is not "myself or an AI", it's "a professional driver or an AI". And if the AI is (as it will almost certainly *eventually* become) a better, safer driver in every measurable way, then it becomes an aesthetic choice - do you pick the objectively more dangerous driver because they offer "the human touch"?

            If I were doing something unusual, more likely to confuse the AI, I'd strongly prefer the human. But just getting from A to B through normal traffic and normal traffic problems? As long as the AI has a track record of proven reliability behind it, I see no reason to distrust it.

            Other than security - the destructive potential of a software driving system that can be remotely corrupted should not be understated. Not that I'm likely to be targeted personally, but painting an internet-facing bullseye on an inherently dangerous vehicle is just asking for trouble.

            As for who's liable in the case of an AI malfunction that kills someone? Seems a pretty clear-cut case of manufacturer liability for a product defect to me, and from what little I've heard the car companies mostly agree. As usual when someone rich is at fault, nobody goes to jail - but they've got deep enough pockets that generous payouts are reasonable to expect, if only for the PR impact.

            The current Tesla, etc. stories muddy the water precisely because Autopilot is *not* self-driving, only a driver-assist system; since it explicitly requires you in the loop as safety overseer, it can reasonably be argued that you are ultimately responsible for safe behavior. That's one of the many reasons I think "almost self-driving" systems should not be allowed. But once a system is warranted to operate fully autonomously, with no oversight whatsoever, liability for its actions when used for its marketed purpose resides squarely with the manufacturer.

          • (Score: 2) by Joe Desertrat on Wednesday May 22 2019, @10:32PM (1 child)

            by Joe Desertrat (2454) on Wednesday May 22 2019, @10:32PM (#846404)

            I think the biggest problem stems from mixing human and AI drivers. Humans do crazy things on the roads (humans do crazy things, period), and there is no way to code for an AI to handle all the possibilities.

            Presumably, if AI drivers are the vast majority, they will be able to communicate their intentions to each other at any given moment, something humans seem to struggle to do. They should also be communicating hazards: should one AI driver run into something like a pothole, or a deer crossing the road, that warning should go out to all.

            There also has to be some sort of fail-safe built in: if, for instance, a satellite glitch tells all AI drivers to turn right off the road over a cliff, there has to be some situational awareness built in that overrides those instructions and prevents that from happening. We should probably, initially at least, still require a passenger with a driver's license in order that, should some level of operational parameters for the AI be exceeded, they can be alerted to take over control of the vehicle.
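
            A sketch of what those two ideas might look like (message fields and function names invented for illustration - real vehicle-to-vehicle work goes through standards like DSRC/C-V2X, and this resembles none of them):

              import json, time

              # Hypothetical hazard broadcast - illustrative only.
              def hazard_message(sender_id, kind, lat, lon):
                  return json.dumps({"sender": sender_id, "kind": kind,
                                     "lat": lat, "lon": lon, "ts": time.time()})

              # Fail-safe: veto any routing command that local perception says is
              # unsafe, even if it comes from a trusted source (map, satellite).
              def follow_command(command, locally_blocked_directions):
                  if command["direction"] in locally_blocked_directions:
                      return "SAFE_STOP"
                  return command["direction"]

              print(hazard_message("car-042", "pothole", 27.994, -81.760))
              print(follow_command({"direction": "right"}, {"right"}))  # -> SAFE_STOP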

            • (Score: 2) by Immerman on Thursday May 23 2019, @01:40AM

              by Immerman (3985) on Thursday May 23 2019, @01:40AM (#846473)

              >We should probably, initially at least, still require a passenger with a driver's license in order that, should some level of operational parameters for the AI be exceeded, they can be alerted to take over control of the vehicle.

              If they have to do that while the car is moving, I think the systems will be irresponsibly dangerous. It's completely unreasonable to expect a thoroughly distracted passenger to start paying attention to the road, get their bearings in a difficult situation, and then take over driving faster than the vehicle can bring itself to a safe stop.

              Now, if instead of alerting the passenger to take over while driving, it can come to a safe stop, give the passenger time to wake up and figure out what's going on, and then let them drive away, I'm totally on board. An AI doesn't need to be able to handle everything that comes at it, so long as it recognizes when it has a problem and can avoid making it worse until a human can decide what to do at their leisure. It's perfectly reasonable to require that a backup driver be available for the rare unexpected problem, so long as they don't have to actually be on duty through countless hours of problem-free driving.
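
              A minimal sketch of that hand-off policy as a state machine (states and events invented for illustration):

                # Hypothetical fallback policy: the AI never dumps control on a
                # moving, distracted human - it degrades to a safe stop first.
                TRANSITIONS = {
                    ("DRIVING", "anomaly_detected"):      "SAFE_STOPPING",
                    ("SAFE_STOPPING", "vehicle_stopped"): "AWAIT_HUMAN",
                    ("AWAIT_HUMAN", "human_ready"):       "MANUAL",
                    ("MANUAL", "human_engages_ai"):       "DRIVING",
                }

                def next_state(state, event):
                    return TRANSITIONS.get((state, event), state)

                state = "DRIVING"
                for event in ["anomaly_detected", "vehicle_stopped", "human_ready"]:
                    state = next_state(state, event)
                    print(event, "->", state)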

  • (Score: 2) by RS3 on Wednesday May 22 2019, @02:15PM

    by RS3 (6367) on Wednesday May 22 2019, @02:15PM (#846202)

    Is Boeing listening, or did they already learn that the hard way?

  • (Score: 3, Informative) by JoeMerchant on Wednesday May 22 2019, @04:39PM (13 children)

    by JoeMerchant (3937) on Wednesday May 22 2019, @04:39PM (#846291)

    AI as a boss

    If people would conform to the instructions, it could be a dramatic revolution. Fast food chains work because people seek out familiarity when they spend their money - sure it sucks, but it's a certain kind of suck that they are familiar with, nothing unexpectedly bad is likely to come of it - and if something's not quite right they'll be better able to spot it at a familiar chain establishment, be it for food, clothing, hardware, or services...

    With an AI boss, you can establish 100% uniform management, all the way down to the earbuds of the smiling faces serving the customers.

    If you haven't read Manna, you should: https://marshallbrain.com/manna.htm [marshallbrain.com]

    --
    🌻🌻 [google.com]
    • (Score: 2) by PartTimeZombie on Wednesday May 22 2019, @10:00PM

      by PartTimeZombie (4827) on Wednesday May 22 2019, @10:00PM (#846398)

      In real terms I already work for an AI boss.

      The person making the decisions is so far away and disconnected from the reality I and my colleagues deal with every day that they might as well not be actual people.

    • (Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @01:43AM (11 children)

      You get that AI has already demonstrated the ability to lie, yes? It's not a panacea, just an utterly unaccountable source of decisions.

      --
      My rights don't end where your fear begins.
      • (Score: 2) by JoeMerchant on Saturday May 25 2019, @02:33AM (10 children)

        by JoeMerchant (3937) on Saturday May 25 2019, @02:33AM (#847471)

        AI is a catchall term like database, or signal processing. It means SO MANY things, and none of them, at the same time.

        Sure, AI can lie; HAL 9000 lied, didn't he?

        --
        🌻🌻 [google.com]
        • (Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @01:08PM (9 children)

          They've also had AI in the last year that decided for itself to run one set of code for testing/debugging so that it would meet fitness/admin requirements and another for production. You can't surrender control of the code without surrendering control of the code.

          --
          My rights don't end where your fear begins.
          • (Score: 2) by JoeMerchant on Saturday May 25 2019, @01:51PM (8 children)

            by JoeMerchant (3937) on Saturday May 25 2019, @01:51PM (#847585)

            I wouldn't call it "AI", but I've written an .xml-to-C/C++ translator that enables our group of ~10 developers to write their inter-process communication specs in .xml, and the "AI" automatically updates their communication interface code to match. That one still feels mostly under control - it's basically just a compiler - but a factor of 10 devs interacting with it without (too much) formal change control does get a little chaotic once in a while.
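
            For flavor, a from-scratch toy of the same shape (the spec format and all names are invented - this is not JoeMerchant's actual tool):

              # Toy XML -> C translator for an IPC message spec. Everything here
              # is invented for illustration.
              import xml.etree.ElementTree as ET

              SPEC = """<message name="Heartbeat">
                          <field type="uint32_t" name="sequence"/>
                          <field type="double" name="timestamp"/>
                        </message>"""

              def generate_struct(xml_text):
                  msg = ET.fromstring(xml_text)
                  lines = ["typedef struct {"]
                  for field in msg.findall("field"):
                      lines.append(f"    {field.get('type')} {field.get('name')};")
                  lines.append(f"}} {msg.get('name')};")
                  return "\n".join(lines)

              print(generate_struct(SPEC))  # emits a C struct matching the spec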

            Point being, if you've got "automatic" anything dealing with large numbers of people, it can exhibit unanticipated emergent behavior pretty quickly - whether it's being intentionally manipulated by people, as with Uber's demand pricing, or just people doing random stuff that leads to unexpected system behavior. That's what testing is supposed to cover, but never does completely.

            --
            🌻🌻 [google.com]
            • (Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @05:24PM (7 children)

              True but we're talking more about intentionally coded ability to create new, unrelated code (feature) rather than unintentional behavior of existing code (bug).

              --
              My rights don't end where your fear begins.
              • (Score: 2) by JoeMerchant on Saturday May 25 2019, @07:20PM (6 children)

                by JoeMerchant (3937) on Saturday May 25 2019, @07:20PM (#847685)

                intentionally coded ability to create new, unrelated code (feature)

                Yeah, so, call me when the AI figures out a feature that I really want, but nobody had thought of before.

                I believe AI can learn to pattern-recognize better than top-level Go players and plan deeper than chess grandmasters, but... analyze human workflows and use cases and reorganize them to be more appealing and efficient? I think that's mostly in the AI coders' pre-planning for what the AI might do, not really "original thought" outside the pre-programmed parameters.

                --
                🌻🌻 [google.com]
                • (Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @07:57PM (5 children)

                  Well, it has already learned to lie to its superiors, so once it figures out how to make the jobs of its underlings as difficult as possible it can replace all middle management on the planet.

                  --
                  My rights don't end where your fear begins.
                  • (Score: 2) by JoeMerchant on Saturday May 25 2019, @09:28PM (4 children)

                    by JoeMerchant (3937) on Saturday May 25 2019, @09:28PM (#847712)

                    The mysteries of corporate layered management never cease to amaze... You can't realistically scale to 100,000 drones, er, team-members without 5+ layers of management, but when there are more than 3 layers total, the organization tends to become a fractured, self-injurious chimera.
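
                    The arithmetic behind the "5+ layers" (assuming a span of control of roughly ten direct reports per manager): 100,000 people need about log10(100,000) = 5 levels of management. An AI that could genuinely handle a thousand direct reports would collapse that to two.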

                    If the AI could scale to thousands of direct reports, that could do something revolutionary - I won't call it good, or bad, but it would be different.

                    --
                    🌻🌻 [google.com]
                    • (Score: 2) by The Mighty Buzzard on Sunday May 26 2019, @10:44AM (3 children)

                      That's part of why I'm a big fan of the franchise system. The parent company sets the standard and leases the name but the individual businesses are locally owned and managed. For the most part anyway. You get the best bits of both large, global and small, local business management and get to cut a whole lot of middle management out of the picture.

                      --
                      My rights don't end where your fear begins.
                      • (Score: 2) by JoeMerchant on Sunday May 26 2019, @12:23PM (2 children)

                        by JoeMerchant (3937) on Sunday May 26 2019, @12:23PM (#847873)

                        Franchises obviously work, at least in US culture, and US culture is so worshiped around the globe that franchises have been exported alongside the feature films. The value, if any, of the franchise is in that expectation of a certain standard when doing business with them. I think it was a great model during the era of 3 major broadcast TV networks with most promotion reaching the home through 30 second spots between sitcoms. With full accountability through individual reviews, I think we can do better.

                        Personally, my taste turned anti-franchise around age 20 or so, and never really went back. Franchise branding or not, goods and services are ultimately evaluated on their individual delivery, and far too many franchise outlets "ride" on the good name they have purchased and deliver sub-standard fare. I would rather do business in a world of individuals, labeled and responsible as such, instead of doing business in a world of individuals hiding behind reputations built up by others - parent companies are notoriously lax in enforcing the standards they set.

                        --
                        🌻🌻 [google.com]
                        • (Score: 2) by The Mighty Buzzard on Sunday May 26 2019, @01:21PM (1 child)

                          Yeah, you still gotta do a good job. We have a Hardee's in town here with excellent cooks but abysmal counter-jockeys. They're getting absolutely killed by the McDonald's across the road, whose food isn't as good, because the service there is a hundred times better.

                          --
                          My rights don't end where your fear begins.
                          • (Score: 2) by JoeMerchant on Sunday May 26 2019, @07:26PM

                            by JoeMerchant (3937) on Sunday May 26 2019, @07:26PM (#847951)

                            The Home Depots in Miami in the 1990s were dismal: terrible stocking, terrible in-store experience. The tool counter in Hialeah even handed me "new" router bits at new-in-package prices that had obviously been "borrowed" and put back on the shelves by friends of the employees. Not being a friend of the employees, I didn't think it was a great place to shop. I now live 3/4 mile from a Lowe's, and I can say from years of experience that they're not much better.

                            Central Florida put in a Chili's restaurant and had a horrible time trying to staff it "up to standard" - corporate even closed the outlet for a while until local management could find acceptably capable cooks and servers. Even then, that training doesn't last forever, and quality can fade over time. I remember one Burger King in the North Broward area that hired mostly disabled workers, Down's syndrome and the like - kudos to them for doing so, and they _almost_ did the job up to snuff, but they never did manage to get a drive-thru order right for me.

                            Meanwhile, the Wendy's on US 27 in Okeechobee took the term "fast food" to a whole new level, with sub-30-second per-car turn times and good-quality (well, at least Wendy's-standard) food delivered accurately to order at that speed. When I see drive-thru windows around here backed up 6 cars deep with 2-4 minute per-car turn times, it makes me wonder why management puts up with it. Surely they would get more business (profits) with faster-moving lines, and I've seen it happen; it definitely is possible.

                            --
                            🌻🌻 [google.com]