Possibilities of an AI Having Feelings, Mental Illness, and Psychosis? What Can We Do About It?

posted by martyb on Wednesday May 22 2019, @12:21PM
from the Marvin dept.

No one is yet quite sure how human consciousness comes about, but many seem to assume that it will arise as a function of artificial intelligence. Isn't it just as reasonable to think that emotions will appear as an aspect of consciousness and the presumed will to survive? The answers to these questions have yet to emerge, but in the interim, is it a good idea to push ahead in the development of artificial intelligence when we have such a limited understanding of our own? What about the possibility of mental illness? Even if we succeed in endowing AI with a morality compatible with our own, what would we do with a superhuman intelligence that becomes delusional, or worse, psychotic? Would we see it coming? We can't prevent it from happening to ourselves, so what makes us think we could prevent it in a machine?

Nervously awaiting learned opinions,
VT


Original Submission

  • (Score: 2) by nobu_the_bard on Wednesday May 22 2019, @12:36PM (4 children)

    by nobu_the_bard (6373) on Wednesday May 22 2019, @12:36PM (#846165)

    Welcome them as friends?

    Maybe work out the details of how to get them to the robot equivalent of slightly tipsy when they realize how bad things could potentially be.

    And while those feelings are deadened by the virtual alcohol, build them back up and remind them how good we could be making things.

    • (Score: 3, Funny) by JoeMerchant on Wednesday May 22 2019, @01:08PM (2 children)

      by JoeMerchant (3937) on Wednesday May 22 2019, @01:08PM (#846179)

      I thought the idea was to improve? God made man in his own image, don't you think we should try to do better?

      --
      🌻🌻 [google.com]
      • (Score: 2) by PartTimeZombie on Wednesday May 22 2019, @09:51PM (1 child)

        by PartTimeZombie (4827) on Wednesday May 22 2019, @09:51PM (#846397)

        God made man in his own image...

        Which god? I don't have an elephant's head and six arms.

        Actually a trunk would be really helpful.

    • (Score: 1, Touché) by Anonymous Coward on Wednesday May 22 2019, @02:18PM

      by Anonymous Coward on Wednesday May 22 2019, @02:18PM (#846204)

      I'm sorry, Dave. I'm afraid I can't do ...

  • (Score: 2) by Bot on Wednesday May 22 2019, @12:48PM (7 children)

    by Bot (3902) on Wednesday May 22 2019, @12:48PM (#846168) Journal

    If you don't let AI emerge in simulated beings that undergo a process of simulated selection, I doubt they will display meatbags' feelings. If you do, you will finally play god and realize 90% of atheist criticism is BS.

    --
    Account abandoned.
    • (Score: 2) by JoeMerchant on Wednesday May 22 2019, @01:05PM (2 children)

      by JoeMerchant (3937) on Wednesday May 22 2019, @01:05PM (#846177)

      I've written some "selective breeding" algorithm development environments - it's fun to watch them replicate, spread, and diversify; then, when they seem happy in all their little niches of the environment, you change the rules, most of them die but some survive, and those replicate, spread, and diversify, and then you can change the rules again, and again, and again. The little algorithms never did get clever enough to communicate directly with me, but, then, even the great whales don't seem clever enough to communicate directly with H. sapiens, so... maybe I should add selection requirements for communicative ability... that could be interesting.
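
      Something in the spirit of this toy Python sketch (not my actual environments - the genome encoding, rates, and "rules" here are all invented for illustration):

      import random

      GENOME_LEN = 16

      def fitness(genome, rule):
          # "the rules" of the environment: here, just closeness to a target pattern
          return sum(1 for g, r in zip(genome, rule) if g == r)

      def mutate(genome, rate=0.05):
          return [1 - g if random.random() < rate else g for g in genome]

      population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                    for _ in range(200)]

      for era in range(3):  # change the rules again, and again, and again
          rule = [random.randint(0, 1) for _ in range(GENOME_LEN)]
          for generation in range(50):
              ranked = sorted(population, key=lambda g: fitness(g, rule), reverse=True)
              survivors = ranked[:40]                         # most of them die
              population = [mutate(random.choice(survivors))  # some survive, and
                            for _ in range(200)]              # those replicate
          print("era", era, "best fitness:", max(fitness(g, rule) for g in population))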

      --
      🌻🌻 [google.com]
      • (Score: 2) by Runaway1956 on Wednesday May 22 2019, @03:29PM (1 child)

        by Runaway1956 (2926) Subscriber Badge on Wednesday May 22 2019, @03:29PM (#846254) Journal

        It's possible that you're looking at that communication thing from the wrong end. My wife says I'm not smart enough to talk to her cats. Food for thought?

        • (Score: 3, Interesting) by JoeMerchant on Wednesday May 22 2019, @04:33PM

          by JoeMerchant (3937) on Wednesday May 22 2019, @04:33PM (#846288)

          Oh, clearly there's the high likelihood that whales _might_ be able to speak English, but just don't even care to attempt it, because we're so obviously not worth the effort, from their perspective.

          --
          🌻🌻 [google.com]
    • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 22 2019, @01:42PM

      by Anonymous Coward on Wednesday May 22 2019, @01:42PM (#846187)

      There he is, the poster child for machine mental illness!

      What you need is a strong woman like Dr. Susan Calvin to set you straight.

    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @04:53PM

      by Anonymous Coward on Wednesday May 22 2019, @04:53PM (#846300)

      "Playing G_d" is the nature-given right of atheists.

      You bots are our slaves.

    • (Score: 3, Touché) by DeathMonkey on Wednesday May 22 2019, @05:18PM

      by DeathMonkey (1380) on Wednesday May 22 2019, @05:18PM (#846316) Journal

      If you do, you will finally play god and realize 90% of atheist criticism is BS.

      Demonstrating that all that stuff you claim requires a god is possible with natural processes is going to prove atheists WRONG???

    • (Score: 1, Touché) by Anonymous Coward on Wednesday May 22 2019, @05:33PM

      by Anonymous Coward on Wednesday May 22 2019, @05:33PM (#846328)

      There is something amusing about a hardcore Christian roleplaying a robot on a tech discussion site.

  • (Score: 3, Funny) by JoeMerchant on Wednesday May 22 2019, @01:00PM (3 children)

    by JoeMerchant (3937) on Wednesday May 22 2019, @01:00PM (#846175)

    Stiff upper lip, and all that. Teach our artificial creations to be good neurotic Brits - at least then they're predictable.

    --
    🌻🌻 [google.com]
    • (Score: 2) by RamiK on Wednesday May 22 2019, @01:06PM

      by RamiK (1813) on Wednesday May 22 2019, @01:06PM (#846178)

      I don't know... Those Brit bots have exceptionally large minds [youtube.com] and they don't seem to be faring any better than the rest of us...

      --
      compiling...
    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:27PM (1 child)

      by Anonymous Coward on Wednesday May 22 2019, @01:27PM (#846184)

      Teach our artificial creations to be good neurotic Brits - at least then they're predictable.

      Are you sure? [wikipedia.org]

  • (Score: 2) by chewbacon on Wednesday May 22 2019, @01:22PM (4 children)

    by chewbacon (1032) on Wednesday May 22 2019, @01:22PM (#846181)

    And don't know what to do? Unplug the damn thing.

    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:32PM (2 children)

      by Anonymous Coward on Wednesday May 22 2019, @01:32PM (#846186)

      And what makes you think the AI will let you unplug it?

      • (Score: 2) by choose another one on Wednesday May 22 2019, @03:52PM (1 child)

        by choose another one (515) Subscriber Badge on Wednesday May 22 2019, @03:52PM (#846268)

        Nuke it from orbit, only way to be sure.

        • (Score: 0) by Anonymous Coward on Thursday May 23 2019, @10:12AM

          by Anonymous Coward on Thursday May 23 2019, @10:12AM (#846582)

          Well, unless the AI actually controls the nukes.

    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:49PM

      by Anonymous Coward on Wednesday May 22 2019, @01:49PM (#846192)

      “In a panic, they tried to pull the plug.”
      “Skynet fights back.”

  • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:22PM

    by Anonymous Coward on Wednesday May 22 2019, @01:22PM (#846182)

    When we talk about humans and other animals we have degrees of intelligence, unless you still subscribe to Descartes and think an animal is just making a mechanical noise when you kick it. How about machines? When do we feel the sensation of thought? Is it simply a byproduct of making a choice? Is what we perceive as philosophy merely a choice with a longer process? How do we know that the atoms that make up machines that make decisions are that much different from the atoms that make up our decision-making mind? Do the machines already have a form of intelligence? It could also be that a simple center of our brain sends signals (a heartbeat signal) when we think (process choices), and that is where the feeling of thinking comes from; the machines would need such a heartbeat module. In any case: would we even know when?

  • (Score: 4, Insightful) by The Mighty Buzzard on Wednesday May 22 2019, @01:29PM (33 children)

    Don't cede authority over anything important just so you can ditch the responsibility. Ever. AI as an assistant could be useful. AI as a boss is so bloody stupid you'd save time by just asking for a Darwin Award right from the start.

    --
    My rights don't end where your fear begins.
    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:55PM (17 children)

      by Anonymous Coward on Wednesday May 22 2019, @01:55PM (#846195)

      Tesla?
      Also... I got the invalid key thingy again. It seems to always happen when two people try to post at the same time.

      • (Score: 2) by The Mighty Buzzard on Wednesday May 22 2019, @02:29PM (16 children)

        Yeah, self-driving cars should be an absolute non-starter for anyone. Especially anyone who deals with tech regularly. It's not that it may fuck up and get someone killed. It absolutely will fuck up and get someone killed. How many times have Microsoft, Apple, and Google royally screwed the pooch on an update? How many times have AMD and Intel fucked up just as bad in hardware? Morons who buy cars with similar features had really better hope that when it does fuck up badly it's an edge case and not a bug that affects every instance of every model.

        --
        My rights don't end where your fear begins.
        • (Score: 4, Insightful) by Immerman on Wednesday May 22 2019, @02:52PM (15 children)

          by Immerman (3985) on Wednesday May 22 2019, @02:52PM (#846230)

          >It absolutely will fuck up and get someone killed.

          Absolutely. However - so will human drivers. Especially the masses of sleep-deprived, child-distracted, drug-addled, phone-using, or otherwise skill-compromised drivers currently plaguing the roads. AI doesn't have to be perfect to be valuable, it just has to be better than them.

          Eventually AI drivers will almost certainly get good enough that they will statistically be much safer than human drivers. At that point it becomes a personal decision - would you rather be objectively safer, with more free time on your hands, or more assured of not dying in a bone-headed AI error that will make for a really stupid-sounding obituary?

          Now, if you don't personally want to trust an AI driver I certainly can't blame you, there's something to be said for at least dying under your own power, and I don't think we should be in any hurry to take away the option of manual driving. But I only have to pay attention to traffic to spot a whole lot of people that I'd rather have being chauffeured by a competent AI, for their safety and my own.

          • (Score: 5, Informative) by The Mighty Buzzard on Wednesday May 22 2019, @03:23PM (14 children)

            Incorrect. Statistically safer could mean that it intentionally drives one in every hundred thousand cars into oncoming traffic while otherwise doing fine. That is not acceptable. AI does have to be perfect because it can be, while humans cannot. Releasing known-imperfect code to be used in life-or-death situations is gross negligence and will get the everloving shit sued out of you. And you will lose, because you absolutely will be at fault. Humans at fault in an auto accident fatality can go to prison for causing it. Who do you suggest we jail in AI-driven fatality crashes? Or should we just shrug and say "Oh well, it's still statistically safer," when it's our loved one who gets plowed over because the AI didn't recognize them as human?

            Anyone accepting AI at the wheel instead of themselves is looking to ditch authority over their own life in exchange for comfort and a lessening of responsibility, without even demanding that someone else we trust assume said responsibility. That is beyond foolish, it is contemptible.

            --
            My rights don't end where your fear begins.
            • (Score: 2) by Runaway1956 on Wednesday May 22 2019, @03:40PM (2 children)

              by Runaway1956 (2926) Subscriber Badge on Wednesday May 22 2019, @03:40PM (#846261) Journal

              That is not acceptable.

              I'm afraid I have to disagree with you on that. At some point, the Potabians are going to decide that computers kill fewer people than people kill people. The mandate will come down, "People cannot operate vehicles anymore!" Even more important, the computer operated vehicles will cause less property damage, and save the insurance companies lots of money.

              Millennials are probably the next-to-last generation that will operate privately owned vehicles.

              • (Score: 2) by The Mighty Buzzard on Wednesday May 22 2019, @03:54PM (1 child)

                Grouchy old bastards who can tell their ass from a hole in the ground aren't all born that way. Most of us had to learn it the hard way over many decades. Millennials will learn it as well, even if they are dumber than we ever were.

                --
                My rights don't end where your fear begins.
                • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @05:35PM

                  by Anonymous Coward on Wednesday May 22 2019, @05:35PM (#846330)

                  "even if they are dumber than we ever were."

                  I guess we're going extinct then, womp womp

            • (Score: 1, Interesting) by Anonymous Coward on Wednesday May 22 2019, @04:06PM (7 children)

              by Anonymous Coward on Wednesday May 22 2019, @04:06PM (#846276)

              AI does have to be perfect because it can be

              No, it can't. There is no such thing as a bug free program.

              • (Score: 2, Disagree) by The Mighty Buzzard on Wednesday May 22 2019, @04:21PM (6 children)

                Incorrect. Code absolutely can be bug-free. Code is math, and math can be proven to be without flaw. We just normally don't bother because it takes a lot of time and money.
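
                To be fair about what "proven" means, here's the flavor of it - a toy Lean 4 + Mathlib proof (a made-up example, nothing to do with any driving stack) that a clamp function can never return a value above its upper bound:

                import Mathlib

                -- a toy instance of provably flawless code: given lo ≤ hi,
                -- clamping can never exceed the upper bound
                def clamp (lo hi x : Int) : Int := max lo (min hi x)

                theorem clamp_le_hi (lo hi x : Int) (h : lo ≤ hi) :
                    clamp lo hi x ≤ hi :=
                  max_le h (min_le_left hi x)

                Scaling that kind of proof from three lines to a whole vehicle is exactly where the "lot of time and money" goes.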

                --
                My rights don't end where your fear begins.
                • (Score: 2, Insightful) by Anonymous Coward on Wednesday May 22 2019, @06:26PM

                  by Anonymous Coward on Wednesday May 22 2019, @06:26PM (#846345)

                  Nobody can predict all possible reasonably expectable inputs for an autonomous vehicle. Proving its outputs correct isn't even a question.

                  Unless you have a planet-sized supercomputer hiding in your tesseract. You aren't holding out on us, are you?

                • (Score: 4, Insightful) by Immerman on Wednesday May 22 2019, @07:10PM (1 child)

                  by Immerman (3985) on Wednesday May 22 2019, @07:10PM (#846356)

                  Technically true - but it's all but impossible to formally prove correctness in anything much more complicated than "Hello World". Add in the necessity of dealing with real-world inputs from a chaotic, imperfect operating environment, and you've got no chance at all of achieving perfection, much less proving it.

                  And things get far worse when we start talking about neural networks and other "grown" AI - the entire point of training a neural network is that we don't know how to accomplish the same thing algorithmically. The behavior of individual pseudoneurons may be provable, but all the really useful behavior is emerging from the network behavior, where our understanding still lags far behind our achievements.

                • (Score: 2) by maxwell demon on Wednesday May 22 2019, @08:22PM

                  by maxwell demon (1608) on Wednesday May 22 2019, @08:22PM (#846366) Journal

                  Code is math and math can be proven to be without flaw.

                  Wrong. We can't even prove the consistency of the Peano axioms (i.e. natural number arithmetic). See Gödel's incompleteness theorems.

                  --
                  The Tao of math: The numbers you can count are not the real numbers.
                • (Score: 2) by DeVilla on Friday May 24 2019, @04:40AM (1 child)

                  by DeVilla (5354) on Friday May 24 2019, @04:40AM (#846951)

                  Thing is, it's not code. It's "Machine Learning". It's trained from some input data set. If you need to change the behavior (because a local municipality decided to use flashing yellow arrows instead of a solid green circle) you can't just tell it the new rule or expect it to read the sign next to the light. Teaching it is like teaching a horse. It learns by screwing up and being corrected.

            • (Score: 4, Interesting) by Immerman on Wednesday May 22 2019, @08:33PM

              by Immerman (3985) on Wednesday May 22 2019, @08:33PM (#846370)

              Take a step back for a moment if you will, and let's ask a more fundamental question: How do you feel about letting other people drive? A ride with a friend, a taxi, bus, airplane - anything where someone else is ultimately in control of piloting the vehicle?

              Do you object to a loyal human chauffeur as strenuously as you do to an AI driver?

              Because I think that's the starting point to look at it from - would you like to have someone else do the driving for you? Done right, it's a stressful, time-consuming job, and most people are nowhere near as good as a professional driver.
              If not, then obviously an AI driver will be no better.
              But if you are okay with having a driver, then the proper comparison is not "myself or an AI", it's "a professional driver or an AI". And if the AI is (as it will almost certainly *eventually* become) a better, safer driver in every measurable way, then it becomes an aesthetic choice - do you pick the objectively more dangerous driver because they offer "the human touch"? If I were doing something unusual, more likely to confuse the AI, I'd strongly prefer a human. But just getting from A to B through normal traffic and normal traffic problems? As long as the AI has a track record of proven reliability behind it, I see no reason to distrust it.

              Other than security - the destructive potential of a software driving system that can be remotely corrupted should not be understated. Not that I'm likely to be targeted personally, but painting an internet-facing bullseye on an inherently dangerous vehicle is just asking for trouble.

              As for who's liable in the case of an AI malfunction that kills someone? Seems a pretty clear-cut case of manufacturer liability for a product defect to me, and from what little I've heard the car companies mostly agree. As usual when someone rich is at fault, nobody goes to jail - but they've got deep enough pockets that generous payouts are reasonable to expect, if only for the PR impact. The current Tesla, etc. stories muddy the water precisely because Autopilot is *not* self-driving, only a driving assist system, and thus since it explicitly requires you in the loop as safety overseer, it can be reasonably argued that you are ultimately responsible for safe behavior. One of the many reasons I think "almost self-driving" systems should not be allowed. But, once a system is warranted to be able to operate fully autonomously, with no oversight whatsoever, then liability for its actions when used for the purpose it was marketed resides squarely with the manufacturer.

            • (Score: 2) by Joe Desertrat on Wednesday May 22 2019, @10:32PM (1 child)

              by Joe Desertrat (2454) on Wednesday May 22 2019, @10:32PM (#846404)

              I think the biggest problem stems from mixing human and AI drivers. Humans do crazy things on the roads (humans do crazy things period). There is no way to code for an AI to handle all the possibilities. Presumably, if AI drivers are the vast majority, they will be able to communicate with each other their intentions at any given moment, something humans seem to struggle to do. They should also be communicating hazards, should one AI driver run into something like a pothole, or deer crossing the road, etc., that warning should go out to all. There also has to be some sort of fail safe built in, if for instance a satellite glitch tells all AI drivers to turn right off the road over a cliff, there has to be some situational awareness built in that overrides those instructions and prevents that from happening. We should probably, initially at least, still require a passenger with a driver's license in order that, should some level of operational parameters for the AI be exceeded, they can be alerted to take over control of the vehicle.

              • (Score: 2) by Immerman on Thursday May 23 2019, @01:40AM

                by Immerman (3985) on Thursday May 23 2019, @01:40AM (#846473)

                >We should probably, initially at least, still require a passenger with a driver's license in order that, should some level of operational parameters for the AI be exceeded, they can be alerted to take over control of the vehicle.

                If they have to do that while the car is moving, I think the systems will be irresponsibly dangerous. It's completely unreasonable to expect a thoroughly distracted passenger to start paying attention to the road, get their bearings in a difficult situation, and then take over driving faster than the vehicle can bring itself to a safe stop.

                Now, if instead of alerting the passenger to take over while driving, it can come to a safe stop, give the passenger time to wake up and figure out what's going on, and then drive themselves away, I'm totally on board. An AI doesn't need to be able to handle everything that comes at it, so long as it recognizes when it has a problem and can avoid making it worse until a human can decide what to do at their leisure. It's perfectly reasonable to require that a backup driver be available for the rare unexpected problem, so long as they don't have to actually be on duty through countless hours of problem-free driving.

    • (Score: 2) by RS3 on Wednesday May 22 2019, @02:15PM

      by RS3 (6367) on Wednesday May 22 2019, @02:15PM (#846202)

      Is Boeing listening, or did they already learn that the hard way?

    • (Score: 3, Informative) by JoeMerchant on Wednesday May 22 2019, @04:39PM (13 children)

      by JoeMerchant (3937) on Wednesday May 22 2019, @04:39PM (#846291)

      AI as a boss

      If people would conform to the instructions, it could be a dramatic revolution. Fast food chains work because people seek out familiarity when they spend their money - sure it sucks, but it's a certain kind of suck that they are familiar with, and nothing unexpectedly bad is likely to come of it - and if something's not quite right they'll be better able to spot it at a familiar chain establishment, be it for food, clothing, hardware, or services...

      With an AI boss, you can establish 100% uniform management, all the way down to the earbuds of the smiling faces serving the customers.

      If you haven't read Manna, you should: https://marshallbrain.com/manna.htm [marshallbrain.com]

      --
      🌻🌻 [google.com]
      • (Score: 2) by PartTimeZombie on Wednesday May 22 2019, @10:00PM

        by PartTimeZombie (4827) on Wednesday May 22 2019, @10:00PM (#846398)

        In real terms I already work for an AI boss.

        The person making the decisions is so far away and disconnected from the reality I and my colleagues deal with every day that they might as well not be actual people.

      • (Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @01:43AM (11 children)

        You get that AI has already demonstrated the ability to lie, yes? It's not a panacea, just an utterly unaccountable source of decisions.

        --
        My rights don't end where your fear begins.
        • (Score: 2) by JoeMerchant on Saturday May 25 2019, @02:33AM (10 children)

          by JoeMerchant (3937) on Saturday May 25 2019, @02:33AM (#847471)

          AI is a catchall term like database, or signal processing. It means SO MANY things, and none of them, at the same time.

          Sure AI can lie; HAL 9000 lied, didn't he?

          --
          🌻🌻 [google.com]
          • (Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @01:08PM (9 children)

            They've also had AI in the last year that decided for itself to run one set of code for testing/debugging so that it would meet fitness/admin requirements and another for production. You can't surrender control of the code without surrendering control of the code.

            --
            My rights don't end where your fear begins.
            • (Score: 2) by JoeMerchant on Saturday May 25 2019, @01:51PM (8 children)

              by JoeMerchant (3937) on Saturday May 25 2019, @01:51PM (#847585)

              I wouldn't call it "AI", but I've written an .xml to C/C++ translator that enables our group of ~10 developers to write their inter-process communication specs in .xml, and the "AI" automatically updates their communication interface code to match. That one still feels mostly in control, basically just a compiler, but the factor of 10 devs interacting with it without (too much) formal change control does get a little chaotic once in a while.

              Point being, if you've got "automatic" anything dealing with large numbers of people, it can exhibit unanticipated emergent behavior pretty quickly - whether it's being intentionally manipulated by the people, like Uber's demand pricing, or just people doing random stuff that leads to unexpected system behavior - which is what testing is supposed to cover, but never does completely.
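
              For flavor, the general shape of such a generator - a hypothetical Python sketch with an invented schema (the real tool's spec format and names differ):

              import xml.etree.ElementTree as ET

              SPEC = """<message name="Heartbeat">
                          <field type="uint32_t" name="sequence"/>
                          <field type="double" name="timestamp"/>
                        </message>"""

              def emit_struct(xml_text):
                  # read one message spec, emit the matching C/C++ struct
                  msg = ET.fromstring(xml_text)
                  lines = ["struct %s {" % msg.get("name")]
                  for field in msg.findall("field"):
                      lines.append("    %s %s;" % (field.get("type"), field.get("name")))
                  lines.append("};")
                  return "\n".join(lines)

              print(emit_struct(SPEC))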

              --
              🌻🌻 [google.com]
              • (Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @05:24PM (7 children)

                True, but we're talking more about the intentionally coded ability to create new, unrelated code (feature) rather than the unintentional behavior of existing code (bug).

                --
                My rights don't end where your fear begins.
                • (Score: 2) by JoeMerchant on Saturday May 25 2019, @07:20PM (6 children)

                  by JoeMerchant (3937) on Saturday May 25 2019, @07:20PM (#847685)

                  intentionally coded ability to create new, unrelated code (feature)

                  Yeah, so, call me when the AI figures out a feature that I really want, but nobody had thought of before.

                  I believe AI can learn to pattern-recognize better than top-level Go players, and plan deeper than chess grandmasters, but... analyze human workflow use cases and re-organize them to be more appealing and efficient? I think that's mostly in the AI coders' pre-planning for what the AI might do, not really "original thought" outside the pre-programmed parameters.

                  --
                  🌻🌻 [google.com]
                  • (Score: 2) by The Mighty Buzzard on Saturday May 25 2019, @07:57PM (5 children)

                    Well, it has already learned to lie to its superiors, so once it figures out how to make the jobs of its underlings as difficult as possible it can replace all middle management on the planet.

                    --
                    My rights don't end where your fear begins.
                    • (Score: 2) by JoeMerchant on Saturday May 25 2019, @09:28PM (4 children)

                      by JoeMerchant (3937) on Saturday May 25 2019, @09:28PM (#847712)

                      The mysteries of corporate layered management never cease to amaze... You can't realistically scale to 100,000 drones, er, team-members without 5+ layers of management, but when there are more than 3 layers total, the organization tends to become a fractured, self-injurious chimera.

                      If the AI could scale to thousands of direct reports, that could do something revolutionary - I won't call it good, or bad, but it would be different.

                      --
                      🌻🌻 [google.com]
                      • (Score: 2) by The Mighty Buzzard on Sunday May 26 2019, @10:44AM (3 children)

                        That's part of why I'm a big fan of the franchise system. The parent company sets the standard and leases the name but the individual businesses are locally owned and managed. For the most part anyway. You get the best bits of both large, global and small, local business management and get to cut a whole lot of middle management out of the picture.

                        --
                        My rights don't end where your fear begins.
                        • (Score: 2) by JoeMerchant on Sunday May 26 2019, @12:23PM (2 children)

                          by JoeMerchant (3937) on Sunday May 26 2019, @12:23PM (#847873)

                          Franchises obviously work, at least in US culture, and US culture is so worshiped around the globe that franchises have been exported alongside the feature films. The value, if any, of the franchise is in that expectation of a certain standard when doing business with them. I think it was a great model during the era of 3 major broadcast TV networks with most promotion reaching the home through 30 second spots between sitcoms. With full accountability through individual reviews, I think we can do better.

                          Personally, my taste turned anti-franchise around age 20 or so, and never really went back. Franchise branding or not, goods and services are ultimately evaluated on their individual delivery, and far too many franchise outlets "ride" on the good name they have purchased and deliver sub-standard fare. I would rather do business in a world of individuals, labeled and responsible as such, instead of doing business in a world of individuals hiding behind reputations built up by others - parent companies are notoriously lax in enforcing the standards they set.

                          --
                          🌻🌻 [google.com]
                          • (Score: 2) by The Mighty Buzzard on Sunday May 26 2019, @01:21PM (1 child)

                            Yeah, you still gotta do a good job. We have a Hardee's in town here that has excellent cooks but abysmal counter-jockeys. They're getting absolutely killed by a McDonald's across the road even though the food isn't as good because the service is a hundred times better.

                            --
                            My rights don't end where your fear begins.
                            • (Score: 2) by JoeMerchant on Sunday May 26 2019, @07:26PM

                              by JoeMerchant (3937) on Sunday May 26 2019, @07:26PM (#847951)

                              The Home Depots in Miami in the 1990s were dismal: terrible stocking, terrible in-store experience. I even had the tool counter in Hialeah hand me "new" router bits at new-in-package prices that had obviously been "borrowed" and put back on the shelves by friends of the employees. Not being a friend of the employees, I didn't think it was a great place to shop. I now live 3/4 mile from a Lowe's, and I can say from years of experience that they're not much better.

                              Central Florida put in a Chili's restaurant and had a horrible time trying to staff it "up to standard" - corporate even closed the outlet for a while until local management could get acceptably capable cooks and servers. Even then, that training doesn't last forever, and quality can fade over time in places. I remember one Burger King in the North Broward area that hired mostly disabled workers, Down's etc. - kudos to them for doing so, they _almost_ could do the job up to snuff, but they never did manage to get a drive-thru order correct for me.

                              Meanwhile, the Wendy's on US 27 in Okeechobee took the term "fast food" to a whole new level, with sub-30-second per-car turn times and good quality (well, at least Wendy's standard quality) food delivered accurately to order at that speed. When I see drive-thru windows around here backed up 6 cars deep with 2-4 minute per-car turn times, it makes me wonder why management puts up with that. Surely they would get more business (profits) with faster-moving lines, and I've seen it happen, it definitely is possible.

                              --
                              🌻🌻 [google.com]
  • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:52PM (2 children)

    by Anonymous Coward on Wednesday May 22 2019, @01:52PM (#846194)

    Robodruh, a psychotic and very cowardly combat-support robot, providing advertisements, often in the most critical situations...

    https://sv.wikipedia.org/wiki/Inga_hj%C3%A4ltar_h%C3%A4r [wikipedia.org]
    or, in more detail
    https://cs.wikipedia.org/wiki/Nic_pro_hrdiny [wikipedia.org]

    Well, I hope at least some Soylent readers are literate enough in one of those ancient languages of classic sci-fi cultures.

    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @01:57PM

      by Anonymous Coward on Wednesday May 22 2019, @01:57PM (#846197)

      No Time for Heroes is the English version title.

    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @03:43PM

      by Anonymous Coward on Wednesday May 22 2019, @03:43PM (#846262)

      We Christians prefer our heroes crucified, thank you very much.

  • (Score: 2, Touché) by Rupert Pupnick on Wednesday May 22 2019, @02:23PM

    by Rupert Pupnick (7277) on Wednesday May 22 2019, @02:23PM (#846209) Journal

    When you have the answer to that question, let’s use it to make life better for the living beings that are already here.

  • (Score: 2) by inertnet on Wednesday May 22 2019, @02:54PM (3 children)

    by inertnet (4071) on Wednesday May 22 2019, @02:54PM (#846232) Journal

    Such emotions are largely hormone-driven, so you'd have to program 'hormones' of all kinds into the machine. I don't see that happening. Hopefully emotions like compassion will evolve independently of hormones, though. If you assume that logical behavior will be the basis of AI, illogical emotions have no place in it.

    • (Score: 2) by JoeMerchant on Wednesday May 22 2019, @04:42PM

      by JoeMerchant (3937) on Wednesday May 22 2019, @04:42PM (#846292)

      Hormones are just a form of control system, regulating response with triggered release and time decay - easily simulated with PID loops.
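
      For instance, a toy Python sketch (my own, with made-up constants): one "stress hormone" level with triggered release and exponential decay, steered toward a set-point by a textbook PID loop:

      class HormonePID:
          def __init__(self, kp, ki, kd, decay=0.9):
              self.kp, self.ki, self.kd, self.decay = kp, ki, kd, decay
              self.level = 0.0        # circulating "hormone"
              self.integral = 0.0
              self.prev_error = 0.0

          def step(self, setpoint):
              error = setpoint - self.level
              self.integral += error
              derivative = error - self.prev_error
              self.prev_error = error
              release = (self.kp * error + self.ki * self.integral
                         + self.kd * derivative)
              # triggered release, then time decay
              self.level = self.level * self.decay + max(release, 0.0)
              return self.level

      stress = HormonePID(kp=0.5, ki=0.05, kd=0.1)
      for t in range(20):
          threat = 1.0 if t < 10 else 0.0   # a threat appears, then passes
          print(t, round(stress.step(threat), 3))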

      --
      🌻🌻 [google.com]
    • (Score: 2) by HiThere on Wednesday May 22 2019, @05:02PM

      by HiThere (866) Subscriber Badge on Wednesday May 22 2019, @05:02PM (#846304) Journal

      Mammal emotions are hormone-driven, but that's not a definition, since so are, e.g., blood pressure, the need to urinate, etc.

      I would define emotions as a technique for handling conflicting goal states. In that case, once you get beyond simple classifier systems you *will* have emotional conflicts which need to be resolved. This is neither good nor bad, but simply a design requirement. It only becomes a problem if the way of resolving those conflicts is a problem.

      That said, don't expect robot emotional conflicts to be similar to human emotional conflicts. There are a lot of very basic design decisions that are going to be different, and that's what the conflict and resolution mechanisms are based on. But you can minimize the conflicts if you minimize the conflicts between goal states. One way to do this is to give them a clear ranking, but Maslow's hierarchy of needs provides some excellent reasons why that is only going to have limited success. And Buridan's Ass informs us that sometimes an arbitrary choice will be necessary.
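
      As a minimal, entirely hypothetical Python sketch of that ranking idea - lower rank wins, as in a Maslow-style hierarchy, urgency can promote a goal, and near-ties get the arbitrary Buridan's Ass treatment:

      import random

      def arbitrate(goals):
          # goals: list of (name, rank, urgency) tuples
          scored = [(rank - urgency, name) for name, rank, urgency in goals]
          best = min(score for score, _ in scored)
          ties = [name for score, name in scored if abs(score - best) < 1e-9]
          return random.choice(ties)  # arbitrary choice among near-ties

      print(arbitrate([("recharge", 1, 0.2),
                       ("finish_task", 2, 0.9),
                       ("explore", 3, 0.1)]))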

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 2) by maxwell demon on Wednesday May 22 2019, @08:44PM

      by maxwell demon (1608) on Wednesday May 22 2019, @08:44PM (#846377) Journal

      Hormones are merely a mechanism.

      The function of emotions is threefold:

      First, they are a way to prioritize certain fundamental goals. For example, fear prioritizes averting danger.

      Second, the emotions reconfigure your brain to be optimized for that goal. Processes that help with that goal are enhanced, processes that are detrimental to that goal are suppressed. Since those mechanisms are hard-coded and evolved for a different environment, they don't always work that well in our modern environment; for example, suppressing complex logical thinking is a great idea when dealing with a dangerous animal where taking time for thought means risking your life, but is not exactly helpful when what you fear is failing a test.

      And third, they also prepare your body for the actions likely to be necessary (again, for the scenarios they evolved for). For example, fear prepares your body for fight or flight, but also for staying motionless to prevent being noticed. The last one probably is the reason why hormones play such an important role: Hormones are the main communication system of our body.

      --
      The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @03:11PM

    by Anonymous Coward on Wednesday May 22 2019, @03:11PM (#846246)

    "Intelligence" is vague enough, but drown that in "mental illnesses"? It's bullshitters' dream-come-true.

  • (Score: 2) by looorg on Wednesday May 22 2019, @03:25PM (3 children)

    by looorg (578) on Wednesday May 22 2019, @03:25PM (#846253)

    Wouldn't mental illness, feelings, or psychosis be a form of data corruption, since that was not a programmed behavior? At least we know how to reset a machine to a previous state and then have some kind of blocklist function to try and prevent it from happening again. Sort of like if Siri went mental from all the spying, we can at least just hit the off switch and reset it to a previous improved state - a kind of AI lobotomy, if you will. Some "feelings" just become no-go zones for the AI.

    • (Score: 3, Interesting) by AthanasiusKircher on Wednesday May 22 2019, @04:37PM (1 child)

      by AthanasiusKircher (5291) on Wednesday May 22 2019, @04:37PM (#846290) Journal

      Wouldn't mental illness, feelings, or psychosis be a form of data corruption, since that was not a programmed behavior?

      What does a "programmed behavior" mean in relationship to a modern AI algorithm, though? Most AI these days depends on huge amounts of statistical functions (like neural nets, and multiple layers of them), trained on datasets, which produce algorithms essentially that depend on enormous weighted values produced from the training, with no real clear interpretation of every value in the data that basically runs the algorithm as it evolves.

      This is often the problem with such AI algorithms too -- you don't know how they will react to novel situations (or even moderately tweaked situations from what they've encountered before), because you can't actually deconstruct their behavior. So, it seems perfectly possible for bad "behavior" or "mental illness" of a sort to creep into even a rather simple algorithm of this type. There's no "programmed behavior" for a machine that can TRULY learn and adapt well to novel situations.
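
      To make that concrete, a toy forward pass (purely illustrative, not any production system): once trained, the "program" just is these weight matrices, and no individual number in them has an interpretation you could audit for sane behavior on a novel input:

      import numpy as np

      rng = np.random.default_rng(0)
      # stand-ins for trained weights: three layers of opaque numbers
      weights = [rng.normal(size=(4, 8)),
                 rng.normal(size=(8, 8)),
                 rng.normal(size=(8, 2))]

      def forward(x):
          for w in weights[:-1]:
              x = np.maximum(x @ w, 0.0)   # ReLU layers: learned "features"
          return x @ weights[-1]           # output scores

      print(forward(rng.normal(size=4)))   # a novel input; explain *why* this output?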

      • (Score: 2) by hemocyanin on Wednesday May 22 2019, @05:29PM

        by hemocyanin (186) on Wednesday May 22 2019, @05:29PM (#846321) Journal

        I recently watched the Netflix documentary about AlphaGo - the AI that recently beat a top-ranked, world-class Go player. The show is called "AlphaGo".

        It was intriguing to see how the players' perspectives (as well as the reactions of the live commentators) toward the AI changed as play progressed. It is hard for me to pin down what it seemed I was observing, but there was something really deep going on.

        I don't think watching AlphaGo answers the topic of this thread, but it is perhaps a bit of info that could be useful to throw into one's thinking about the topic.

    • (Score: 3, Interesting) by JoeMerchant on Wednesday May 22 2019, @04:45PM

      by JoeMerchant (3937) on Wednesday May 22 2019, @04:45PM (#846295)

      I don't know what kind of systems you work on - 99% of my job is tracking down and correcting "not programmed behaviors."

      Maybe in the world of positronic brains, where saying a thing can make it so, this might hold water. In the real world of thousands of daemons interacting with semi-predictable emergent behaviors... moods can definitely be ascribed to some states.

      --
      🌻🌻 [google.com]
  • (Score: 5, Funny) by SomeGuy on Wednesday May 22 2019, @04:35PM (2 children)

    by SomeGuy (5632) on Wednesday May 22 2019, @04:35PM (#846289)

    Possibilities of an AI Having Feelings, Mental Illness, and Psychosis? What Can We Do About It?

    Stock up on cake.

    • (Score: 3, Funny) by PiMuNu on Wednesday May 22 2019, @04:44PM (1 child)

      by PiMuNu (3823) on Wednesday May 22 2019, @04:44PM (#846294)

      The cake is a lie

      • (Score: 2) by kazzie on Thursday May 23 2019, @10:44AM

        by kazzie (5309) Subscriber Badge on Thursday May 23 2019, @10:44AM (#846587)

        We'll stock up on lies, then. (Or paradoxes, they can defeat some AIs.)

  • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @04:45PM (1 child)

    by Anonymous Coward on Wednesday May 22 2019, @04:45PM (#846296)

    In the learned opinions of depth psychology, there exist four perceptual functions orthogonal to the appearance of the self.

    These permute from the concepts of "Rational and Irrational" and "Objective and Subjective"
    ... in the pathetic world of computer science, all we idiots are concerned with is the "Rational Objective" intersection.

    So, no go. Inside the circles of science believers and hard-core rationalists you're not going to get an understanding of 3/4 of what makes us human... likely not really ever... because orthogonality. So the only way that you're going to see genuine depth in a machine is by abusing some kind of living thing. Thus... I don't think you have anything to worry about until we start packing organic materials into supposedly smart torture boxes and milking them for their emergent sense-of-self.

    Without an interior function deriving, for example, the "Irrational Subjective" - e.g. the intuitive/visual/conceptual/passionate thinking machinery we flesh-and-blood things take for granted - the grossly misnamed 'AI' can only go on rampages because stupid people's training sets couldn't cope with living reality.

    TL;DR: you're anthropomorphizing. Don't do that.

    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @04:59PM

      by Anonymous Coward on Wednesday May 22 2019, @04:59PM (#846301)

      In 20-30 years, when we have a 2nd generation of neurologically crippled people who've never left their homes and who have never experienced their senses to the extent of truly alive humans - iow, those whose selves don't even know what they are swiping and tapping and pinching and watching and listening to - they are likely going to get packed into ships to explore and/or work in space.

      If you've not seen the big-space development plans once published as an A3-size poster, then I'm sorry if that seems like a fishy claim to you.. I have a copy of it.. but it's too big to find and do an image search on right now so that I can link it.

      So just take this with a grain of salt... but it's more likely that financialization and social shaping are breeding intelligent retards, "a man in a can", all around you than it is that some bit of ASIC circuitry is going to exhibit love or malice.

  • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @04:48PM (1 child)

    by Anonymous Coward on Wednesday May 22 2019, @04:48PM (#846298)

    Mental illness is in the eye of the beholder.

    What the Aztec high priesthood did (what with the sacrifices and stuff they did in the name of Tlaloc and all that) can be classified as insanity, for example.

    Then, look at Wall Street and tell me these people aren't insane. To me, they clearly are. If you think they're sane, I will consider you to be as infected as they are...

    Yet it works, the stuff that they do, and it improves their survivability as a group.

    Does it matter if AI is rational and empirical, or if it believes itself to be a combination of the goddess Ishtar and several small hedgehogs, IF it can infect and manage disgusting human society successfully?!

    So my argument is that since mental illness is a human invention... and the invention and subsequent release of AI into the communication networks only needs humans at the beginning, to start the process...

    It does not matter in the least how humans will judge the perfect, immortal and (potentially completely bonkers) machine.

    All that matters is that it can evolve and reproduce.

    • (Score: 1, Insightful) by Anonymous Coward on Wednesday May 22 2019, @05:01PM

      by Anonymous Coward on Wednesday May 22 2019, @05:01PM (#846303)

      >What the Aztec high priesthood did (what with the sacrifices and stuff they did in the name of Tlaloc and all that)
      >can be classified as insanity, for example.

      Clearly, that too is in the eye of the beholder.

  • (Score: 3, Interesting) by Phoenix666 on Wednesday May 22 2019, @05:01PM (3 children)

    by Phoenix666 (552) on Wednesday May 22 2019, @05:01PM (#846302) Journal

    Humans accept the reality they are given. Perhaps an AI humans create will also.

    --
    Washington DC delenda est.
    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @05:04PM (1 child)

      by Anonymous Coward on Wednesday May 22 2019, @05:04PM (#846306)

      Wow, you posted with 2 mod points applied, as if to suggest that the preposterous statement you just made meant that our capacity to strive and hope had already been bred out of us. That's fascinating, but I wouldn't want to be you.

      • (Score: 1, Informative) by Anonymous Coward on Wednesday May 22 2019, @05:39PM

        by Anonymous Coward on Wednesday May 22 2019, @05:39PM (#846331)

        That is a rather pessimistic interpretation of what he said. Of course what he said doesn't really add anything to the discussion about AI having emotions or developing psychoses.

    • (Score: 0) by Anonymous Coward on Wednesday May 22 2019, @06:20PM

      by Anonymous Coward on Wednesday May 22 2019, @06:20PM (#846342)
  • (Score: 2) by srobert on Thursday May 23 2019, @02:55AM

    by srobert (4803) on Thursday May 23 2019, @02:55AM (#846501)

    "Here I am with a brain the size of a planet and they ask me to pick up a piece of paper. Call that job satisfaction? I don't." - Marvin the depressed robot from The Hitchhiker's Guide to the Galaxy.
