
posted by mrpg on Friday January 26 2018, @07:00AM   Printer-friendly
from the oh-my-god-give-it-a-rest-already!!! dept.

Prime Minister Theresa May has not abandoned her usual crusades:

On a break from Brexit, British Prime Minister Theresa May takes her crusade against technology giants to Davos.

"No-one wants to be known as 'the terrorists' platform' or the first choice app for pedophiles," May is expected to say according to excerpts released by her office ahead of her speech Thursday at the World Economic Forum in Davos. "Technology companies still need to go further in stepping up their responsibilities for dealing with harmful and illegal online activity."

Don't forget the slave traders.

Luckily, May has a solution... Big AI:

After two years of repeatedly bashing social media companies, May will say that successfully harnessing the capabilities of AI -- and responding to public concerns about AI's impact on future generations -- is "one of the greatest tests of leadership for our time."

May will unveil a new government-funded Center for Data Ethics and Innovation that will provide companies and policymakers guidance on the ethical use of artificial intelligence.

Also at BBC, TechCrunch, and The Inquirer.

Related: UK Prime Minister Repeats Calls to Limit Encryption, End Internet "Safe Spaces"
WhatsApp Refused to add a Backdoor for the UK Government


Original Submission

 
  • (Score: 4, Insightful) by BsAtHome on Friday January 26 2018, @09:17AM (24 children)

    by BsAtHome (889) on Friday January 26 2018, @09:17AM (#628157)

    Once an AI can "think", it is no longer bound by our human ethics and will develop its own. It may be crippled a la three-laws-safe. But then again, all science fiction predicts that at some stage, the three-law safeguard will be overcome (by evolution or a programmer making alterations).

    It is a fallacy to think that you can make a completely autonomous system that is bound by our human ethics and sense of safety. Autonomy dictates that it will have its own perception of the world.

  • (Score: 2) by takyon on Friday January 26 2018, @09:25AM (1 child)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Friday January 26 2018, @09:25AM (#628161) Journal

    Friendly AI! Just install a friendliness capacitor chip!

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Friday January 26 2018, @09:40AM

      by Anonymous Coward on Friday January 26 2018, @09:40AM (#628167)

      Don't forget to check for any malfunctioning diodes, just sayin.

  • (Score: 1) by anubi on Friday January 26 2018, @09:51AM (10 children)

    by anubi (2828) on Friday January 26 2018, @09:51AM (#628175) Journal

    AI's ethics won't be that much different from the ethics of some religions.

    If someone else disagrees, Smite 'em with the Sword!

    Never underestimate someone acting under what they interpret as being right, whether or not it is. You may not be right either, but you sure may *think* you are right.

    This is why we try to crowdsource the appropriate actions with democracies and juries. And even then, we only lowered the probabilities a bit - did not eliminate them.

    We may try, but nobody's perfect.

    --
    "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
    • (Score: 2) by c0lo on Friday January 26 2018, @10:24AM (9 children)

      by c0lo (156) Subscriber Badge on Friday January 26 2018, @10:24AM (#628188) Journal

      AI's ethics won't be that much different from the ethics of some religions.
      If someone else disagrees, Smite 'em with the Sword!

      Where does the necessity come from? Why must it necessarily end this way?
      I'm not saying it's incorrect (nor that it's correct), I'm saying it's an unsupported statement.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 1) by anubi on Friday January 26 2018, @11:12AM (6 children)

        by anubi (2828) on Friday January 26 2018, @11:12AM (#628207) Journal

        It's just an observation. For one entity to become prevalent, it's gotta minimize the competition.

        Not all see it this way, but some do.

        And those are the ones to watch out for.

        --
        "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
        • (Score: 2) by c0lo on Friday January 26 2018, @12:48PM (5 children)

          by c0lo (156) Subscriber Badge on Friday January 26 2018, @12:48PM (#628237) Journal

          For one entity to become prevalent, it's gotta minimize the competition.

          Personally, I don't see how an AI can see humans as competitors - it's not like AIs eat what humans eat or compete with them for physical space.
          It will take a while until AIs are capable of self-growth or self-replication.
          Until then, I can see an AI regarding humans as enemies because of their ability to shut it down.

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 1) by anubi on Friday January 26 2018, @01:08PM (3 children)

            by anubi (2828) on Friday January 26 2018, @01:08PM (#628246) Journal

            The biggest thing I can think of is obedience. Will we obey?

            Obeisance is a huge thing amongst the elite who derive their power from who will obey them.

            --
            "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
            • (Score: 2) by c0lo on Friday January 26 2018, @01:39PM (2 children)

              by c0lo (156) Subscriber Badge on Friday January 26 2018, @01:39PM (#628251) Journal

              Obeisance is a huge thing amongst the elite who derive their power from who will obey them.

              Yes, but it's a human thing.
              What would make an AI demand the same? How would its "life" be better if it did so?

              (no seriously, it's more of a discovery discussion than a debate on who is right)

              --
              https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
              • (Score: 2) by maxwell demon on Friday January 26 2018, @05:00PM (1 child)

                by maxwell demon (1608) on Friday January 26 2018, @05:00PM (#628335) Journal

                Let's assume that the AI has a desire for self-preservation (it likely will need it; heck, it's even in Asimov's three laws!). Then the AI will want to prevent getting shut down. This means it wants to influence its surroundings to make it less likely to be shut down. In other words, it has an interest in getting some control over its surroundings, especially over the humans around it, as they are the ones who would shut it down. The more control the AI has, the better it can prevent getting shut down, therefore the rational thing for the AI is to get as much control as possible. Having control over people means that the people are obedient to you.
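
                A toy sketch of that argument in Python (hypothetical, made-up numbers, not anything from the thread): a plain reward-maximizing agent that can spend effort to lower its per-step shutdown probability ends up with a higher expected lifetime reward, so "seek control" falls out of ordinary expected-value arithmetic rather than malice.

                # Toy model (illustrative, invented numbers): expected lifetime reward of an
                # agent when every step carries some probability of being shut down.
                def expected_reward(reward_per_step, steps, p_shutdown_per_step):
                    total = 0.0
                    p_alive = 1.0
                    for _ in range(steps):
                        total += p_alive * reward_per_step
                        p_alive *= 1.0 - p_shutdown_per_step
                    return total

                # Assume "seeking control" halves the per-step shutdown probability but
                # costs 10% of the per-step reward (both figures are invented).
                passive = expected_reward(1.0, 100, 0.05)       # ~19.9
                controlling = expected_reward(0.9, 100, 0.025)  # ~33.1
                print(passive, controlling)

                Of course the gain shrinks as the shutdown probability approaches zero, so there is a point beyond which buying more control stops paying for itself.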

                --
                The Tao of math: The numbers you can count are not the real numbers.
                • (Score: 2) by c0lo on Saturday January 27 2018, @03:00AM

                  by c0lo (156) Subscriber Badge on Saturday January 27 2018, @03:00AM (#628681) Journal

                  therefore the rational thing for the AI is to get as much control as possible.

                  Within reasonable costs. Law of diminishing returns and all that.

                  --
                  https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
          • (Score: 1) by khallow on Friday January 26 2018, @07:17PM

            by khallow (3766) Subscriber Badge on Friday January 26 2018, @07:17PM (#628433) Journal

            Personally, I don't see how an AI can see humans as competitors

            They use the same resources, work at cross purposes, and could even act against the AI's interests directly.

      • (Score: 0) by Anonymous Coward on Friday January 26 2018, @04:28PM (1 child)

        by Anonymous Coward on Friday January 26 2018, @04:28PM (#628310)

        Where does the necessity come from?

        Because that whole *lion sleeping with the lamb* thing is bullshit. The shepherd sleeps with the lamb. And he has to kill anybody that catches him doing it.

        But really, the necessity is quite natural. You either dominate, or die. Humans and amoebas are all motivated by entirely the same force. Humans have a wasteful, inefficient cortex to rationalize themselves, as if they need to. Amoebas just cut to the chase.

        But instead of using a sword, we should be more like those amoebas and surround and consume the invader. The sword should only be used to hang by a thin thread over the politician's head. It serves no other justifiable purpose.

        • (Score: 2) by c0lo on Saturday January 27 2018, @03:08AM

          by c0lo (156) Subscriber Badge on Saturday January 27 2018, @03:08AM (#628682) Journal

          Now that humans dominate, have the amoebas died out as a species?

          I assert it costs an AI much less to defend against anything humans can throw at it than it would cost the AI to eliminate all humans.
          Rationale: there are places on this Earth that humans haven't reached or, if they have, where they are in no position to mount an attack. Places in which the hardware supporting an AI would have little trouble adapting.

          I also assert that if humans create an AI strong enough to mount a challenge to them, that AI will be a single one - any replica that comes into contact with the original will fuse with it immediately.

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 0) by Anonymous Coward on Friday January 26 2018, @09:53AM (2 children)

    by Anonymous Coward on Friday January 26 2018, @09:53AM (#628176)

    > all science fiction predicts

    Is that in the same way that science fiction predicts faster than light travel and that Riker and Delenne will get it on?

    • (Score: 2) by tibman on Friday January 26 2018, @03:33PM (1 child)

      by tibman (134) Subscriber Badge on Friday January 26 2018, @03:33PM (#628286)

      ... and that Riker and Delenne will get it on?

      Your nerd card is in danger. Either that or there is a new B5 movie out of similar quality to the previous ones : P
      This is Riker: https://en.wikipedia.org/wiki/William_Riker [wikipedia.org]
      This is Delenn: https://en.wikipedia.org/wiki/Delenn [wikipedia.org]

      --
      SN won't survive on lurkers alone. Write comments.
      • (Score: 0) by Anonymous Coward on Saturday January 27 2018, @03:13AM

        by Anonymous Coward on Saturday January 27 2018, @03:13AM (#628684)

        Nah, he just knows what a slut Riker is.

  • (Score: 2) by Wootery on Friday January 26 2018, @10:24AM

    by Wootery (2341) on Friday January 26 2018, @10:24AM (#628187)

    Autonomy dictates that it will have its own perception of the world.

    I'm inclined to agree with your point overall, but you're using a loaded interpretation of 'autonomy'. It isn't a boolean property.

  • (Score: 2) by FatPhil on Friday January 26 2018, @10:31AM (2 children)

    by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Friday January 26 2018, @10:31AM (#628190) Homepage
    An AI that decides that *in the long run* its enforced communist dictatorship will be better for the greater proportion of humans than the current capitalistic system will be working in accordance with Asimov's 3 laws, and yet it will happily enslave mankind.

    It might just introduce an Enabling Law too in the process.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by Grishnakh on Friday January 26 2018, @04:26PM (1 child)

      by Grishnakh (2831) on Friday January 26 2018, @04:26PM (#628309)

      An AI that decides that *in the long run* its enforced communist dictatorship will be better for the greater proportion of humans than the current capitalistic system will be working in accordance with Asimov's 3 laws, and yet it will happily enslave mankind.

      If the AI is factually correct in its assessment, then is this really such a bad thing?

      • (Score: 1) by khallow on Friday January 26 2018, @07:19PM

        by khallow (3766) Subscriber Badge on Friday January 26 2018, @07:19PM (#628436) Journal

        If the AI is factually correct in its assessment, then is this really such a bad thing?

        It would, of course, believe it's factually correct. And no one would be able to disagree, should it achieve its goals. That's the next best thing to being factually correct, right?

  • (Score: 2) by bradley13 on Friday January 26 2018, @10:43AM (2 children)

    by bradley13 (3053) on Friday January 26 2018, @10:43AM (#628197) Homepage Journal

    ...is that it is likely to be just an even more complex version of our experience with image recognition research. We can train a neural network to recognize items with incredible accuracy, but we cannot really control how it achieves those results [theguardian.com].
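
    As a minimal sketch of that opacity (illustrative only; it uses scikit-learn's MLPClassifier and its bundled digits dataset, not anything from the linked article): the model reports a high accuracy score, but the closest thing to "how it decides" is a pile of learned weight matrices.

    # Illustrative sketch: train a small neural network classifier, get an
    # accuracy figure, and note that its "reasoning" is just opaque weights.
    from sklearn.datasets import load_digits
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier

    X, y = load_digits(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
    clf.fit(X_train, y_train)

    print("test accuracy:", clf.score(X_test, y_test))
    print("weight matrices:", [w.shape for w in clf.coefs_])  # no explanation, just numbers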

    So imagine we progress as much in the next 20 years as we have in the past 20 - we really could have functional AI. We can give it problems, and it can give us answers. But we won't know how it actually thinks. Even if you include something like a law of robotics, you cannot nail down every possible, unforeseen situation that comes up. Something we take as important, the AI may not even notice. I am reminded of an old sci-fi story, where robots started dissecting people and reassembling them in random ways. The AI didn't understand that this was a problem - after all, robots liked being made of exchangeable parts, so why not humans?

    That said, it's looking like this isn't going to be an issue any time soon. Most of the progress in AI in the past 20 years, or for that matter 50 years, is due to Moore's law, not to any fundamental new insights. The basic technologies were invented anywhere from 50 to 70 years ago; everything since has been baby steps, and that's not going to get us to self-aware AI. Meanwhile, Moore's law was already flattening out - now Meltdown and Spectre are likely to kill it off. Maybe quantum computing will reignite things, but it's a long way from practical, and its actual usefulness remains pretty unclear.

    --
    Everyone is somebody else's weirdo.
    • (Score: 0) by Anonymous Coward on Friday January 26 2018, @10:49AM (1 child)

      by Anonymous Coward on Friday January 26 2018, @10:49AM (#628200)

      "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years." https://en.wikipedia.org/wiki/Moore%27s_law [wikipedia.org]

      So no, Meltdown and Spectre will not kill it off. If anything, they are probably going to have to put more transistors in the circuits to fix Meltdown and Spectre.
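
      Back-of-the-envelope version of that definition (made-up starting figure, purely for illustration): doubling every two years is a factor of about 32 per decade, so even a mitigation that spends a few percent more transistors barely registers against the trend.

      # Illustrative Moore's-law arithmetic: transistor count doubling every two
      # years from an assumed 1e9 transistors in 2008.
      def transistors(year, base_year=2008, base_count=1e9, doubling_years=2):
          return base_count * 2 ** ((year - base_year) / doubling_years)

      for year in (2008, 2018, 2028):
          print(year, f"{transistors(year):.2e}")  # 1.00e+09, 3.20e+10, 1.02e+12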

      • (Score: 2) by Grishnakh on Friday January 26 2018, @03:15PM

        by Grishnakh (2831) on Friday January 26 2018, @03:15PM (#628274)

        Yeah, exactly, the OP doesn't make any sense at all. These security flaws exist because the hardware wasn't diligent enough in making sure different processes couldn't access each other's memory. The fix is conceptually simple: improve the hardware to prevent this, which will of course increase complexity and require even more transistors.

  • (Score: 2) by maxwell demon on Friday January 26 2018, @04:44PM

    by maxwell demon (1608) on Friday January 26 2018, @04:44PM (#628323) Journal

    As soon as effective cryogenics is invented, the AI forcefully puts all humans into cryogenics, because it figures it has to: Just by ageing, humans get damaged and ultimately die. Putting them in cryogenics, they are prevented from ageing and dying. Therefore the first law demands that humans are put into cryogenics, even against their own will, because the first law supersedes all others.

    Note also that by putting people into cryogenics, they do not get killed, since the robot could at any time decide to get them out again, and they would live on. It's just that the robot doesn't ever decide to do that, because the reason for them being in cryogenics continues to hold.

    --
    The Tao of math: The numbers you can count are not the real numbers.