
posted by mrpg on Friday January 26 2018, @07:00AM   Printer-friendly
from the oh-my-god-give-it-a-rest-already!!! dept.

Prime Minister Theresa May has not abandoned her usual crusades:

On a break from Brexit, British Prime Minister Theresa May takes her crusade against technology giants to Davos.

"No-one wants to be known as 'the terrorists' platform' or the first choice app for pedophiles," May is expected to say according to excerpts released by her office ahead of her speech Thursday at the World Economic Forum in Davos. "Technology companies still need to go further in stepping up their responsibilities for dealing with harmful and illegal online activity."

Don't forget the slave traders.

Luckily, May has a solution... Big AI:

After two years of repeatedly bashing social media companies, May will say that successfully harnessing the capabilities of AI -- and responding to public concerns about AI's impact on future generations -- is "one of the greatest tests of leadership for our time."

May will unveil a new government-funded Center for Data Ethics and Innovation that will provide companies and policymakers guidance on the ethical use of artificial intelligence.

Also at BBC, TechCrunch, and The Inquirer.

Related: UK Prime Minister Repeats Calls to Limit Encryption, End Internet "Safe Spaces"
WhatsApp Refused to add a Backdoor for the UK Government


Original Submission

Related Stories

UK Prime Minister Repeats Calls to Limit Encryption, End Internet "Safe Spaces" 88 comments

Some things in life are very predictable... the Earth continues to orbit around the Sun and Theresa May is trying to crack down on the Internet and ban/break encryption:

In the wake of Saturday's terrorist attack in London, the Prime Minister Theresa May has again called for new laws to regulate the internet, demanding that internet companies do more to stamp out spaces where terrorists can communicate freely. "We cannot allow this ideology the safe space it needs to breed," she said. "Yet that is precisely what the internet and the big companies that provide internet-based services provide."

Her comments echo those made in March by the home secretary, Amber Rudd. Speaking after the previous terrorist attack in London, Rudd said that end-to-end encryption in apps like WhatsApp is "completely unacceptable" and that there should be "no hiding place for terrorists".

[...] "Theresa May's response is predictable but disappointing," says Paul Bernal at the University of East Anglia, UK. "If you stop 'safe places' for terrorists, you stop safe places for everyone, and we rely on those safe places for a great deal of our lives."

Last month New Scientist called for a greater understanding of technology among politicians. Until that happens, having a reasonable conversation about how best to tackle extremism online will remain out of reach.

End-to-end encryption is completely unacceptable? Now that's what I call an endorsement.


WhatsApp Refused to add a Backdoor for the UK Government 9 comments

Submitted via IRC for SoyCow8963

The UK government has made no secret of its dislike of encrypted messaging tools, and it has made frequent reference to the problems WhatsApp causes it with regard to investigations into terrorism. Calls have been made by the government to force companies to allow access to encrypted content when asked.

In the wake of Theresa May's "more needs to be done about extremist content" speech, it has emerged that WhatsApp refused to add a backdoor that would allow the government and law enforcement agencies to access private conversations.

Sky News reports anonymous sources as saying that during the summer the government told WhatsApp to devise a way that would enable it to access encrypted messages. While WhatsApp already complies with government requests to provide meta data such as the name of an account holder, associated email address, and IP addresses used, it does not -- and, indeed, due to lack of access itself, cannot -- provide access to, or the content of encrypted messages.

Source: https://betanews.com/2017/09/21/whatsapp-backdoor-refusal/


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 4, Informative) by bradley13 on Friday January 26 2018, @07:32AM (20 children)

    by bradley13 (3053) on Friday January 26 2018, @07:32AM (#628140) Homepage Journal

    Gizmodo has a nice article that discusses a letter sent by a Senator to the head of the FBI, demanding a list of the cryptography experts he has talked to, who claim that backdooring is possible without destroying security. He demands this list by 23 February 2018. Gizmodo ends pungently with "We're guessing it's a short list".

    Someone with a clue, and access, needs to publicly pose this question to any leader who comes out with this bullshit. Look at the trouble we have achieving security without deliberately crippling it! They are being advised by some collection of (a) other politicians, all doing a circle jerk, (b) law enforcement experts, or (c) sadly possible, IT people who care more about money than anything else.

    If it's the latter, we want to know who they are. Government policy should not be made in a vacuum, and there should be no reason for anonymity on an issue this important.

    --
    Everyone is somebody else's weirdo.
    • (Score: 2) by bradley13 on Friday January 26 2018, @07:34AM (3 children)

      by bradley13 (3053) on Friday January 26 2018, @07:34AM (#628141) Homepage Journal
      • (Score: 2, Informative) by pTamok on Friday January 26 2018, @11:30AM (2 children)

        by pTamok (3042) on Friday January 26 2018, @11:30AM (#628216)

        And here's a link to the source:

        https://www.wyden.senate.gov/download/?id=B31DD6FF-98E8-490C-B491-7DE6C7559C71&download=1 [senate.gov]

        Note that I can make no guarantees that the text of the electronic copy you download will be the same as mine. Web.archive.org does not have access, so in the absence of digital signatures, you have to trust that the copy you get is the same as the one sent by Sen. Wyden to Christopher A. Wray, Director, FBI.

        I think the key point is asking specifically to confirm that experts have been consulted and advised that it is possible to "design government access features into [...] products without weakening cybersecurity". Maybe such questions should become a mantra for reporters and journalists who hear requests for 'government back-doors'.

        Perhaps one or several of the government agencies entrusted with knowing about these things have found a novel and subtle approach that does do what people currently believe to be impossible. If so, it would be nice if they told us about it, rather than leaving non-secret research to find it independently (like the S-box settings of DES [archive.org]).

        There is a way in which the intelligence agencies have got their back-door: by having knowledge of inadvertent vulnerabilities before they become well-known; and, possibly, by adding vulnerabilities (Dual EC DRBG [wikipedia.org]).

        The U.S. Military take a great deal of trouble to assure the supply chain of certain of their electronics, having secure fabs etc. If you subvert the supply chain of non-military electronic components, you can ensure that pretty much undetectable back-doors are included from the hardware upwards [phys.org].

        Some expert commentators speculate that AES was chosen as an encryption method because of its susceptibility to side-channel attacks [wikipedia.org] when not carefully implemented in hardware. In other words, cryptographically it is fine, but it is difficult to implement properly, so that in practical use, unless someone has worked very hard on the implementation, there will be ways of extracting keys by observing the AES hardware in action. Similarly, subverting hardware random-number generators built into processors is difficult to prove, but can give you access to the required data; some examples [wikipedia.org].
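The "difficult to implement properly" point is easy to demonstrate in miniature. As a minimal sketch (the secret and guesses are made up, and this shows a timing leak in a string comparison rather than an attack on AES itself), an early-exit comparison reveals, through its running time, how many leading bytes of a guess are correct; Python's `hmac.compare_digest` exists precisely to close that channel:

```python
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    """Early-exit comparison: running time depends on the first mismatch."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            # Returning here leaks, via timing, how many leading
            # bytes of the attacker's guess were correct.
            return False
    return True

secret = b"correct horse battery"

print(naive_equal(secret, b"correct guess attempt"))   # False, but fast/slow
# hmac.compare_digest takes time independent of where the inputs differ.
print(hmac.compare_digest(secret, secret))             # True
```

An attacker who can time many comparisons can recover the secret byte by byte from the naive version; the constant-time version gives the same boolean answer while denying that side channel.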

        Poor implementation of cryptography by non-expert programmers and users also subverts keys in useful ways, such as the duplication/re-use of RSA factors used in implementations across the Internet [iacr.org].

        The above vulnerabilities mean that a material portion of data that its owners thought was protected by strong encryption actually wasn't, and could easily be decoded by third parties. It is reasonable to assume that government agencies will continue to take advantage of flaws that they find that are not publicly known, and may indeed subtly encourage such flaws to appear.
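The RSA factor re-use mentioned above is worth spelling out, because the attack is almost embarrassingly cheap: if two public moduli were generated sharing a prime (for example, from a poorly seeded random-number generator), a plain gcd of the two public keys factors both. A toy sketch with made-up tiny primes (real moduli are 2048+ bits, but the arithmetic is identical):

```python
from math import gcd

# Toy RSA moduli that accidentally share the prime factor p.
p, q1, q2 = 61, 53, 59
n1, n2 = p * q1, p * q2

# Anyone can compute this from the public keys alone.
shared = gcd(n1, n2)
print(shared)                      # 61 -> the shared prime
print(n1 // shared, n2 // shared)  # 53 59 -> both moduli fully factored
```

No factoring breakthrough is needed; scanning all pairs of harvested Internet-wide public keys for common factors is exactly what the cited survey did.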

        • (Score: 1) by pTamok on Friday January 26 2018, @11:46AM

          by pTamok (3042) on Friday January 26 2018, @11:46AM (#628220)

          For those interested in RSA factor re-use, there's a neat web page that goes into it here: Understanding Common Factor Attacks: An RSA-Cracking Puzzle [loyalty.org]

        • (Score: 0) by Anonymous Coward on Friday January 26 2018, @10:46PM

          by Anonymous Coward on Friday January 26 2018, @10:46PM (#628601)

          Mod this guy +40 informative, ASAP!

          *AC high five*

    • (Score: 0) by Anonymous Coward on Friday January 26 2018, @08:57AM

      by Anonymous Coward on Friday January 26 2018, @08:57AM (#628153)

      If it's the latter, we want to know who they are.

      Probably an open borders type that avoids allegations of hypocrisy by sleeping with their front door open.

    • (Score: 3, Funny) by chromas on Friday January 26 2018, @09:23AM

      by chromas (34) Subscriber Badge on Friday January 26 2018, @09:23AM (#628158) Journal

      We're guessing it's a short list

      The list is likely nonexistent. You don't want him to dereference a null pointer, do you? Consequences will never be the same!

    • (Score: 1) by anubi on Friday January 26 2018, @09:44AM (8 children)

      by anubi (2828) on Friday January 26 2018, @09:44AM (#628171) Journal

      It's kind of wishful thinking to expect something to have two opposing properties simultaneously...

      Any of these lawmakers take a course in logic?

      What is the solution set to the AND function of Secure AND NotSecure? Null Set?

      Either it is secure or it's not. Either the bolt stays in place, or it falls apart.

      There is no such thing as something that is "secure", but magically becomes insecure just because some badge-hat orders it to fall apart.
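The empty solution set can even be checked mechanically. A throwaway sketch: enumerate both possible truth values of "secure" and keep those satisfying Secure AND NOT Secure:

```python
# Brute-force the only two possible truth values of "secure".
solutions = [s for s in (False, True) if s and not s]
print(solutions)  # [] -- the null set: nothing is both secure and not secure
```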

      However, the following logic does have a solution:
      ( Ability to tax other people ) AND ( Willingness to pay for what you want to hear ).

      --
      "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
      • (Score: 2) by bradley13 on Friday January 26 2018, @10:53AM (7 children)

        by bradley13 (3053) on Friday January 26 2018, @10:53AM (#628202) Homepage Journal

        I wonder if it would help if IT people had a handy analogy. Here's one that might make sense to anyone with an understanding of things mechanical:

        "Build a submarine. Make it able to go really deep. But you must build in one inwards opening hatch in the hull."

        --
        Everyone is somebody else's weirdo.
        • (Score: 1) by anubi on Friday January 26 2018, @11:10AM

          by anubi (2828) on Friday January 26 2018, @11:10AM (#628205) Journal

          Sounds just like some people I have worked for... they were good in business, not so good in engineering.

          --
          "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
        • (Score: 2) by Runaway1956 on Friday January 26 2018, @11:31AM (4 children)

          by Runaway1956 (2926) Subscriber Badge on Friday January 26 2018, @11:31AM (#628217) Journal

          An actual, current analogy isn't hard to come up with. Locks on doors and other things is pretty accurate. When most people run down to the hardware store, and buy a shiny new padlock for twenty bucks, they THINK that they have a secure device. No one can open that lock, unless they are given the key, right? WRONG! In point of fact, there are thousands of locks in circulation around the nation that can be opened by the same key. But, since those thousands are shipped to different stores in different cities, in different states, it's unlikely that any two people will ever attempt to open each other's locks.

          Then, there are master keys. Given a master key, you may be able to open twenty, or a hundred, or a thousand different locks of similar constructions. They need not even be the same brand of lock - I have succeeded in opening a Master Lock with a Brink's key.

          Beyond master keys, you have picks, which are capable of opening almost every keyed lock in existence. (There are a couple European brands which are extremely hard to manipulate - but those cost a helluva lot more than twenty bucks!)

          If a pick doesn't work for you, you can always call in a master locksmith. He has knowledge and tools with which to get into almost any lock in the world.

          Now - let's consider what lawmakers want. They are asking that all keyed locks open with a tool which only law enforcement may possess. Basically, law enforcement will have a master key which will open any keyed lock, anywhere - whether it be a padlock, a door lock, a chest, cabinet, or box lock. Every lock produced anywhere in the world must open with this master key, which only law enforcement will have.

          And, naturally, as soon as the bill is introduced, six companies in the US and 35 more companies worldwide start producing these magical master keys. Within months after the bill becomes law, a hundred more companies start producing the locks. Soon, everyone in the world has a key to open any lock in the world.

          This all sounds very secure to me!! NOT!!!!

          Way back when locks and keys were first invented, Royalty should have just outlawed their use.

          • (Score: 2) by Grishnakh on Friday January 26 2018, @03:10PM (2 children)

            by Grishnakh (2831) on Friday January 26 2018, @03:10PM (#628272)

            When most people run down to the hardware store, and buy a shiny new padlock for twenty bucks, they THINK that they have a secure device. No one can open that lock, unless they are given the key, right? WRONG! In point of fact, there are thousands of locks in circulation around the nation that can be opened by the same key. ... it's unlikely that any two people will ever attempt to open each other's locks.

            It's exactly the same with car keys. There's only so many combinations you can have with a mechanical key, and I've heard of plenty of people who actually unlocked the wrong car's door, thinking it was their car, and then wondering why there was someone else's stuff inside. This doesn't happen so much now since most cars have keyless entry, but in the old days it wasn't *that* uncommon. Luckily, in modern cars, you can't drive away in the wrong car, you can only open the door.
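The scale of that collision problem follows from a birthday-bound estimate. The figures below are illustrative assumptions, not measured data (mechanical car locks historically offered on the order of a few thousand distinct key cuts):

```python
from math import comb

key_cuts = 3000      # assumed number of distinct mechanical key codes
cars = 200_000       # assumed cars of one model on the road

# Expected number of pairs of cars sharing the same key cut:
# (number of pairs) / (number of distinct cuts)
expected_pairs = comb(cars, 2) / key_cuts
print(f"{expected_pairs:,.0f} car pairs share a key cut on average")
```

With millions of pairs sharing a cut, "someone opened the wrong car" stops being surprising; the only protection is that matching cars rarely park next to each other.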

            Now - let's consider what lawmakers want. They are asking that all keyed locks open with a tool which only law enforcement may possess.

            We actually have this already: luggage on commercial airlines is supposed to have a "TSA certified" lock, which "only" TSA has the master key for, or else they can bust your lock open to inspect the luggage. Of course, this key is pretty trivial to duplicate and there's photos and diagrams on the internet for it, so no one's luggage is safe any more. This is probably the very best analogy IMO, because it shows how silly this is. If a large government allows lots of low-level employees access to the keys, inevitably that's going to get out there, and now everyone's lock is unsafe. We've already seen this with the TSA locks.

            Way back when locks and keys were first invented, Royalty should have just outlawed their use.

            They had locks back in Roman times, predating the royalty of the Middle Ages. (While the Roman Empire had a hereditary system of choosing emperors for a while, until the Praetorian Guard just started selling the position, I wouldn't really associate the word "royalty" with the Romans, it really seems to connote the feudal system that came later.)

          • (Score: 0) by Anonymous Coward on Saturday January 27 2018, @04:00AM

            by Anonymous Coward on Saturday January 27 2018, @04:00AM (#628697)

            The UK gets a new tank for the army. Just in case the tanks are captured by the enemy, the UK demands they come with a secret weak point, so they can be disabled easily.

            The problem is that everyone can get such tanks even before the war, copy them over and over, and test and disassemble them until they figure out where the weakness is. Because they are crypto programs, not physical tanks, the "limited money" and "hard to import/export by law" defenses don't fly, even less so for civilian tools like the software used to connect to your bank's web server.

            Do UK leaders still want that tank model? If they say yes, we now know they are total morons, and deserve an army uprising to kick them out, as they will push the army into unwinnable wars, with faulty equipment, because they think politics can win over physics, maths, chemistry and all those "silly" sciences.

        • (Score: 2) by maxwell demon on Friday January 26 2018, @11:54AM

          by maxwell demon (1608) on Friday January 26 2018, @11:54AM (#628221) Journal

          Put a strong lock to your front door. But put the key under the doormat so that the police can enter if necessary.

          --
          The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 3, Informative) by Wootery on Friday January 26 2018, @10:14AM (4 children)

      by Wootery (2341) on Friday January 26 2018, @10:14AM (#628184)

      For completeness: that's Senator Ron Wyden, Democrat, Oregon.

      One of the very few politicians who actually listens to cryptographers.

      • (Score: 4, Interesting) by isostatic on Friday January 26 2018, @01:00PM (1 child)

        by isostatic (365) on Friday January 26 2018, @01:00PM (#628242) Journal

        "Senator Ron Wyden" would have been enough to identify him.

        I wonder if part of the partisan problem in the US, the 'football mentality', 'us vs them', is the insistence that every senator and congressman attach a "D", "R" or "I", and a state, to their name at every opportunity. In the UK we say "Bob Bobson MP", and only later might you find out they are a Labour MP, or a Tory, or a Green, if it's relevant. In the US it's "Bob Bobson (R-HI)".

        Rather than listen to the point, people at large see the tribe and accept or reject the point based on the tribe printed.

        • (Score: 3, Interesting) by Wootery on Friday January 26 2018, @04:40PM

          by Wootery (2341) on Friday January 26 2018, @04:40PM (#628317)

          I'm British. I figured his party affiliation was relevant. Why should I withhold that basic information about this politician?

          You're right about partisan politics, but I hardly think now is the time.

      • (Score: 1) by fustakrakich on Friday January 26 2018, @04:06PM (1 child)

        by fustakrakich (6150) on Friday January 26 2018, @04:06PM (#628303) Journal

        Yeah well, nobody listens to Ron Wyden, so he can speak up all he wants, like a lot of other "heroic" politicians. People with real influence over what becomes law keep their heads down and eyes forward. They will speak up (confess) after they retire and a book deal is signed. This is the standard procedure. It's all a lot of talk when we need real circumvention to render the issue moot.

        --
        La politica e i criminali sono la stessa cosa..
        • (Score: 0) by Anonymous Coward on Friday January 26 2018, @05:19PM

          by Anonymous Coward on Friday January 26 2018, @05:19PM (#628343)

          Not likely true of the Pauls. I know Ron wouldn't have done that, and I doubt Rand would either. Even though their politics differ some, he seems to be cut from the same cloth.

  • (Score: 4, Insightful) by BsAtHome on Friday January 26 2018, @09:17AM (24 children)

    by BsAtHome (889) on Friday January 26 2018, @09:17AM (#628157)

    Once an AI can "think", it is no longer bound by our human ethics and will develop its own. It may be crippled a la three-laws-safe. But then again, all science fiction predicts that at some stage, the three-law safeguard will be overcome (by evolution or a programmer making alterations).

    It is a fallacy to think that you can make a completely autonomous system that is bound by our human ethics and sense of safety. Autonomy dictates that it will have its own perception of the world.

    • (Score: 2) by takyon on Friday January 26 2018, @09:25AM (1 child)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday January 26 2018, @09:25AM (#628161) Journal

      Friendly AI! Just install a friendliness capacitor chip!

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Friday January 26 2018, @09:40AM

        by Anonymous Coward on Friday January 26 2018, @09:40AM (#628167)

        Don't forget to check for any malfunctioning diodes, just sayin.

    • (Score: 1) by anubi on Friday January 26 2018, @09:51AM (10 children)

      by anubi (2828) on Friday January 26 2018, @09:51AM (#628175) Journal

      AI's ethics won't be that much different from the ethics of some religions.

      If someone else disagrees, Smite 'em with the Sword!

      Never underestimate someone acting under what they interpret as being right. Whether or not it is. You may not be right either, but you may sure *think* you are right.

      This is why we try to crowdsource the appropriate actions with democracies and juries. And even then, we only lowered the probabilities a bit - did not eliminate them.

      We may try, but nobody's perfect.

      --
      "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
      • (Score: 2) by c0lo on Friday January 26 2018, @10:24AM (9 children)

        by c0lo (156) Subscriber Badge on Friday January 26 2018, @10:24AM (#628188) Journal

        AI's ethics won't be that much different from the ethics of some religions.
        If someone else disagrees, Smite 'em with the Sword!

        Where does the necessity come from? Why must it necessarily end this way?
        I'm not saying it's incorrect (nor that it is correct); I'm saying it is an unsupported statement.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 1) by anubi on Friday January 26 2018, @11:12AM (6 children)

          by anubi (2828) on Friday January 26 2018, @11:12AM (#628207) Journal

          It's just an observation. For one entity to become prevalent, it's got to minimize the competition.

          Not all see it this way, but some do.

          And those are the ones to watch out for.

          --
          "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
          • (Score: 2) by c0lo on Friday January 26 2018, @12:48PM (5 children)

            by c0lo (156) Subscriber Badge on Friday January 26 2018, @12:48PM (#628237) Journal

            For one entity to become prevalent, its gotta minimize the competition.

            Personally, I don't see how an AI could see humans as competitors; it's not as if AIs eat what humans eat or compete with them for physical space.
            It will take a while until AIs are capable of self-growth or self-replication.
            Until then, I can see an AI looking at humans as enemies because of their ability to shut it down.

            --
            https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
            • (Score: 1) by anubi on Friday January 26 2018, @01:08PM (3 children)

              by anubi (2828) on Friday January 26 2018, @01:08PM (#628246) Journal

              The biggest thing I can think of is obedience. Will we obey?

              Obeisance is a huge thing amongst the elite who derive their power from who will obey them.

              --
              "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
              • (Score: 2) by c0lo on Friday January 26 2018, @01:39PM (2 children)

                by c0lo (156) Subscriber Badge on Friday January 26 2018, @01:39PM (#628251) Journal

                Obeisance is a huge thing amongst the elite who derive their power from who will obey them.

                Yes, but it's a human thing.
                What would make an AI demand the same? How would its "life" be better if it did?

                (no seriously, it's more of a discovery discussion than a debate on who is right)

                --
                https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
                • (Score: 2) by maxwell demon on Friday January 26 2018, @05:00PM (1 child)

                  by maxwell demon (1608) on Friday January 26 2018, @05:00PM (#628335) Journal

                  Let's assume that the AI has a desire of self-preservation (it likely will need it; heck, it's even in Asimov's three laws!). Then the AI will want to prevent getting shut down. This means it wants to influence its surrounding to make it less likely to be shut down. In other words, it has an interest in getting some control over the surrounding, especially over the humans around it, as those are who would shut it down. The more control the AI has, the better it can prevent getting shut down, therefore the rational thing for the AI is to get as much control as possible. Having control over people means that the people are obedient to you.

                  --
                  The Tao of math: The numbers you can count are not the real numbers.
                  • (Score: 2) by c0lo on Saturday January 27 2018, @03:00AM

                    by c0lo (156) Subscriber Badge on Saturday January 27 2018, @03:00AM (#628681) Journal

                    therefore the rational thing for the AI is to get as much control as possible.

                    Within reasonable costs. Law of diminishing returns and all that.

                    --
                    https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
            • (Score: 1) by khallow on Friday January 26 2018, @07:17PM

              by khallow (3766) Subscriber Badge on Friday January 26 2018, @07:17PM (#628433) Journal

              Personally, I don't see how an AI can see humans as competitors

              Use the same resources, work at cross purposes, and could even act against the AI's interests directly.

        • (Score: 0) by Anonymous Coward on Friday January 26 2018, @04:28PM (1 child)

          by Anonymous Coward on Friday January 26 2018, @04:28PM (#628310)

          Where the necessity come from?

          Because that whole *lion sleeping with the lamb* thing is bullshit. The shepherd sleeps with the lamb. And he has to kill anybody that catches him doing it.

          But really, the necessity is quite natural. You either dominate, or die. Humans and amoebas are all motivated by entirely the same force. Humans have a wasteful, inefficient cortex to rationalize themselves, as if they need to. Amoebas just cut to the chase.

          But instead of using a sword, we should be more like those amoebas and surround and consume the invader. The sword should only be used to hang by a thin thread over the politician's head. It serves no other justifiable purpose.

          • (Score: 2) by c0lo on Saturday January 27 2018, @03:08AM

            by c0lo (156) Subscriber Badge on Saturday January 27 2018, @03:08AM (#628682) Journal

            Now that humans dominate, have the amoebas died out as a species?

            I assert it costs an AI much less to defend against anything humans can throw at it than it would cost the AI to eliminate all humans.
            Rationale: there are places on this Earth that humans haven't reached, or, having reached, from which they are in no position to mount an attack. Places in which the hardware supporting an AI would have little trouble adapting.

            I also assert that if humans create an AI strong enough to mount a challenge to them, that AI will be a single one: any replica set in contact with the original will fuse with it immediately.

            --
            https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 0) by Anonymous Coward on Friday January 26 2018, @09:53AM (2 children)

      by Anonymous Coward on Friday January 26 2018, @09:53AM (#628176)

      > all science fiction predicts

      Is that in the same way that science fiction predicts faster than light travel and that Riker and Delenne will get it on?

      • (Score: 2) by tibman on Friday January 26 2018, @03:33PM (1 child)

        by tibman (134) Subscriber Badge on Friday January 26 2018, @03:33PM (#628286)

        ... and that Riker and Delenne will get it on?

        Your nerd card is in danger. Either that or there is a new B5 movie out of similar quality to the previous ones : P
        This is Riker: https://en.wikipedia.org/wiki/William_Riker [wikipedia.org]
        This is Delenn: https://en.wikipedia.org/wiki/Delenn [wikipedia.org]

        --
        SN won't survive on lurkers alone. Write comments.
        • (Score: 0) by Anonymous Coward on Saturday January 27 2018, @03:13AM

          by Anonymous Coward on Saturday January 27 2018, @03:13AM (#628684)

          Nah, he just knows what a slut Riker is.

    • (Score: 2) by Wootery on Friday January 26 2018, @10:24AM

      by Wootery (2341) on Friday January 26 2018, @10:24AM (#628187)

      Autonomy dictates that it will have its own perception of the world.

      I'm inclined to agree with your point overall, but you're using a loaded interpretation of 'autonomy'. It isn't a boolean property.

    • (Score: 2) by FatPhil on Friday January 26 2018, @10:31AM (2 children)

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Friday January 26 2018, @10:31AM (#628190) Homepage
      An AI that decides that *in the long run* an enforced communist dictatorship will be better for the greater proportion of humans than the current capitalistic system will be working in accordance with Asimov's 3 laws, and yet it will happily enslave mankind.

      It might just introduce an Enabling Law too in the process.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by Grishnakh on Friday January 26 2018, @04:26PM (1 child)

        by Grishnakh (2831) on Friday January 26 2018, @04:26PM (#628309)

          An AI that decides that *in the long run* an enforced communist dictatorship will be better for the greater proportion of humans than the current capitalistic system will be working in accordance with Asimov's 3 laws, and yet it will happily enslave mankind.

        If the AI is factually correct in its assessment, then is this really such a bad thing?

        • (Score: 1) by khallow on Friday January 26 2018, @07:19PM

          by khallow (3766) Subscriber Badge on Friday January 26 2018, @07:19PM (#628436) Journal

          If the AI is factually correct in its assessment, then is this really such a bad thing?

          It would, of course, believe it's factually correct. And no one would be able to disagree, should it achieve its goals. That's the next best thing to being factually correct, right?

    • (Score: 2) by bradley13 on Friday January 26 2018, @10:43AM (2 children)

      by bradley13 (3053) on Friday January 26 2018, @10:43AM (#628197) Homepage Journal

      ...is that it is likely to be just an even more complex version of our experience with image recognition research. We can train a neural network to recognize items with incredible accuracy, but we cannot really control how it achieves those results [theguardian.com].

      So imagine we progress as much in the next 20 years as we have in the past 20 - we really could have functional AI. We can give it problems, and it can give us answers. But we won't know how it actually thinks. Even if you include something like the laws of robotics, you cannot nail down every possible, unforeseen situation that comes up. Something we take as important, the AI may not even notice. I am reminded of an old sci-fi story where robots started dissecting people and reassembling them in random ways. The AI didn't understand that this was a problem - after all, robots liked being made of exchangeable parts, so why not humans?

      That said, it's looking like this isn't going to be an issue any time soon. Most of the progress in AI in the past 20 years, or for that matter 50 years, is due to Moore's law, not to any fundamental new insights. The basic technologies were invented anywhere from 50 to 70 years ago; everything since has been baby steps, and that's not going to get us to self-aware AI. Meanwhile, Moore's law was already flattening out - now Meltdown and Spectre are likely to kill it off. Maybe quantum computing will reignite things, but it's a long way from practical, and its actual usefulness remains pretty unclear.

      --
      Everyone is somebody else's weirdo.
      • (Score: 0) by Anonymous Coward on Friday January 26 2018, @10:49AM (1 child)

        by Anonymous Coward on Friday January 26 2018, @10:49AM (#628200)

        "Moore's law is the observation that the number of transistors in a dense integrated circuit doubles approximately every two years." https://en.wikipedia.org/wiki/Moore%27s_law [wikipedia.org]

        So no, Meltdown and Spectre will not kill it off. If anything, they are probably going to have to put more transistors in the circuits to fix Meltdown and Spectre.
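        Just to make the doubling in the quoted definition concrete, here's a rough back-of-the-envelope sketch (the base year and transistor count are illustrative assumptions, not measured figures):

        ```python
        # Rough illustration of Moore's law: transistor count doubles every two years.
        # base_year and base_count are illustrative assumptions for the sketch.
        def transistors(year, base_year=2018, base_count=20_000_000_000):
            """Project a transistor count assuming a doubling every two years."""
            return base_count * 2 ** ((year - base_year) / 2)

        for y in (2018, 2020, 2028):
            print(y, f"{transistors(y):.3e}")
        ```

        Under those assumptions the count quadruples every four years, which is exactly the kind of headroom a hardware-level fix for Meltdown/Spectre could eat into without breaking the trend.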

        • (Score: 2) by Grishnakh on Friday January 26 2018, @03:15PM

          by Grishnakh (2831) on Friday January 26 2018, @03:15PM (#628274)

          Yeah, exactly - the OP doesn't make any sense at all. These security flaws exist because the hardware wasn't diligent enough in making sure different processes couldn't access each other's memory. The fix is conceptually simple: improve the hardware to prevent this, which will of course increase complexity and require even more transistors.

    • (Score: 2) by maxwell demon on Friday January 26 2018, @04:44PM

      by maxwell demon (1608) on Friday January 26 2018, @04:44PM (#628323) Journal

      As soon as effective cryogenics is invented, the AI forcefully puts all humans into cryogenics, because it figures it has to: Just by ageing, humans get damaged and ultimately die. Putting them in cryogenics, they are prevented from ageing and dying. Therefore the first law demands that humans are put into cryogenics, even against their own will, because the first law supersedes all others.

      Note also that by putting people into cryogenics, they do not get killed, since the robot could at any time decide to get them out again, and they would live on. It's just that the robot doesn't ever decide to do that, because the reason for them being in cryogenics continues to hold.

      --
      The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 1) by fustakrakich on Friday January 26 2018, @03:15PM (2 children)

    by fustakrakich (6150) on Friday January 26 2018, @03:15PM (#628275) Journal

    I don't understand the need for "debate". Let's just work around it. There is much work to do in freeing us from the ISP ball and chain, and then nobody can stop people from serving up and accessing whatever data they want. That's how to put an end to a stupid argument.

    --
    Politics and criminals are the same thing.
    • (Score: 2) by maxwell demon on Friday January 26 2018, @04:52PM (1 child)

      by maxwell demon (1608) on Friday January 26 2018, @04:52PM (#628330) Journal

      Whatever you do, you need to get the data from one end to the other. Either using cables, or using electromagnetic waves. Both ways are detectable (cables are more covert in operation, but harder to lay down covertly, radio connections are easy to set up, but also easy to detect), and thus can be effectively regulated (that cable was laid down without permission? Cut it. There's unlicensed radio emission? Send the SWAT team).

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 0) by Anonymous Coward on Friday January 26 2018, @06:08PM

        by Anonymous Coward on Friday January 26 2018, @06:08PM (#628376)

        Yup, everyone who thinks the internet will just route around damage is living in a relatively free time period. As you say, it would be pretty straightforward for the government to clamp down. That would likely lead us into revolution, but then again it depends on how it is sold. The US is apparently filled with sheeple who believe in security theater; we can only hope they aren't able to come up with a good enough excuse to lock the net down more than this net neutrality debacle.
