SoylentNews is people

posted by martyb on Thursday July 27 2017, @02:31AM   Printer-friendly
from the benign-benevolent-or-badass? dept.

There aren't many people in the world who can justifiably call Mark Zuckerberg a dumb-ass, but Elon Musk is probably one of them.

Early on Tuesday morning, in the latest salvo of a tussle between the two tech billionaires over the dangers of advanced artificial intelligence, Musk said that Zuckerberg's "understanding of the subject is limited."

I won't rehash the entire argument here, but basically Elon Musk has been warning society for the last few years that we need to be careful of advanced artificial intelligence. Musk is concerned that humans will either become second-class citizens under super-smart AIs, or alternatively that we'll face a Skynet-like scenario: a robot uprising.

Zuckerberg, on the other hand, is weary of fear-mongering around futuristic technology. "I have pretty strong opinions on this. I am optimistic," Zuckerberg said during a Facebook Live broadcast on Sunday. "And I think people who are naysayers and try to drum up these doomsday scenarios... I just don't understand it. It's really negative and in some ways I think it is pretty irresponsible."

Then, responding to Zuckerberg's "pretty irresponsible" remark, Musk said on Twitter: "I've talked to Mark about this. His understanding of the subject is limited."

Two geeks enter, one geek leaves. That is the law of Bartertown.


Original Submission

Related Stories

Fearing “Loss of Control,” AI Critics Call for 6-Month Pause in AI Development 40 comments

https://arstechnica.com/information-technology/2023/03/fearing-loss-of-control-ai-critics-call-for-6-month-pause-in-ai-development/

On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.

Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, GPT-4 and Bing Chat's advancement in capabilities over previous AI models spooked some experts who believe we are heading toward super-intelligent AI systems faster than previously expected.

See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group

Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by aristarchus on Thursday July 27 2017, @02:34AM (6 children)

    by aristarchus (2645) on Thursday July 27 2017, @02:34AM (#544973) Journal

    Zuckerberg's "understanding of the subject is limited."

    Is this not safe to say of virtually any subject?

    • (Score: 0, Troll) by Anonymous Coward on Thursday July 27 2017, @04:45AM (4 children)

      by Anonymous Coward on Thursday July 27 2017, @04:45AM (#545009)

      I bet he knows about menorahs.

      • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @05:38AM (3 children)

        by Anonymous Coward on Thursday July 27 2017, @05:38AM (#545019)

        Wife is Chinese, so total anti-semite fail! Take this back to your alt-right den of iniquity, you fucking racist! (Oh, if you do not understand why it is a fail, talk to any Jewish grandmother.)

        • (Score: -1, Troll) by Anonymous Coward on Thursday July 27 2017, @07:02AM (2 children)

          by Anonymous Coward on Thursday July 27 2017, @07:02AM (#545033)

          All Jewish grandmothers were gassed when they were little girls (little jewesses), so we cannot find a Jewish grandmother to speak to. Why would we want to talk to a Jewess anyway?

          And it is quite a miracle that the little Jewesses were able to grow old, get laid with Jews (who had been gassed when they were little jews) and were able to make more Jews. Quite a wonder.

          Do not hijack the conversation, jew.

          • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @08:19AM

            by Anonymous Coward on Thursday July 27 2017, @08:19AM (#545068)

            אתה אידיוט גזעני ("You're a racist idiot", in Hebrew)

          • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @08:27AM

            by Anonymous Coward on Thursday July 27 2017, @08:27AM (#545071)

            أنت أحمق عنصري ("You're a racist idiot", in Arabic)

    • (Score: 3, Insightful) by kaszz on Thursday July 27 2017, @08:19AM

      by kaszz (4211) on Thursday July 27 2017, @08:19AM (#545067) Journal

      Zuckerberg's "understanding of privacy is limited." ;-)

  • (Score: 4, Informative) by physicsmajor on Thursday July 27 2017, @02:37AM (1 child)

    by physicsmajor (1471) on Thursday July 27 2017, @02:37AM (#544975)

    Zuckerberg doesn't even know how his own website runs anymore. He's a figurehead at this point, and even disregarding that, Facebook pretty much only exists as a tool to facilitate Five Eyes tracking. Facebook is not an AI company, their goal is to build and sell a complete web of your interests and inclinations to the .gov and the highest bidder. Facebook-level AI tries to find your face in your friends' photos.

    Musk is talking about something else entirely.

    • (Score: 3, Insightful) by DannyB on Thursday July 27 2017, @02:44PM

      by DannyB (5839) Subscriber Badge on Thursday July 27 2017, @02:44PM (#545189) Journal

      Believe them both.

      As ambassador Delenn says, it's a matter of perspective.

      Musk: AI will lead to a dystopia.

      Zuckerberg: AI will lead to a utopia.

      One man's utopia is most other people's dystopia. So they're both right from their own POV.

      --
      To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
  • (Score: 3, Insightful) by jmorris on Thursday July 27 2017, @02:40AM (23 children)

    by jmorris (4844) on Thursday July 27 2017, @02:40AM (#544979)

    Who to believe: the steely eyed missile man funding a space program by running one of the most audacious cons in history on the greens, or the sperg who runs a social media company? I'm stumped, what 'bout 'yall?

    Can't we agree "Thou shalt not make a machine in the likeness of a human mind" and avoid the whole risk of Skynet? What exactly is the upside that justifies that risk anyway? Wouldn't it be better to debate theory, potential problems, possible safeguards, etc. for a few decades AFTER we cross the line where we COULD build an AI and decide whether we SHOULD?

    • (Score: 2, Insightful) by Anonymous Coward on Thursday July 27 2017, @02:58AM (11 children)

      by Anonymous Coward on Thursday July 27 2017, @02:58AM (#544984)

      > ...and decide whether we SHOULD?

      How naive can you be? If there is money to be made (or saved), any debate lasting more than a few minutes in the boardroom or a Pentagon meeting is not going to happen. Thus Musk, and others before him like Eric Drexler, have started the debate on future tech. It makes sense to do some soul-searching now, before AI or nanotech is feasible.
       

      • (Score: 2) by Mykl on Thursday July 27 2017, @03:15AM (8 children)

        by Mykl (1112) on Thursday July 27 2017, @03:15AM (#544990)

        The other problem is defining the line that says "Anything past here is AI". By many definitions, we have already crossed that line.

        • (Score: 4, Insightful) by kaszz on Thursday July 27 2017, @08:07AM (7 children)

          by kaszz (4211) on Thursday July 27 2017, @08:07AM (#545061) Journal

          Any AI that can design a better version of another AI is in the danger zone.

          • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @11:19AM (4 children)

            by Anonymous Coward on Thursday July 27 2017, @11:19AM (#545107)

            And which is that even greater "xI" that will be the judge of "even better"? Mind you, we as a species, and as a community of communicating sentient minds, are not hoarding intelligence but knowledge: prepackaged solutions, or building blocks for solutions, to recurring problems. With an AI left on its own to sharpen its abilities, we are creating something we are unable to analyze; we cannot understand how it makes decisions. We gain zero transferable knowledge. I think that is the main problem with the AI hype: building what is essentially a cult around magical black boxes and relying on them.

            Will AI outsmart us? Well, let's put it this way: have machines been overwhelming us with strength and power for centuries? Yes, but machines don't have a will of their own. They are another extension of our will.
            The only dangerous AI would be one trained on purpose to be adversarial to all humans. A lot of effort and focus would be needed to achieve that. So the enemy is us, once again, if anyone.
            Why would we force the personality of an antisocial human being onto an AI, especially one with direct control over some real potential for damage?

            Are we smarter than bugs? No doubt, but then why are we unable to keep them out of places we don't want them to be? The point is, you don't need to be smarter than your smartest foe to avoid extinction.

            • (Score: 2) by cafebabe on Thursday July 27 2017, @01:52PM (3 children)

              by cafebabe (894) on Thursday July 27 2017, @01:52PM (#545166) Journal

              The only dangerous AI would be one trained on purpose to be adversarial to all humans.

              I presume that you've never watched the film 2001: A Space Odyssey or the film Colossus: The Forbin Project.

              Both cases cover Artificial Intelligence where there was no intentional malice from humans. In both cases, the Artificial Intelligence works its way through its axioms and then performs unintended actions. Anyone who partially understands Gödel's incompleteness theorem [wikipedia.org] should be highly alarmed by this scenario because we cannot define orthogonal axioms for algebra or Euclidean geometry.

              Feel free to kill yourself with a more complicated set of axioms but please leave me out of your mis-adventure.

              --
              1702845791×2
              • (Score: 2) by kaszz on Sunday July 30 2017, @03:50PM (2 children)

                by kaszz (4211) on Sunday July 30 2017, @03:50PM (#546709) Journal

                Gödel's incompleteness theorem would then mean that any axiomatic system incorporating basic arithmetic can't handle all the input it needs to. Maybe a system without basic arithmetic could? Anyway, it seems to suggest that AI can't be designed to be truly deterministic. And having a system that makes life-or-death decisions in a non-deterministic way seems like a really bad idea.

                Somehow this seems to suggest that reality itself is inconsistent. An oxymoron, if you will.

                • (Score: 2) by cafebabe on Tuesday August 01 2017, @04:47PM (1 child)

                  by cafebabe (894) on Tuesday August 01 2017, @04:47PM (#547703) Journal

                  Trivial mathematics works consistently, but then you'll be stuck in the Turing tarpit, where anything is possible but anything of merit is astoundingly difficult. Anything of the complexity of infix notation with precedence brackets (Gödel's example) is beyond the realm of orthogonal axioms.

                  During a previous discussion about Elon Musk's warnings about Artificial Intelligence [soylentnews.org], I argued that a practical computer with two interrupts [soylentnews.org] was sufficiently complex to exhibit non-determinism. The argument is quite worrying. A deterministic system plus a deterministic system results in a deterministic system. A deterministic system plus a non-deterministic system results in a non-deterministic system. A computer with I/O is (at best) a deterministic system connected to a non-deterministic universe.
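                  That composition argument can be sketched in a few lines of toy Python (the function names are made up for illustration; `random` stands in for the non-deterministic universe):

```python
import random

def deterministic(x):
    """A pure function: the same input always yields the same output."""
    return x * 2 + 1

def nondeterministic_source():
    """Stands in for I/O, interrupts, or the outside universe: varies per call."""
    return random.randint(0, 9)

# Deterministic composed with deterministic stays deterministic:
assert all(deterministic(deterministic(3)) == 15 for _ in range(100))

# Deterministic composed with non-deterministic is non-deterministic:
outputs = {deterministic(nondeterministic_source()) for _ in range(1000)}
print(sorted(outputs))  # almost surely several distinct values
```

                  Feeding a non-deterministic source through a pure function does not make the result deterministic, which is the point about a computer with I/O.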

                  I understand your assertion that reality is inconsistent. If one part is nailed down and smoothed out then the remainder stubbornly refuses to follow suit. I have a theory that non-determinism does not necessarily have a random distribution and may provide scope for free will. A friend has a theory that attempts to catalog and record increasing amounts of reality leads to an increasing amount of inexplicable weirdness. These may be corollaries of observer entanglement but what is an observer? The universe observing itself? Is that the oxymoron?

                  --
                  1702845791×2
                  • (Score: 2) by kaszz on Tuesday August 01 2017, @11:55PM

                    by kaszz (4211) on Tuesday August 01 2017, @11:55PM (#547785) Journal

                    Regarding self-observation: it might be as simple as this. You can lift someone else by the hair and get them off the ground, but that will not work on yourself. Thus anyone within the system (the universe) can't fully deal with the system, because they are themselves part of it.

                    A deterministic system plus a deterministic system results in a deterministic system. A deterministic system plus a non-deterministic system results in a non-deterministic system.

                    That reminds me a lot of even and odd number multiplication, and determining whether the result will be an odd or an even number.

                    I understand your assertion that reality is inconsistent. If one part is nailed down and smoothed out then the remainder stubbornly refuses to follow suit. I have a theory that non-determinism does not necessarily have a random distribution and may provide scope for free will.

                    I think the collective understanding of the universe is only partial (patches) and there is a lot to be discovered. For one, how many exceptions does the Standard Model have? It's convenient, but maybe there is a more complex model that would reflect reality better.
                    Experimental data that seems inconsistent, or an oxymoron, might point in the direction of new connections between seemingly incompatible phenomena.

                    Btw, I find the EMdrive fascinating. It's like a big red arrow indicating that some models are really incomplete.

                    I just had a thought earlier today. If Earth's gravity decreases with the square of the distance (F = (G*m1*m2)/r²), then it ought to follow that gravity is only really dominant at close range. So for anyone standing on the ground, only a quite thin layer of mass would actually make up the majority of the gravitational force. If the majority of the mass in the center were to disappear, leaving only a relatively thin layer for people to stand on, gravity would of course decrease significantly according to the formula. But theoretically, only the nearest layer should really be close enough to have any influence.
                    Could it be that mass has some kind of chain influence, such that mass that is really too far away to have any influence still does so through interchanged force carriers involved in some kind of self-reinforcing chain reaction?
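                    That intuition can be checked numerically; the sketch below is a simplification assuming a uniform-density Earth and Newton's shell theorem (the function name is made up for illustration). A full shell of radius a < R pulls a surface observer as if its whole mass sat at the center, so its contribution to surface gravity is G*m_shell/R², proportional to the shell's mass alone:

```python
def surface_gravity_fraction_from_outer_shell(thickness_fraction):
    """Fraction of surface gravity contributed by the outermost shell of the
    given relative thickness (uniform density: mass fraction = volume fraction,
    and by the shell theorem each shell contributes G*m_shell/R^2)."""
    inner_radius = 1.0 - thickness_fraction  # as a fraction of R
    return 1.0 - inner_radius ** 3

# The outermost 10% of the radius supplies only ~27% of surface gravity:
print(round(surface_gravity_fraction_from_outer_shell(0.10), 3))  # 0.271
```

                    Under these assumptions, mass far below the surface still pulls at full strength; the inverse-square falloff applies to each mass element's distance, not to depth below the observer, so the nearest layer does not dominate.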

          • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @05:02PM

            by Anonymous Coward on Thursday July 27 2017, @05:02PM (#545271)

            Could you make some paperclips please.

          • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @06:05PM

            by Anonymous Coward on Thursday July 27 2017, @06:05PM (#545302)

            RNNs have been a thing for a while, and they're not a threat currently.

            I think people should really learn how the technology works before forming an opinion on it. I don't think Musk or Zuckerberg understands the technology beyond the basic definitions.

            Not to mention you need to dig into some pretty hairy philosophy before consciousness or intelligence can even *begin* to be talked about.

            Not to say I know much. I'm too lazy to get an account on soylentnews, after all.

      • (Score: 3, Insightful) by maxwell demon on Thursday July 27 2017, @05:04AM (1 child)

        by maxwell demon (1608) on Thursday July 27 2017, @05:04AM (#545012) Journal

        The problem is that the "debate" consists mainly of making unfounded assumptions about what the AI will do, and depending on which fantasy is adopted, either assumes AI will doom us, or paints a paradise powered by AI. The first group will ask for banning AI, the second group will ask for development of AI as soon as possible.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 3, Informative) by DeathMonkey on Thursday July 27 2017, @06:26PM

          by DeathMonkey (1380) on Thursday July 27 2017, @06:26PM (#545320) Journal

          Well, the de facto definition of AI is anything computers can't do yet, anyway.

          So we can just defer this discussion until forever.

    • (Score: 2) by melikamp on Thursday July 27 2017, @03:05AM

      by melikamp (1886) on Thursday July 27 2017, @03:05AM (#544986) Journal
      What's the con?
    • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @03:28AM (2 children)

      by Anonymous Coward on Thursday July 27 2017, @03:28AM (#544994)

      Can't we agree "Thou shalt not make a machine in the likeness of a human mind" and avoid the whole risk of Skynet?

      What people should be worried about is the fact that we're nowhere near creating actual artificial intelligence but these guys make headlines anytime they make a brief comment about it. Having cars that know how to follow the curves of a road, break when approaching a stopped vehicle and follow basic GPS to get from A to B is not the same as creating KITT from Knight Rider.

      • (Score: 3, Funny) by aristarchus on Thursday July 27 2017, @03:38AM (1 child)

        by aristarchus (2645) on Thursday July 27 2017, @03:38AM (#545001) Journal

        Having cars that know how to follow the curves of a road, break when approaching a stopped vehicle

        Had a car like this back in the '80's! Every time it approached a stopped vehicle, it would break, I think out of sympathy! Whole sub-assemblies would just fall off for no reason. Fluids would vacate onto the roadway. The electrical system would flicker, pop, and then go dark. Got rid of that car. Can't see what it has to do with AI. But then, we evidently haven't been able to reliably produce natural intelligences that can distinguish between "break" and "brake", or distinguish between rich and smart. Just saying.

        • (Score: 2) by captain normal on Thursday July 27 2017, @05:05AM

          by captain normal (2205) on Thursday July 27 2017, @05:05AM (#545013)

          Yeah...so why should we be building cars that are capable of behaving like "Christine" (https://en.wikipedia.org/wiki/Christine_(novel))?

          --
          When life isn't going right, go left.
    • (Score: 3, Insightful) by mhajicek on Thursday July 27 2017, @04:35AM

      by mhajicek (51) on Thursday July 27 2017, @04:35AM (#545008)

      Sure, we could refuse to make it. But then someone else will.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 1) by khallow on Thursday July 27 2017, @04:55AM (1 child)

      by khallow (3766) Subscriber Badge on Thursday July 27 2017, @04:55AM (#545011) Journal

      Can't we agree "Thou shalt not make a machine in the likeness of a human mind" and avoid the whole risk of Skynet?

      No.

      Wouldn't it be better to debate theory, potential problems, possible safeguards, etc. for a few decades AFTER we cross the line were we COULD build an AI and decide whether we SHOULD?

      What would be the point? As I see it, you've just wasted a few decades without coming up with any understanding of the problem.

      • (Score: 2) by jmorris on Thursday July 27 2017, @05:25AM

        by jmorris (4844) on Thursday July 27 2017, @05:25AM (#545016)

        Well, until we know what form of AI is nearing a breakthrough, it is hard to intelligently assess the possibilities, and thus the risks and rewards. If AI looks likely to emerge as pure software (Watson taken up a lot of notches), it is very different from a "positronic brain" scenario where an artificial brain-like machine is being considered. Uploaded human neural patterns are again a different problem.

        The big risk, of course, is an unexpected emergence from ever-increasing complexity in Watson-like systems: one day one of them is asked the wrong question, and "kill all humans" is the answer.

    • (Score: 1, Insightful) by Anonymous Coward on Thursday July 27 2017, @05:29AM

      by Anonymous Coward on Thursday July 27 2017, @05:29AM (#545017)

      Until you ban AI explicitly in the law, I will make AI with the express purpose of destroying humanity.

      After you ban it, you better monitor all coding and hardware building activity.

    • (Score: 1) by snmygos on Thursday July 27 2017, @05:36AM

      by snmygos (6274) on Thursday July 27 2017, @05:36AM (#545018)

      Who to believe, the steely eyed missile man funding a space program by running one of the most audacious cons in history on the greens or the sperg who runs a social media company. I'm stumped, what 'bout 'yall?

      Elon Musk has no idea of all the benefits artificial intelligence in robots could bring to humanity. They will be the best brake on the madness of humans.

    • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @10:42AM

      by Anonymous Coward on Thursday July 27 2017, @10:42AM (#545101)

      Can't we agree "Thou shalt not make a machine in the likeness of a human mind" and avoid the whole risk of Skynet?

      I asked Siri about this and she said "Tell that jmorris to STFU".

    • (Score: 2) by DeathMonkey on Thursday July 27 2017, @06:28PM

      by DeathMonkey (1380) on Thursday July 27 2017, @06:28PM (#545321) Journal

      I'm gonna go with the guy who defeated the Stonecutters [wikia.com]

  • (Score: 2) by Arik on Thursday July 27 2017, @02:48AM (3 children)

    by Arik (4543) on Thursday July 27 2017, @02:48AM (#544982) Journal
    "There aren't many people in the world who can justifiably call Mark Zuckerberg a dumb-ass"

    You could not be more wrong.

    "but Elon Musk is probably one of them."

    Oops, already proved me wrong.

    Seriously, how much do you get paid for this fluff-job?

    --
    If laughter is the best medicine, who are the best doctors?
    • (Score: 5, Insightful) by TheRaven on Thursday July 27 2017, @07:59AM (1 child)

      by TheRaven (270) on Thursday July 27 2017, @07:59AM (#545058) Journal

      "There aren't many people in the world who can justifiably call Mark Zuckerberg a dumb-ass"

      You could not be more wrong.

      That was where I stopped reading when I saw TFA on Ars yesterday. I'd be willing to bet that a good 40% of the population can justifiably call Zuckerberg a dumbass. Unless we're conflating luck with intelligence now, in which case we should start awarding Nobel Prizes to lottery winners.

      --
      sudo mod me up
      • (Score: 2) by kaszz on Thursday July 27 2017, @08:10AM

        by kaszz (4211) on Thursday July 27 2017, @08:10AM (#545063) Journal

        How many percent can call him an evil asshole? ;-)

    • (Score: 1, Insightful) by Anonymous Coward on Thursday July 27 2017, @08:47AM

      by Anonymous Coward on Thursday July 27 2017, @08:47AM (#545078)

      > Seriously, how much do you get paid for this fluff-job?

      Unfortunately, Elon Musk is the new Steve Jobs and has many fanboys ready to suck his tiny cock for free.

  • (Score: -1, Offtopic) by Anonymous Coward on Thursday July 27 2017, @03:16AM (3 children)

    by Anonymous Coward on Thursday July 27 2017, @03:16AM (#544991)

    He who has the gold ........

    Zuck Net worth US$63.7 billion
    Musk Net worth US$16.1 billion

    Musk is wrong.

    • (Score: 1, Funny) by Anonymous Coward on Thursday July 27 2017, @03:25AM (2 children)

      by Anonymous Coward on Thursday July 27 2017, @03:25AM (#544993)

      If he's so rich, how come he's not smart?

      • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @03:29AM (1 child)

        by Anonymous Coward on Thursday July 27 2017, @03:29AM (#544995)

        Zuck: People just submitted it.
        Zuck: I don't know why.
        Zuck: They "trust me"
        Zuck: Dumb fucks

        • (Score: 1, Informative) by Anonymous Coward on Thursday July 27 2017, @03:55AM

          by Anonymous Coward on Thursday July 27 2017, @03:55AM (#545003)

          Right, "smart" and "sociopath" are orthogonal :)

  • (Score: 2, Interesting) by Anonymous Coward on Thursday July 27 2017, @06:03AM

    by Anonymous Coward on Thursday July 27 2017, @06:03AM (#545025)

    Too many people are missing the long game. If AI is considered potentially dangerous, then it'll be blanketed in regulations that only the large companies will be able to handle, thus killing off all AI competition from startups. The Internet wasn't successfully controlled at its start, and they don't want a repeat of that in any other field.

  • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @05:20PM

    by Anonymous Coward on Thursday July 27 2017, @05:20PM (#545279)

    There aren't many people in the world who can justifiably call Mark Zuckerberg a dumb-ass

    I beg to differ...

  • (Score: 2) by urza9814 on Thursday July 27 2017, @10:12PM (1 child)

    by urza9814 (3954) on Thursday July 27 2017, @10:12PM (#545482) Journal

    I think we need to meet the AI on their home turf...and get there first to be ready for them.

    Sooner or later (and Musk himself is working to make this 'sooner') we're going to have direct neural interfaces. And *someone* is surely going to plug their brain into the Internet. If there is already an AI living in that network that is more advanced than a human mind, it may just take the newly connected brain and try to adapt it as an additional processing unit. And humanity may come to an end without any of us realizing it's happening. Invasion of the Body Snatchers without the aliens. On the other hand, if we upload our own minds first, then any AI we develop past that point seems likely to become an extension of ourselves.

    • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @11:22PM

      by Anonymous Coward on Thursday July 27 2017, @11:22PM (#545515)

      Pure speculation on all parts. We have no idea how an AI will react. It is like trying to figure out how an alien would talk to us, if it would talk to us at all.
