
posted by martyb on Thursday July 27 2017, @02:31AM
from the benign-benevolent-or-badass? dept.

There aren't many people in the world who can justifiably call Mark Zuckerberg a dumb-ass, but Elon Musk is probably one of them.

Early on Tuesday morning, in the latest salvo of a tussle between the two tech billionaires over the dangers of advanced artificial intelligence, Musk said that Zuckerberg's "understanding of the subject is limited."

I won't rehash the entire argument here, but basically Elon Musk has been warning society for the last few years that we need to be careful with advanced artificial intelligence. Musk is concerned that humans will either become second-class citizens under super-smart AIs, or that we'll face a Skynet-style robot uprising.

Zuckerberg, on the other hand, is weary of fear-mongering around futuristic technology. "I have pretty strong opinions on this. I am optimistic," Zuckerberg said during a Facebook Live broadcast on Sunday. "And I think people who are naysayers and try to drum up these doomsday scenarios... I just don't understand it. It's really negative and in some ways I think it is pretty irresponsible."

Then, responding to Zuckerberg's "pretty irresponsible" remark, Musk said on Twitter: "I've talked to Mark about this. His understanding of the subject is limited."

Two geeks enter, one geek leaves. That is the law of Bartertown.


Original Submission

 
  • (Score: 0) by Anonymous Coward on Thursday July 27 2017, @11:19AM (#545107) (4 children)

    And which even greater "xI" is going to be the judge of "even better"? Mind you, we as a species and as a community of communicating sentient minds are not hoarding intelligence, but knowledge - prepackaged solutions, or building blocks for solutions, to recurring problems. With an AI left on its own to sharpen its abilities, we are creating something we are unable to analyze, with no way to understand how it makes decisions. We gain zero transferable knowledge - I think that is the main problem with the AI hype: essentially building a cult around magical black boxes and relying on them.

    Will AI outsmart us? Well, let's put it this way: have machines been overwhelming us with strength and power for centuries? Yes, but machines don't have a will of their own. They are another extension of our will.
    The only dangerous AI would be one trained on purpose to be adversarial to all humans. A lot of effort and focus would be needed to achieve that goal. So the enemy, if any, is us, once again.
    Why would we force the personality of an antisocial human being onto an AI, especially one with direct control over some real potential for damage?

    Are we smarter than bugs? No doubt, but why are we unable to keep them out of places we don't want them to be? The point is, you don't need to be smarter than your smartest foe to avoid extinction.

  • (Score: 2) by cafebabe (894) on Thursday July 27 2017, @01:52PM (#545166) Journal (3 children)

    The only dangerous AI would be one trained on purpose to be adversarial to all humans.

    I presume that you've never watched the film 2001: A Space Odyssey or the film Colossus: The Forbin Project.

    Both films depict an artificial intelligence built with no intentional malice from humans. In both cases, the artificial intelligence works its way through its axioms and then performs unintended actions. Anyone who even partially understands Gödel's incompleteness theorem [wikipedia.org] should be highly alarmed by this scenario, because we cannot define orthogonal axioms for algebra or Euclidean geometry.
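    Roughly stated (a sketch of the first incompleteness theorem, not its full formal statement): for any consistent, effectively axiomatized theory $T$ strong enough to interpret basic arithmetic, there is a sentence $G_T$ such that

        $T \nvdash G_T \quad\text{and}\quad T \nvdash \lnot G_T$

    i.e. the system can neither prove nor refute $G_T$ from its own axioms. An AI grinding through its axioms can therefore run into questions its own rules genuinely cannot settle.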

    Feel free to kill yourself with a more complicated set of axioms, but please leave me out of your misadventure.

    --
    1702845791×2
    • (Score: 2) by kaszz (4211) on Sunday July 30 2017, @03:50PM (#546709) Journal (2 children)

      Gödel's incompleteness theorem would then mean that any axiomatic system that includes basic arithmetic can't handle all the input it needs to. Maybe a system without basic arithmetic could? Anyway, it seems to suggest that an AI can't be designed to be truly deterministic. And having a system that makes life-or-death decisions in a non-deterministic way seems like a really bad idea.

      Somehow this seems to suggest that reality itself is inconsistent. An oxymoron, if you will.

      • (Score: 2) by cafebabe (894) on Tuesday August 01 2017, @04:47PM (#547703) Journal (1 child)

        Trivial mathematics works consistently, but then you're stuck in the Turing tarpit, where anything is possible but anything of merit is astoundingly difficult. Anything with the complexity of infix notation with precedence brackets (Gödel's example) is beyond the realm of orthogonal axioms.

        During a previous discussion about Elon Musk's warnings about Artificial Intelligence [soylentnews.org], I argued that a practical computer with two interrupts [soylentnews.org] was sufficiently complex to exhibit non-determinism. The argument is quite worrying. A deterministic system plus a deterministic system results in a deterministic system. A deterministic system plus a non-deterministic system results in a non-deterministic system. A computer with I/O is (at best) a deterministic system connected to a non-deterministic universe.
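        A minimal sketch of that composition argument (hypothetical code, nothing from the linked discussion): two pure steps compose into a reproducible whole, but mixing in one timing-dependent input, standing in for an interrupt, makes the whole run irreproducible.

            # Deterministic + deterministic stays deterministic;
            # deterministic + non-deterministic does not.
            import time

            def det_step(x):
                # Pure function: the same input always yields the same output.
                return (x * 31 + 7) % 1000

            def interrupt_like_input():
                # Stand-in for an interrupt: the value depends on *when* the
                # outside world fires, which the program does not control.
                return time.perf_counter_ns() % 2

            def run(seed):
                state = det_step(det_step(seed))                 # det + det
                return det_step(state + interrupt_like_input())  # det + non-det

            # Two runs with the same seed can disagree, because one ingredient
            # (the timing of the "interrupt") lies outside the program.
            print(run(42), run(42))

        The point of the sketch is only the composition rule: one non-deterministic ingredient is enough to contaminate the whole system.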

        I understand your assertion that reality is inconsistent. If one part is nailed down and smoothed out then the remainder stubbornly refuses to follow suit. I have a theory that non-determinism does not necessarily have a random distribution and may provide scope for free will. A friend has a theory that attempting to catalog and record increasing amounts of reality leads to an increasing amount of inexplicable weirdness. These may be corollaries of observer entanglement, but what is an observer? The universe observing itself? Is that the oxymoron?

        --
        1702845791×2
        • (Score: 2) by kaszz (4211) on Tuesday August 01 2017, @11:55PM (#547785) Journal

          Regarding self-observation: it might be as simple as this: you can lift someone else by the hair and get them off the ground, but that will not work on yourself. Thus anyone within the system (the universe) can't fully deal with the system, because they are themselves part of it.

          A deterministic system plus a deterministic system results in a deterministic system. A deterministic system plus a non-deterministic system results in a non-deterministic system.

          That reminds me a lot of multiplying even and odd numbers and determining whether the result will be odd or even.
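          The analogy can be made concrete (my mapping, not spelled out above): read "odd" as deterministic and "even" as non-deterministic, and parity under multiplication behaves the same way:

              $\text{odd} \times \text{odd} = \text{odd}$    (deterministic + deterministic = deterministic)
              $\text{odd} \times \text{even} = \text{even}$  (deterministic + non-deterministic = non-deterministic)

          A single even factor forces an even product, just as a single non-deterministic component forces a non-deterministic system.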

          I understand your assertion that reality is inconsistent. If one part is nailed down and smoothed out then the remainder stubbornly refuses to follow suit. I have a theory that non-determinism does not necessarily have a random distribution and may provide scope for free will.

          I think the collective understanding of the universe is only partial (patches) and there is a lot left to be discovered. For one, how many exceptions does the standard model have? It's convenient, but maybe there is a more complex model that would reflect reality better.
          Experimental data that seems inconsistent, an oxymoron, might point in the direction of new connections between seemingly incompatible phenomena.

          Btw, I find the EMdrive fascinating. It's like a big red arrow pointing out that some models are really incomplete.

          I just had a thought earlier today. If Earth's gravity decreases with the square of the distance (F = G·m1·m2/r²), then it ought to follow that gravity is only really dominant at close range. So, for anyone standing on the ground, only a quite thin layer of mass should actually make up the majority of the gravitational force. If the majority of the mass in the center were to disappear, leaving only a relatively thin layer for people to stand on, the force would of course decrease significantly according to the formula. But theoretically, only the nearest layer should be close enough to have much influence.
          Could it be that mass has some kind of chain influence, such that mass really too far away to have any direct influence still exerts it through interchanged force carriers involved in some kind of self-reinforcing chain reaction?
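          One way to check the "thin nearby layer" intuition numerically (my sketch, assuming a uniform-density sphere and Newton's shell theorem, under which every interior shell pulls on a surface observer as if its mass sat at the centre):

              # Fraction of surface gravity contributed by the shells lying
              # within a given depth below the observer's feet, for a
              # uniform-density sphere of radius R = 1.
              def surface_gravity_fraction(depth_fraction):
                  r_inner = 1.0 - depth_fraction
                  # A shell at radius a contributes ~ a^2 da (its mass grows
                  # as a^2 da, and each shell acts from the centre), so the
                  # cumulative contribution of radii [r_inner, 1] is
                  # 1 - r_inner^3.
                  return 1.0 - r_inner ** 3

              for depth in (0.01, 0.10, 0.50):
                  print(f"outer {depth:.0%} of radius -> "
                        f"{surface_gravity_fraction(depth):.1%} of g")
              # outer 1% of radius -> 3.0% of g
              # outer 10% of radius -> 27.1% of g
              # outer 50% of radius -> 87.5% of g

          Under that (admittedly crude) uniform-density assumption, the outer 10% of the radius supplies only about a quarter of surface gravity, so the deep interior mass still matters; the nearby layer doesn't dominate on its own.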