
SoylentNews is people

posted by martyb on Thursday July 27 2017, @02:31AM
from the benign-benevolent-or-badass? dept.

There aren't many people in the world who can justifiably call Mark Zuckerberg a dumb-ass, but Elon Musk is probably one of them.

Early on Tuesday morning, in the latest salvo of a tussle between the two tech billionaires over the dangers of advanced artificial intelligence, Musk said that Zuckerberg's "understanding of the subject is limited."

I won't rehash the entire argument here, but basically Elon Musk has been warning society for the last few years that we need to be careful with advanced artificial intelligence. Musk is concerned that humans will either become second-class citizens under super-smart AIs, or that we'll face a Skynet-style robot uprising.

Zuckerberg, on the other hand, is weary of fear-mongering around futuristic technology. "I have pretty strong opinions on this. I am optimistic," Zuckerberg said during a Facebook Live broadcast on Sunday. "And I think people who are naysayers and try to drum up these doomsday scenarios... I just don't understand it. It's really negative and in some ways I think it is pretty irresponsible."

Then, responding to Zuckerberg's "pretty irresponsible" remark, Musk said on Twitter: "I've talked to Mark about this. His understanding of the subject is limited."

Two geeks enter, one geek leaves. That is the law of Bartertown.


Original Submission

 
  • (Score: 1) by khallow on Thursday July 27 2017, @04:55AM (1 child)

    > Can't we agree "Thou shalt not make a machine in the likeness of a human mind" and avoid the whole risk of Skynet?

    No.

    > Wouldn't it be better to debate theory, potential problems, possible safeguards, etc. for a few decades AFTER we cross the line where we COULD build an AI and decide whether we SHOULD?

    What would be the point? As I see it, you've just wasted a few decades without coming up with any understanding of the problem.

  • (Score: 2) by jmorris on Thursday July 27 2017, @05:25AM

    Well, until we know what form of AI is nearing a breakthrough, it is hard to intelligently assess the possibilities, and thus the risks and rewards. If AI looks likely to emerge as pure software (Watson taken up a lot of notches), that is very different from a "positronic brain" scenario, where an artificial brain-like machine is being considered. Uploaded human neural patterns are again a different problem.

    The big risk, of course, is an unexpected emergence from ever-increasing complexity in "Watson"-like systems: one day one of them is asked the wrong question, and "kill all humans" is the answer.