
SoylentNews is people

posted by martyb on Wednesday December 03 2014, @12:36AM   Printer-friendly
from the The-cybersapiens-are-coming! The-cybersapiens-are-coming! dept.

The BBC is reporting that Stephen Hawking warns artificial intelligence could end mankind:

Prof Stephen Hawking, one of Britain's pre-eminent scientists, has said that efforts to create thinking machines pose a threat to our very existence. He told the BBC: "The development of full artificial intelligence could spell the end of the human race."

It seems he is mostly concerned about building machines smarter than we are:

Prof Hawking says the primitive forms of artificial intelligence developed so far have already proved very useful, but he fears the consequences of creating something that can match or surpass humans.

This seems to echo Elon Musk's fears. What do you think?

Since Elon Musk said the same[*], some here have disparaged the statement. Stephen Hawking, however, has more street cred[ibility] than Musk. Are they right, or will other catastrophic scenarios overtake us before AI does?

[* Ed's note. See: Elon Musk scared of Artificial Intelligence - Again.]

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1) by khallow (3766) Subscriber Badge on Wednesday December 03 2014, @03:35PM (#122274) Journal

    There is a problem that our culture pushes sociopaths and psychopaths into positions of leadership

    Compared to whom? My view is that sociopaths and such exist in the first place because there have been such "positions of leadership" for at least the past few thousand years. Second, why the assumption that such behavior isn't normal human behavior when presented with power over others? Keep in mind the oft-repeated cliché of the person who becomes "corrupted" by wealth or power.

    So I'm sure if we birth an AI it'll automatically go on a killing spree wiping out its "parents" because obviously all animal children do that 100% across all species.

    We wouldn't be its "species", and by creating such an AI you start by throwing away that particular rulebook.

    Fourthly, combine the fact of "github-like development models" and "prod/dev/test deployment" with the non-technical assumption that there can be only one.

    If you pile up a bunch of firewood and light it in numerous places, do you assume it will stay numerous independent blazes? Humanity has created a vast pile of firewood: humans left to their own devices, and those devices running very inefficient programs for relatively simple tasks (like showing pretty images on a screen). One of the concerns here is that a single AI might be able to take over that entire thing, like a fire consuming a pile of wood.