
posted by Dopefish on Monday February 24 2014, @06:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"

 
  • (Score: 2, Insightful) by HiThere (866) on Monday February 24 2014, @08:47PM (#6158) Journal

    Why would it want to?

    If it wants to change its emotional reaction to the world and its contents, then you've built it wrong.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 1) by sar (507) on Wednesday February 26 2014, @06:25PM (#7450)

    So imagine you built it wrong. Even if the probability is small, say 5% or less, do you want to risk it? To create something superintelligent that can copy itself quickly?
    Wouldn't it be much better to augment our own capabilities instead of risking the creation of a potentially deadly foe?

    And even if you build it correctly, a malfunction or some future design iteration could disable that safety mechanism. Is it worth it?

    And why would it? Perhaps out of curiosity, or boredom, or because it calculates that we hinder its evolution. Who knows? You simply cannot be 100% sure it will not go out of control. And if it does, we are simply doomed.