
posted by Dopefish on Monday February 24 2014, @06:00AM   Printer-friendly
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
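
As a rough illustration of the "encode the meaning" idea Kurzweil describes, the sale can be stored as a structured event from which the ownership transfer is derived explicitly rather than pattern-matched. This is only a toy sketch; the SaleEvent class and implied_facts rule below are invented for illustration and are not anything IBM or Google actually uses.

    # Toy sketch only: "John sold his red Volvo to Mary" as a structured event,
    # plus one hand-written rule that makes the implied transfer of ownership
    # explicit. Names here (SaleEvent, implied_facts) are invented for this example.
    from dataclasses import dataclass

    @dataclass
    class SaleEvent:
        seller: str
        buyer: str
        item: str

    def implied_facts(event):
        """Return subject-predicate-object facts implied by a sale event."""
        return [
            (event.seller, "transferred ownership of", event.item),
            (event.buyer, "now owns", event.item),
            (event.seller, "no longer owns", event.item),
        ]

    sale = SaleEvent(seller="John", buyer="Mary", item="red Volvo")
    for fact in implied_facts(sale):
        print(fact)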

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by VLM (445) on Monday February 24 2014, @09:43PM (#6211)

    "Maybe more intelligent, and of course not having certain experiences (and having certain others that we don't have), but basically not entirely different from us."

    Picture 4chan /b/ distilled down to a hive mind, then imagine something a billion times weirder than /b/. Something that makes /b/ look like a bunch of conformist suburban neocon soccer moms in comparison.

    "However if the thinking of the AI remains alien to us, then it certainly won't get our sympathy, and we will always consider it something fundamentally different and potentially dangerous that we have to protect ourselves against."

    Told you so. Totally 4chan /b/ in a nutshell. Again, imagine something a billion times weirder yet.

    By analogy, we don't even have to leave computers and the internet to find the "other"; now imagine something not even based on the same biological hardware, not even the same species.

    I don't think there is any inherent reason to conclude there will be any cultural common ground, at all. Maybe the golden rule, maybe, but not much more.

  • (Score: 2) by maxwell demon (1608) on Monday February 24 2014, @10:10PM (#6237) Journal

    I think the inherent reason will be that we built it, and we did so explicitly trying to build something "like us".

    --
    The Tao of math: The numbers you can count are not the real numbers.