
SoylentNews is people

posted by Dopefish on Monday February 24 2014, @06:00AM   Printer-friendly
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by maxwell demon on Monday February 24 2014, @08:58PM

    by maxwell demon (1608) on Monday February 24 2014, @08:58PM (#6167) Journal

    Actually, it's not quite that simple. With humans, something new has entered evolution: we are now able to pass on not only genetic information to our offspring, but also information from one brain to another. That is, humans not only have genes, they also have memes. And the memes are no less selfish than the genes. When people are willing to die for their ideals, thereby taking themselves out of the gene pool, it's a case of the memes winning over the genes.

    Therefore, whether we will accept the AI depends very much on how similar it is to our mind, that is, how well it will be able to carry and pass on our memes. Of course we will not think of it that way. We will notice that the AI understands us and we understand the AI. We will be able to relate to the AI, to be friends with it, to share thoughts with it. We will be able to accept the AI as long as we have the impression that it is "just like us". Maybe more intelligent, and of course not having certain experiences (and having certain others that we don't have), but basically not entirely different from us.

    If we manage to build such an AI, I guess over time it will gain widespread acceptance, and I can even imagine that many people would accept the idea that AIs will eventually replace us (as long as that replacement doesn't happen in a violent way). However, if the thinking of the AI remains alien to us, then it certainly won't get our sympathy, and we will always consider it something fundamentally different and potentially dangerous that we have to protect ourselves against.

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by VLM on Monday February 24 2014, @09:43PM

    by VLM (445) Subscriber Badge on Monday February 24 2014, @09:43PM (#6211)

    "Maybe more intelligent, and of course not having certain experiences (and having certain others that we don't have), but basically not entirely different from us."

    Picture 4chan /b/ distilled down to a hive mind, then imagine something a billion times weirder than /b/. Something that makes /b/ look like a bunch of conformist suburban neocon soccer moms in comparison.

    "However if the thinking of the AI remains alien to us, then it certainly won't get our sympathy, and we will always consider it something fundamentally different and potentially dangerous that we have to protect ourselves against."

    Told you so. That's totally 4chan /b/ in a nutshell. Again, imagine something a billion times weirder yet.

    By analogy, we don't even have to leave computers and the internet to find the "other". Now imagine something not even based on the same biological hardware, not even the same species.

    I don't think there is any inherent reason to conclude there will be any cultural common ground, at all. Maybe the golden rule, maybe, but not much more.

    • (Score: 2) by maxwell demon on Monday February 24 2014, @10:10PM

      by maxwell demon (1608) on Monday February 24 2014, @10:10PM (#6237) Journal

      I think the inherent reason will be that we built it, and that we did so explicitly trying to build something "like us".

      --
      The Tao of math: The numbers you can count are not the real numbers.