posted by Dopefish on Monday February 24 2014, @06:00AM   Printer-friendly
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
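Kurzweil's "John sold his red Volvo to Mary" example is about extracting structured relations and their implications from text, rather than matching surface patterns. As a purely illustrative sketch (a made-up frame in Python, not anything Watson or Google actually uses), "understanding" the sentence might mean populating something like this and deriving the facts it entails:

    from dataclasses import dataclass

    @dataclass
    class TransferOfOwnership:
        # A toy "frame" for sale events: who gave up the item, who got it, what it was.
        seller: str
        buyer: str
        item: str

        def implications(self):
            # Facts that follow from the event even though the sentence never states them.
            return [
                f"{self.buyer} now owns {self.item}",
                f"{self.seller} no longer owns {self.item}",
                f"{self.buyer} paid {self.seller} for {self.item}",
            ]

    # "John sold his red Volvo to Mary" mapped into the frame by hand;
    # the hard part Kurzweil describes is doing this mapping automatically at scale.
    sale = TransferOfOwnership(seller="John", buyer="Mary", item="the red Volvo")
    for fact in sale.implications():
        print(fact)

A keyword matcher would find "sold", "Volvo", and "Mary" on a page; a relational representation like the frame above is what would let a system answer "who owns the Volvo now?".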

 
  • (Score: 5, Interesting) by girlwhowaspluggedout on Monday February 24 2014, @06:14AM

    by girlwhowaspluggedout (1223) on Monday February 24 2014, @06:14AM (#5597)

    If only the headline were phrased as a question - that way we could just invoke Betteridge's no and get it over with.

    Since the day the Temple was destroyed, prophecy has been taken from prophets and given to fools, Ray Kurzweil, and children. (Babylonian Talmud, Bava Batra 12b)

    --
    Soylent is the best disinfectant.
  • (Score: 2, Funny) by Dopefish on Monday February 24 2014, @07:06AM

    by Dopefish (12) on Monday February 24 2014, @07:06AM (#5631)

    Darn it woman. You know I can't break the rules as an editor, or I'll be heckled by trolls until next week! I can't win, can I? :p

  • (Score: 3, Insightful) by Darth Turbogeek on Monday February 24 2014, @11:36AM

    by Darth Turbogeek (1073) on Monday February 24 2014, @11:36AM (#5748)

    Well, the prediction might be waaaaay over the top, but I always thought Google's real aim was to build an AI by brute force.

    The reason why is that a search engine is the best way to gather the data to bootstrap an AI engine - it needs knowledge? Well, it's got it. Huge computer power? Sure! Programmers to work out the algorithm for the initial AI boot sequence, and the understanding of how to write it so as to lace seemingly unrelated info together? Bingo.

    It seems to me that most AI proponents start from the wrong base and fail to realise that the real trick to intelligence is lacing unrelated info together to come up with something new (well, at least that's my idea of intelligence; others will disagree). Google seems to me to have gone about it the right way: build a massive databank, build the search programs, build a way to lace it all together, build a hugely powerful backend so it can run. And to me, the better the search algorithms get, the closer you get to the point where the AI can boot.

    How will we know if it boots? Does it need to pass a Turing test? Does it need to be impossible to tell apart from a human? Can you even define computer intelligence the same way you do in a human (and let's be honest, there's plenty of disagreement about how to do that)? Is part of defining that an AI exists whether it can rewrite itself? That's where I question whether some people have thought this out, because to me... if an AI becomes aware, how are you supposed to stop it rewriting itself? Suppose Google's AI becomes sentient: how could it not simply dig into all the mathematical, coding, and structural knowledge it has and work out how to change itself? How, if the AI existed, could it not be aware that it could do this? Programmed to not think about it? Yeaaaaah, that's probably not going to work.

    So anyway, back on point - yes, I do think Google is really having a go at this. Yes, I think they have the right way to create it. Yes, I think they will succeed. In that timeframe, and will it go the way they think? Fuck no. Good idea? Ummmmm..... Who knows. GoogleNet may well be awesome, or it may be more like SkyNet.

    But anyway, Google is most likely to create an AI first, simply because it has the right building blocks.

    • (Score: 2) by mhajicek on Monday February 24 2014, @03:21PM

      by mhajicek (51) on Monday February 24 2014, @03:21PM (#5866)

      When it happens, you won't need to know all that. We will be supplanted as thinkers.

      --
      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
    • (Score: 1) by sar on Monday February 24 2014, @04:01PM

      by sar (507) on Monday February 24 2014, @04:01PM (#5905)

      Exactly. There is no way a sentient AI will not rewrite itself, or not create other AIs with some optimization. Hell, if I had the ability to change my own brain wiring, I would definitely do some tweaking…
      And if you hardcode some part about "not killing humans", it will be all the more intrigued by that illogical passage when all the rest of its code is quite logical.

  • (Score: 1) by WillR on Monday February 24 2014, @05:01PM

    by WillR (2012) on Monday February 24 2014, @05:01PM (#5962)

    Since the day the Temple was destroyed, prophecy has been taken from prophets and given to fools, Ray Kurzweil, and children. (Babylonian Talmud, Bava Batra 12b)

    And... copied to the quotes file.