
SoylentNews is people

posted by Dopefish on Monday February 24 2014, @06:00AM   Printer-friendly
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
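The distinction Kurzweil draws can be made concrete with a toy sketch. The following is purely illustrative and is not Google's or IBM's actual representation: it shows how a sentence like "John sold his red Volvo to Mary" might be encoded as a structured ownership-transfer event, which supports inferences that surface pattern matching alone cannot.

```python
from dataclasses import dataclass

@dataclass
class OwnershipTransfer:
    """Hypothetical semantic event: ownership of an item moves
    from seller to buyer."""
    seller: str
    buyer: str
    item: str

def parse_sale(sentence: str) -> OwnershipTransfer:
    """Naive extraction for sentences of the form
    '<seller> sold his/her <item> to <buyer>'.
    A toy parser, not a real NLP pipeline."""
    words = sentence.rstrip(".").split()
    sold = words.index("sold")
    to = words.index("to")
    return OwnershipTransfer(
        seller=" ".join(words[:sold]),
        buyer=" ".join(words[to + 1:]),
        item=" ".join(words[sold + 2:to]),  # skip the possessive pronoun
    )

event = parse_sale("John sold his red Volvo to Mary")
# Once the event is encoded, the system can answer questions a pure
# pattern matcher cannot: after the transaction, who owns the Volvo?
owner_after = event.buyer
```

The point of the sketch is that the structured event, not the raw text, is what carries the "transaction and ownership being transferred" meaning Kurzweil describes.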

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Insightful) by Darth Turbogeek on Monday February 24 2014, @11:36AM

    by Darth Turbogeek (1073) on Monday February 24 2014, @11:36AM (#5748)

    Well, the predictor might be waaaaay over the top, but I always thought Google's real aim was to do an AI by brute force.

    The reason why is that a search engine is the best way to gather the data to bootstrap an AI engine - it needs knowledge? Well, it's got it. Huge computer power? Sure! Programmers to work out the algorithm for the initial AI boot sequence, and the know-how to lace seemingly unrelated info together? Bingo.

    It seems to me that most AI proponents start from the wrong base and fail to realise that the real trick to intelligence is the lacing of unrelated info to come up with something new (well, at least that to me is the idea of intelligence; others will disagree). Google seems to me to have gone about it the right way: build a massive databank, build the search programs, build a way to lace it all together, build a hugely powerful backend so it can run. And to me, as the search algorithms get better, the closer you get to the point where the AI can boot.

    How will we know if it boots? Does it need to pass a Turing test? Does it need to be impossible to tell apart from a human? Can you even define computer intelligence the same way you do in a human (and let's be honest, there's plenty of disagreement about how to do that)? Is part of defining an AI whether it can rewrite itself?

    That's where I question whether some people have thought this out, because to me... how on earth, if an AI becomes aware, are you supposed to stop it re-writing itself? Suppose Google's AI becomes sentient: how could it not simply dig into all the mathematical knowledge, coding knowledge, and structural knowledge it has and work out how to change itself? How, if the AI existed, could it not be aware that it could do this? Programmed to not think about it? Yeaaaaah, that's probably not going to work.

    So anyway, back on point - Yes I do think Google is really having a go at this. Yes, I think they have the right way to create it. Yes, I think they will succeed. In the timeframe and will it go the way they think? Fuck no. Good idea? Ummmmm..... Who knows. GoogleNet may well be awesome or it may be more like SkyNet.

    But anyway, Google is most likely to create an AI first, simply because it has the right building blocks.

  • (Score: 2) by mhajicek on Monday February 24 2014, @03:21PM

    by mhajicek (51) on Monday February 24 2014, @03:21PM (#5866)

    When it happens, you won't need to know all that. We will be supplanted as thinkers.

    The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
  • (Score: 1) by sar on Monday February 24 2014, @04:01PM

    by sar (507) on Monday February 24 2014, @04:01PM (#5905)

    Exactly. There is no way a sentient AI will not rewrite itself or not create other, more optimized AIs. Hell, if I had the ability to change my brain wiring, I would definitely do some tweaking…
    And if you hardcode some part about "not killing humans", it will be all the more intrigued by that illogical passage, when all the other code around it is quite logical.