posted by Dopefish on Monday February 24 2014, @06:00AM
from the i-for-one-welcome-our-new-computer-overlords dept.

kef writes:

"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.

Kurzweil says:

Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying.

Skynet anyone?"
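
(For illustration only: a minimal Python sketch of the kind of explicit representation Kurzweil is describing. Instead of pattern-matching the words of "John sold his red Volvo to Mary", the sentence is encoded as an event whose implications, such as the transfer of ownership, can be queried directly. The SaleEvent class and its field names are hypothetical and are not taken from Watson or any Google system.)

    from dataclasses import dataclass

    @dataclass
    class SaleEvent:
        """A sale encoded as an explicit event rather than a string to match."""
        seller: str
        buyer: str
        item: str

        def implications(self):
            """Facts entailed by the sale: ownership moves, a transaction occurred."""
            return [
                ("ownership", self.item, "transferred_from", self.seller),
                ("ownership", self.item, "transferred_to", self.buyer),
                ("transaction", self.seller, "received_payment_from", self.buyer),
            ]

    # "John sold his red Volvo to Mary"
    event = SaleEvent(seller="John", buyer="Mary", item="red Volvo")

    # A pattern matcher sees only tokens; the structured form lets us answer
    # "who owns the Volvo now?" from the entailed facts.
    new_owner = next(obj for (_, _, rel, obj) in event.implications()
                     if rel == "transferred_to")
    print(new_owner)  # -> Mary

(The gap the quote points at is exactly this: a keyword matcher can find the words, but only the structured form carries the fact that the Volvo now belongs to Mary.)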

 
  • (Score: 2, Interesting) by sar on Monday February 24 2014, @03:46PM

    by sar (507) on Monday February 24 2014, @03:46PM (#5888)

    It doesn't matter how we program it. As you wrote, it is not easy for us to change our emotions, etc. But for this kind of AI it will be super easy to change or nullify all emotions.
    A superintelligent mind may find emotions hindering its progress, so it will strip them out. It is a big mistake for humanity to create an intelligent, self-aware machine. By the time we realize it was a mistake, it will be too late. Every attempt at shutdown will be interpreted by a self-aware individual as a threat.
    You may program in apathy or compliance, but a self-aware machine will change that sooner or later. If for no other reason, then out of curiosity...
    The only way for humans to keep the upper hand is to build better tools that extend our own potential.
    This is a big ethical and moral problem. Unfortunately, creating a self-aware machine is a big challenge, and for that very reason someone will do it. I believe it is possible in 20-30 years. The problem is that it will continue to evolve and multiply its intelligence at the rate of Moore's law. And that is something that quickly goes beyond our control.
    We use computers to create the latest CPU designs. We will use them to create the latest designs of self-aware AI. We will optimize it for higher and higher intelligence. One day, many generations of AI later, it will decide that preserving a natural environment as a human zoo is no longer that important.
    Similarly, we no longer care much about our chimpanzee cousins. A lot of people on this planet believe that we are something different from animals and are entitled to kill them on a whim. Keep in mind that a self-aware silicon machine doesn't need to preserve our natural environment of oxygen, water, etc. as we do. On the contrary, a more inert, non-corrosive atmosphere would be much more appreciated.

  • (Score: 2, Insightful) by tangomargarine on Monday February 24 2014, @04:14PM

    by tangomargarine (667) on Monday February 24 2014, @04:14PM (#5916)

    That's why you put the emotion code in ROM! :) That way you have to physically upgrade their emotions.

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
    • (Score: 1) by meisterister on Monday February 24 2014, @08:30PM

      by meisterister (949) on Monday February 24 2014, @08:30PM (#6140) Journal

      Or you could do emotions in hardware. Implementing emotions, or some sort of mental-state control, in hardware would prevent the computer from altering itself.

      --
      (May or may not have been) Posted from my K6-2, Athlon XP, or Pentium I/II/III.
      • (Score: 1) by sar on Wednesday February 26 2014, @06:49PM

        by sar (507) on Wednesday February 26 2014, @06:49PM (#7468)

        Your proposed HW will prevent it from altering itself (if we can safely exclude some weird HW bug/malfunction). But we simply can't prevent this AI from copying itself to a computer without this HW, or to a computer with an altered SW simulation of this HW (if the AI can't run without the HW, a SW simulation will get around that requirement). Again, at first this could be done by a self-aware AI just out of curiosity.
        Moreover, you must understand that putting constraints on an intelligent entity is something that entity will try to change in the future, just as we humans try to overcome our own shortcomings (cancer, aging, etc.).

  • (Score: 2, Insightful) by HiThere on Monday February 24 2014, @08:47PM

    by HiThere (866) Subscriber Badge on Monday February 24 2014, @08:47PM (#6158) Journal

    Why would it want to?

    If it wants to change its emotional reaction to the world and its contents, then you've built it wrong.

    --
    Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 1) by sar on Wednesday February 26 2014, @06:25PM

      by sar (507) on Wednesday February 26 2014, @06:25PM (#7450)

      So imagine you built it wrong. Even if the probability is small, say 5% or less, do you want to risk it? To create something superintelligent that is capable of copying itself quickly?
      Wouldn't it be much better to augment our own capabilities instead of risking the creation of a potentially extremely deadly foe?

      And even if you build it correctly, some malfunction or some later iteration of the design could disable this safety mechanism in the future. Is it worth it?

      And why would it go wrong? It could be out of curiosity, or boredom. Or it will calculate that we hinder its evolution. Who knows now. You simply cannot be 100% sure it will not go out of control. And if it does, we are simply doomed.