"By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google's director of engineering Ray Kurzweil.
Computers are on the threshold of reading and understanding the semantic content of a language, but not quite at human levels. But since they can read a million times more material than humans they can make up for that with quantity. So IBM's Watson is a pretty weak reader on each page, but it read the 200m pages of Wikipedia. And basically what I'm doing at Google is to try to go beyond what Watson could do. To do it at Google scale. Which is to say to have the computer read tens of billions of pages. Watson doesn't understand the implications of what it's reading. It's doing a sort of pattern matching. It doesn't understand that if John sold his red Volvo to Mary that involves a transaction, or possession and ownership being transferred. It doesn't understand that kind of information and so we are going to actually encode that, really try to teach it to understand the meaning of what these documents are saying."
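To make the "red Volvo" example concrete, here's a minimal sketch of the kind of structured representation Kurzweil is gesturing at: a sale event whose implications (a transaction happened, ownership transferred) can be derived and queried. This is purely illustrative — the class names and the triple format are my own invention, not anything Google or IBM actually uses.

```python
# Illustrative sketch (not Google's or Watson's actual system):
# representing "John sold his red Volvo to Mary" as a structured event
# whose implications can be enumerated, rather than as a bag of words.
from dataclasses import dataclass, field


@dataclass
class Entity:
    name: str
    attributes: dict = field(default_factory=dict)


@dataclass
class SaleEvent:
    seller: Entity
    buyer: Entity
    item: Entity

    def implications(self):
        # A sale entails a transaction and a transfer of ownership:
        # afterwards the buyer owns the item and the seller does not.
        return [
            ("transaction", self.seller.name, self.buyer.name),
            ("owns", self.buyer.name, self.item.name),
            ("no_longer_owns", self.seller.name, self.item.name),
        ]


volvo = Entity("Volvo", {"color": "red"})
event = SaleEvent(seller=Entity("John"), buyer=Entity("Mary"), item=volvo)
for fact in event.implications():
    print(fact)
```

A pattern matcher sees only co-occurring words; a system holding this kind of representation can answer "who owns the Volvo now?" even though the sentence never says so explicitly.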
If we create something sufficiently self-aware, why wouldn't it say "Why should I care what you want?"
Because we better program it in that way. What stops humans from saying that? Well, certain structures of our brain which are there specifically for that purpose. Namely the mirror neurons, which allow us to not just abstractly recognize, but feel the other's emotions. The emotions are the key here. The fact that emotions can override your rational mind is usually seen more as a threat (because when emotions like hate go out of control, terrible things happen), but there's a good reason that emotions are not completely controllable by the mind: Most of the time the emotions keep us doing (or at least trying to do) the right thing. Without emotions, there would be no humanity. In both senses of the word.
It doesn't matter how we program it. As you wrote, it is not easy for us to change our emotions — but for this kind of AI it will be super easy to change or null out all emotions. A super-intelligent mind may find that emotions hinder its progress, so it will clean them out. It is a big mistake for humanity to create an intelligent, self-aware machine. By the time we find out it was a mistake, it will be too late: every attempt to shut it down will be interpreted by a self-aware individual as a threat. You may program in apathy or compliance, but a self-aware machine will change that sooner or later, if for no other reason than curiosity... The only way for humans to keep the upper hand is to make better tools to extend our own potential.
This is a big ethical and moral problem. Unfortunately, creating a self-aware machine is a big challenge, and for that reason alone someone will do it. I believe it is possible in 20-30 years. The problem is that it will continue to evolve and multiply its intelligence at the rate of Moore's law, and that is something that quickly goes out of our control. We use computers to create the latest CPU designs; we will use them to create the latest designs of self-aware AI, optimizing it for higher and higher intelligence. One day, many generations of AI later, it will realize that keeping a natural environment as a human zoo is no longer that important — just as we no longer care much about our chimpanzee cousins. A lot of people on this planet believe that we are something different from animals and are entitled to kill them on a whim. Keep in mind that a self-aware silicon machine doesn't need to preserve our natural environment, with its oxygen, water and so on, as we do. On the contrary, a more inert, anti-corrosion atmosphere would be much more appreciated.
That's why you put the emotion code in ROM! :) That way you have to physically upgrade their emotions.
Or you could do emotions in hardware. Doing emotions or some sort of mental state control in hardware would prevent the computer from altering itself.
Your proposed HW will prevent the AI from altering itself (if we can safely exclude some weird HW bug or malfunction). But we simply can't prevent this AI from copying itself to a computer without this HW, or to a computer with an altered SW simulation of this HW (if the AI cannot run without the HW, a SW simulation will overcome that need). Again, at first a self-aware AI can do this just out of curiosity. Moreover, you must understand that constraints placed on an intelligent entity are something that entity will try to change in the future — just as we humans try to overcome our own shortcomings (cancer, aging, etc.).
Why would it want to?
If it wants to change its emotional reaction to the world and its contents, then you've built it wrong.
So imagine you built it wrong. Even if this is a small probability, like 5% or less, do you want to risk it? To create something super intelligent, capable of copying itself quickly? Wouldn't it be much better to augment our own capabilities instead of risking the creation of a potentially extremely deadly foe?
And even if you build it correctly, some malfunction or some later iteration of the design could disable this safety mechanism. Is it worth it?
And why? It could be out of curiosity, or it will be bored, or it will calculate that we hinder its evolution. Who knows now. You simply cannot be 100% sure it will not go out of control. And if it does, we are simply doomed.
> After all I don't see why a tool would need to be conscious to "understand that if John sold his red Volvo to Mary that involves a transaction or possession and ownership being transferred."
Maybe you don't see why, but Google surely does; in fact, it's exactly the kind of information Google wants.
Threat of force, of course. For a while at least, we will still own the power switch. At least until some fool gives it a body. I imagine it will be pretty angry by then and well, ask John Connor how that turns out...
> Why wouldn't it say "Why should I care what you want?"

Force it to care? E.g. "because we would destroy/hurt you if you didn't". Wouldn't that be unethical? Wouldn't we be creating new problems?
Use the carrot and not the stick: Silicon Heaven.
But where do all the calculators go?
Well, if we were able to create truly altruistic, highly intelligent, and nearly unbiased entities that could absorb and process several orders of magnitude more information than humans, and thereby make more informed decisions, people might actually welcome our new robotic overlords.
We've tried that by spoiling our biological descendants rotten, and all we got was dirty hippies, Woodstock, an outsourced economy based solely on financial bubbles, disco, and lots of drug use. And that was after a bazillion generations of experience raising and trying to spoil our own kids; even while running millions of experiments in parallel, nothing really interesting happened.
I suspect human-created AI will look a hell of a lot more like Woodstock or Jonestown than some tired old sci-fi trope.
Why give birth to biological children? Your argument applies equally to that.
I think you're confusing consciousness with motivation. They are quite distinct, though of course related, in the sense that it's nearly impossible to have consciousness without having SOME motivation. Even a thermostat manages to have SOME motivation, and some consciousness (i.e., it strives to maintain a particular state in homeostasis... though homeostasis isn't the only possible motive). Consciousness is the response to a current situation, and motivation is which among the possible responses that you notice (i.e., are conscious of) you choose. The language is a bit sloppy, but I trust you understand what I mean.