Physicists, philosophers, professors, authors, cognitive scientists, and many others have weighed in on Edge.org's 2015 annual question: What do you think about machines that think? See all 186 responses here.
Also, what do you think?
My 2¢: There's been a lot of focus on potential disasters that are almost certainly not going to happen, e.g. a robot uprising, or mass poverty through unemployment. Most manufacturers of artificial intelligence won't program their machines to seek self-preservation at the expense of their human masters; it wouldn't sell. Secondly, if robots can one day produce almost everything we need, including more robots, with almost no human labour required, then robot-powered factories will become like libraries: relatively cheap to maintain, plentiful, and with a public one set up in every town or suburb for anyone to use. If you think the big corporations wouldn't allow it, why do they allow public libraries?
(Score: 2) by bzipitidoo on Friday January 23 2015, @04:45AM
Certainly we must be a little cautious, so that we don't create the likes of Skynet. That won't be hard, and I'm not that worried about it. The robot apocalypse won't come. Sure, robots can do many things we can't, but machinery has a long way to go yet to equal the product of billions of years of evolution. Consider how crude an airplane is next to a bird, despite being able to fly much higher and faster than any bird can. When it gets close, we will likely find either that robotics can't improve on animal machinery for many purposes, or that we can incorporate these robotic advances into our own bodies, becoming cyborgs. And not cyborgs like the rather cheesy Borg of ST:TNG, with their exposed tubing, the human hand downgraded to a needle-shaped rotating network-interface thing, sluggish zombie-like movement, and a sort of sinister disregard for the presence of potentially hostile aliens. The writers were only trying to make the Borg as creepy as possible, and resorted to the tripe typical of slasher flicks.
No, it will start by integrating something like a tablet computer directly into our bodies. No screen will be needed, because it will tap directly into the optic nerve or the vision area of the brain and put a sort of heads-up display on our eyeballs; no keyboard or mouse will be needed either, since we will simply be able to think the input we want to enter. At the same time, we will improve our bodies, sort of like Wolverine from the X-Men. We already do it now, if only crudely, giving people artificial hips, knees, teeth, and even hearts. It will turn sports on its ear. We've already had a little taste of it with the "Blade Runner" possibly running faster than is possible for anyone with natural legs.
(Why do they allow public libraries? It's not a question of allowing, it's that they don't have the power to shut public libraries down. Some would if they could.)
(Score: 2) by Freeman on Friday January 23 2015, @05:29PM
For some reason I kept thinking about this ST:TNG episode while reading your post. http://en.wikipedia.org/wiki/The_Game_(Star_Trek:_The_Next_Generation) [wikipedia.org]
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by HiThere on Friday January 23 2015, @06:44PM
Variations on Cyborgs are already here, and will clearly integrate more capabilities. That doesn't mean that robots won't also become more capable.
And the argument that "they won't implement a feature because they don't want it" ignores the nature of intelligence, artificial or not. Intelligence is general purpose. What's important is the details of the motivational structure, and *I* at least don't understand that well enough to know what any particular implementation could lead to. (More particularly, I can understand *SOME* of the things that might result, but by no means all.)
It's important to remember that intelligence requires learning, and it's quite difficult to put useful bounds around what can be learned except via limitations on interest. So a "robot uprising" is probably easy to prevent, but that wouldn't keep machines from taking over via some other route. In fact, I expect them to be pushed into taking over by people. The route I see is that they will start by learning to become effective advisors, whose advice you are better off taking; then laziness on the part of people will automate the acceptance of that advice. (Spam filters already censor our email in this way... but would you want to try without them?)
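The advisor-to-automation slide described above can be sketched in a few lines. This is a toy illustration, not anyone's actual system; the names, threshold, and scoring function are all hypothetical stand-ins for something like a spam filter's confidence score:

```python
# Toy sketch of "advice" quietly becoming automation: an advisor scores each
# item, and above a confidence threshold the human approval step is skipped.
# All names and values here are hypothetical illustrations.

AUTO_ACCEPT_THRESHOLD = 0.9  # the laziness knob: raise it and fewer decisions stay human

def advisor_score(item):
    # Stand-in for a real classifier's confidence (e.g. a spam filter).
    return item["confidence"]

def handle(item, ask_user):
    score = advisor_score(item)
    if score >= AUTO_ACCEPT_THRESHOLD:
        return "auto-accepted"  # advice applied with no human in the loop
    return "accepted" if ask_user(item) else "rejected"

# Usage: the human is only consulted on low-confidence items.
items = [{"confidence": 0.95}, {"confidence": 0.5}]
results = [handle(i, ask_user=lambda item: True) for i in items]
print(results)  # ['auto-accepted', 'accepted']
```

The point is that nothing sinister is required: each individual threshold bump is a reasonable convenience, and the human review step erodes one notch at a time.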
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.