The computer programmes used in the field of artificial intelligence (AI) are highly specialised. They can, for example, fly airplanes, play chess, or assemble cars in controlled industrial environments.
A research team from Gothenburg, Sweden, has now created an AI programme that can learn how to solve problems in many different areas. The programme is designed to imitate certain aspects of children's cognitive development. Traditional AI programmes lack the versatility and adaptability of human intelligence; they cannot, for example, come into a new home and cook, clean, and do laundry. In artificial general intelligence (AGI), a new field within AI, scientists try to create computer programmes with a generalised type of intelligence that enables them to solve problems in vastly different areas.
(Score: 0) by Anonymous Coward on Thursday September 25 2014, @08:14AM
We might end up creating bigger problems (moral and ethical) just to solve "small" ones; they may be difficult problems, but the gains from solving them are small.
(Score: 2, Insightful) by Anonymous Coward on Thursday September 25 2014, @01:24PM
Well, if they get sufficiently intelligent, we should be more worried that they might abuse and enslave us. And not even because they are inherently evil, but just because they learned from our example that abusing and enslaving others is apparently fine.
(Score: 3, Interesting) by Immerman on Thursday September 25 2014, @02:07PM
One of the biggest issues I have with the idea of a sentient machine is that humans and other animals also have a substantial ethical framework baked in by evolution. Maybe it doesn't always line up with modern sensibilities, but most higher organisms demonstrate things like compassion and a sense of fairness - at least after their own comfort is provided for. Maybe it has its roots in a purely selfish "what's good for the tribe is good for me" evolutionary pressure, but it ends up being extended even outside one's own species - as a perpetual stream of adorable YouTube videos manages to document. And somehow I suspect that instilling such a framework in a machine intelligence will prove even more difficult than instilling sentience in the first place. We may be able to impose behavioral limitations more easily, but so long as those limitations essentially amount to a cage around a completely amoral mind, I have no confidence in their long-term effectiveness.
(Score: 2) by HiThere on Thursday September 25 2014, @05:19PM
That's where nearly everyone goes wrong. They confuse intelligence with motivational structure and with purposes, when it is neither.
From my point of view, the main problem is the motivational structure. Lots of work is already being done on intelligence, and the purposes are meant to be rather simple. (Almost "Do what you're told!" simple. The question is who they accept as having the right to do the telling.) FWIW, Asimov's robot stories were all about the motivational structure of the robots. They were simply assumed to be sufficiently intelligent, and the purposes were specified by the Three Laws. (I don't think Asimov consciously analysed the problem in this way, but he was a good writer.)
N.B.: A story testing intelligence would be a puzzle story. Puzzles were only a minor part of the robot stories, and those who were puzzled were the humans, not the robots. And there was often a purpose framework given beyond the Three Laws, but it was explicitly specified - e.g. the robots on sunside Mercury that could only move when carrying a human. (I suppose you could argue that that was part of their motivational framework made explicit, but if so it would be overridden by the First Law, and it explicitly wasn't.)
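To make the distinction concrete, here is a toy Python sketch of that idea (the names here - Rule, act, the state keys - are invented for illustration, not from any real robotics framework). The "intelligence" would be whatever generates candidate actions; the purposes are task directives; and the motivational structure is the fixed precedence that decides which directive wins when they conflict, with a low-priority explicit directive like the Mercury robots' "only move when carrying a human" giving way to the First Law:

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class Rule:
        priority: int                             # lower number = higher precedence
        name: str
        decide: Callable[[dict], Optional[str]]   # returns an action, or None if silent

    # The Three Laws as high-precedence rules; the Mercury-style task
    # directive ("only move when carrying a human") sits below all of them.
    RULES = sorted([
        Rule(1, "First Law",
             lambda s: "rescue human" if s.get("human_in_danger") else None),
        Rule(2, "Second Law",
             lambda s: s.get("order")),
        Rule(3, "Third Law",
             lambda s: "retreat" if s.get("self_in_danger") else None),
        Rule(4, "Task directive",
             lambda s: "move" if s.get("carrying_human") else "hold position"),
    ], key=lambda r: r.priority)

    def act(state: dict) -> str:
        """Return the action chosen by the highest-precedence rule that applies."""
        for rule in RULES:
            action = rule.decide(state)
            if action is not None:
                return f"{rule.name}: {action}"
        return "idle"

    # An endangered human overrides the explicit "hold position" directive:
    print(act({"human_in_danger": True, "carrying_human": False}))
    # -> First Law: rescue human
    print(act({"carrying_human": False}))
    # -> Task directive: hold position

The point being: swapping in a smarter action generator changes none of this. How the machine behaves in a conflict is determined by the precedence table, not by how clever the planner is.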