Forget super-AI. Crappy AI is more likely to be our downfall, argues researcher.
[...] It's not that computer scientists haven't argued against AI hype, but an academic you've never heard of (all of them?) pitching the headline "AI is hard" is at a disadvantage against the famous person whose job description largely centers on making big public pronouncements. This month that academic is Alan Bundy, a professor of automated reasoning at the University of Edinburgh in Scotland, who argues in the Communications of the ACM that there is a real AI threat, but it's not human-like machine intelligence gone amok. Quite the opposite: the danger is shitty AI. Incompetent, bumbling machines.
Bundy notes that almost all of our big-deal AI successes in recent years are extremely narrow in scope. We have machines that can play Jeopardy and Go—at tremendous cost in both cases—but that's nothing like general intelligence.
https://motherboard.vice.com/en_us/article/the-real-threat-is-machine-incompetence-not-intelligence
An interesting take on the AI question. What do Soylentils think of this scenario?
(Score: 2) by sgleysti on Tuesday February 07 2017, @07:21PM
Simulating human consciousness with a computer would be an interesting philosophical and computational exercise and could be instrumental in accelerating the progress of psychology.
Personally, I think the ultimate goal of AI should be to transcend human intelligence, and that a sufficiently advanced AI would be far more effective than humans at government or corporate management. Of course, I doubt that humans would ever collectively agree on the objectives that such a system would be constructed to have, much less submit to following its dictates.
(Score: 1, Interesting) by Anonymous Coward on Tuesday February 07 2017, @07:45PM
You brought to my mind this short story [nature.com].