Forget super-AI. Crappy AI is more likely to be our downfall, argues researcher.
[...] It's not that computer scientists haven't argued against AI hype, but an academic you've never heard of (all of them?) pitching the headline "AI is hard" is at a disadvantage to the famous person whose job description largely centers on making big public pronouncements. This month that academic is Alan Bundy, a professor of automated reasoning at the University of Edinburgh in Scotland, who argues in the Communications of the ACM that there is a real AI threat, but it's not human-like machine intelligence gone amok. Quite the opposite: the danger is shitty AI. Incompetent, bumbling machines.
Bundy notes that almost all of our big-deal AI successes in recent years are extremely narrow in scope. We have machines that can play Jeopardy and Go—at tremendous cost in both cases—but that's nothing like general intelligence.
https://motherboard.vice.com/en_us/article/the-real-threat-is-machine-incompetence-not-intelligence
An interesting take on the AI question. What do Soylentils think of this scenario?
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @06:00PM
You accept an AI cannot be malicious.
On what evidence or theory do you base that axiom?
(Score: 0) by Anonymous Coward on Tuesday February 07 2017, @06:47PM
Because we currently don't have actual AI.
(Score: 1) by moondoctor on Tuesday February 07 2017, @10:15PM
Malice is an emotion, which would require a full-on thinking 'mind', and building one of those seems a long way off, if it ever happens.
(Score: 1) by Demena on Wednesday February 08 2017, @05:32AM
Because being malicious is not intelligent. It leaves only negative-sum games for you to play, and you cannot win every time. So you wind up with less than what was possible. That is unintelligent. And the basis of capitalism.