
posted on Tuesday February 07 2017, @05:18PM
from the this-is-the-way-the-world-ends-not-with-a-bang-but-a-goto dept.

Forget super-AI. Crappy AI is more likely to be our downfall, argues researcher.

[...] It's not that computer scientists haven't argued against AI hype, but an academic you've never heard of (all of them?) pitching the headline "AI is hard" is at a disadvantage to the famous person whose job description largely centers around making big public pronouncements. This month that academic is Alan Bundy, a professor of automated reasoning at the University of Edinburgh in Scotland, who argues in the Communications of the ACM that there is a real AI threat, but it's not human-like machine intelligence gone amok. Quite the opposite: the danger is instead shitty AI. Incompetent, bumbling machines.

Bundy notes that almost all of our big-deal AI successes in recent years are extremely narrow in scope. We have machines that can play Jeopardy and Go - at tremendous cost in both cases - but that's nothing like general intelligence.

https://motherboard.vice.com/en_us/article/the-real-threat-is-machine-incompetence-not-intelligence

An interesting take on the AI question. What do Soylentils think of this scenario?


Original Submission

 
  • (Score: 3, Insightful) by choose another one (515) on Tuesday February 07 2017, @09:55PM (#464318)

    Dumb machines are going to be the problem - TFA doesn't quite put it that way, but I think that is the biggest threat.

    The biggest issues with self-driving cars, for instance, are likely (IMO) to be the same issues we get with dumb drivers - not the complex moral decisions about which kid to kill if it all goes pear-shaped. It'll be things like following the white lines (into a hole) instead of the cones at roadworks; or ignoring the hand-made "bridge out" sign (put up because it is) since it isn't recognised as an official sign; or simply driving into rivers because the sat-nav DB says there's a road there. The thing is, it won't be just a few drivers on each road like it is now - it'll be all of them, one after another.
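
    To make that concrete, here's a minimal sketch (all names and priorities hypothetical, not any real autopilot stack) of how ranking "official" data sources above ad-hoc roadside evidence produces exactly this failure:

        # Hypothetical illustration: a naive planner that always trusts
        # "official" sources over improvised roadside evidence.
        OFFICIAL_SOURCES = {"satnav_db", "painted_lines"}

        def choose_path(observations):
            """Follow the highest-priority observation; ignore the rest."""
            ranked = sorted(
                observations,
                key=lambda obs: obs["source"] in OFFICIAL_SOURCES,
                reverse=True,  # "official" sources always sort first
            )
            return ranked[0]["instruction"]

        observations = [
            {"source": "satnav_db", "instruction": "continue straight (road exists)"},
            {"source": "handmade_sign", "instruction": "BRIDGE OUT - stop"},
            {"source": "traffic_cones", "instruction": "merge left around roadworks"},
        ]

        # The hand-made warning and the cones lose to the database:
        print(choose_path(observations))  # -> continue straight (road exists)

    And every car running the same planner makes the same wrong choice, which is the "all of them, one after another" part.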

    The biggest threat, though, is from dumb-but-autonomous weaponised machines. Real AI isn't needed: the terminator doesn't need to hold a conversation, and in fact that is "better" - the enemy can't reason with it. AI might get to the point where it is as intelligent as an ant, and robotics may reach the mobility and lethality of a large dog. Put those two together in large enough numbers (or with self-replication), set target=human, and we are f***ed. To my mind, though, it's a toss-up whether we get there through machines or through a virus/nanotech - in fact, if we create an artificial virus, is it a self-replicating nano-machine or is it alive? Does the distinction even matter?

    Dumb is also not what people are watching for - we are busy planning for a mythical enemy smarter than ourselves, and could end up with the castle falling to a simple full-frontal assault by legions of cannon fodder. The zombies don't need the intelligence to work out how to fly over the wall: simple goal-seek behaviour plus a big enough pile of them, and they just climb over it.
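
    For what it's worth, "simple goal-seek behaviour" really is only a few lines. A toy sketch (all numbers and names hypothetical): each agent advances toward the target, and when the wall blocks it, it just becomes part of the pile until the pile is tall enough to walk over:

        # Toy model of dumb goal-seeking en masse: no pathfinding, no
        # cooperation, just "advance; if blocked, pile up".
        WALL_HEIGHT = 3    # agents stacked this high make the wall passable
        N_AGENTS = 10

        pile = 0   # agents stuck against the wall
        over = 0   # agents that made it past

        for _ in range(N_AGENTS):
            if pile >= WALL_HEIGHT:
                over += 1      # walk over the pile of bodies
            else:
                pile += 1      # join the pile

        print(f"in the pile: {pile}, over the wall: {over}")
        # -> in the pile: 3, over the wall: 7

    No individual agent is smart enough to cross the wall; the crossing is purely an effect of numbers.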

    Starting Score: 1 point
    Moderation: +1 (Insightful=1, Total=1)
    Extra 'Insightful' Modifier: 0
    Karma-Bonus Modifier: +1

    Total Score: 3