
posted by martyb on Sunday December 15 2019, @01:32PM
from the I'll-think-about-it dept.

A sobering message about the future at AI's biggest party

Blaise Aguera y Arcas praised the revolutionary technique known as deep learning that has seen teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that improve at a specific task through experience or by seeing labeled examples of correct answers.

"We're kind of like the dog who caught the car," Aguera y Arcas said. Deep learning has rapidly knocked down some longstanding challenges in AI—but doesn't immediately seem well suited to many that remain. Problems that involve reasoning or social intelligence, such as weighing up a potential hire in the way a human would, are still out of reach, he said. "All of the models that we have learned how to train are about passing a test or winning a game with a score [but] so many things that intelligences do aren't covered by that rubric at all," he said.


Original Submission

 
  • (Score: 0, Insightful) by Anonymous Coward on Sunday December 15 2019, @03:21PM (8 children)

    by Anonymous Coward on Sunday December 15 2019, @03:21PM (#932379)

    It's not a performance problem. It's a domain problem. I think the analogy of a dog catching a car is really quite perfect, and I'll be borrowing it in the future. The dog doesn't need to keep going faster or further; it needs to do something entirely different. It can already catch the car, and more speed just lets it catch the car sooner - nothing new happens. Take self-driving, for instance. It's not going to happen - not as a 'pure' autonomous neural-network-driven system - even if you give us effectively infinite, instantaneous processing power, because the way we're processing the problem just doesn't work.

    Neural networks fundamentally come down to correlations. And they can derive some remarkable correlations that, even in hindsight, are invisible to humans. For instance, they're quite good at market prediction. They don't do a perfect job, but they can do better than just about any human given the same data. The problem is that when you need something more than correlations, the networks become less and less useful. So for self-driving vehicles: you can do a phenomenal job on the 99.9% of scenarios that correlate with the training data, but it's the 0.1% of scenarios - invariably novel in some unique way - where the self-driving just collapses.
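    To make the correlation point concrete, here's a minimal sketch (assuming scikit-learn is available; the data and numbers are toys): a classifier trained on two clean clusters will answer with near-certainty on an input unlike anything it has ever seen, rather than anything resembling "I don't know."

```python
# Sketch: a correlation-based model stays confidently wrong on inputs
# unlike anything it trained on (assumes scikit-learn is installed).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Two well-separated training clusters: the scenarios we have data for.
X = np.vstack([rng.normal(-2, 0.5, (500, 2)),
               rng.normal(+2, 0.5, (500, 2))])
y = np.array([0] * 500 + [1] * 500)

clf = LogisticRegression().fit(X, y)

# A point far outside the training distribution -- the "0.1%" case.
novel = np.array([[40.0, -30.0]])
print(clf.predict_proba(novel))  # near-certain in one class, not "I don't know"
```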

    And the problem is that the points where neural networks break down are not the same sort of spots where a human breaks down. In other words, it's not just some really complex and rare driving task. It's, instead, some obscure combination of variables, mostly invisible to humans - even in hindsight - the same way the network's successes are. And so what to a human is just an obvious little concrete divider in the middle of the road is where, somehow, the network decides it's a good time to plow into it at full speed. This is why all self-driving is gradually transitioning towards whitelisted routes supported by extensive hardcoded rules - all alongside emergency override detectors in the form of lidar/radar.
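    A minimal sketch of that override pattern (all names and numbers are hypothetical): the learned planner proposes a command, and a hard-coded physics check backed by lidar can veto it.

```python
# Sketch of the "hardcoded rules + lidar override" pattern (all names
# hypothetical): the neural planner proposes, simple physics disposes.

BRAKING_DECEL = 6.0  # m/s^2, assumed hard-braking capability

def safe_command(network_cmd, speed_mps, lidar_min_range_m):
    """Veto the neural planner whenever lidar sees an obstacle closer
    than the distance needed to stop at the current speed."""
    stopping_distance = speed_mps ** 2 / (2 * BRAKING_DECEL)
    if lidar_min_range_m < stopping_distance * 1.5:  # safety margin
        return {"throttle": 0.0, "brake": 1.0}       # hardcoded rule wins
    return network_cmd                               # trust the network

# e.g. at 20 m/s an obstacle 30 m ahead triggers the override:
print(safe_command({"throttle": 0.3, "brake": 0.0}, 20.0, 30.0))
```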

    And that's just driving. Driving is trivial, a field where a neural-network-based system ought ostensibly to excel. But they don't, not at all. And it's not for lack of power or anything like that.

    ---

    So if there are ever autonomous military systems, they're probably going to rely on hard-coded friend-or-foe recognition, because otherwise you're 100% going to get some weird scenario where they decide to just start mowing down allies, because neural networks gonna neural network. And, as an aside, Silicon Valley has a big interest in playing up AI stuff. I think there's zero doubt at this point that the government is contracting out highly classified military AI research to various private companies. The black budget for the DoD alone is tens of billions of dollars; those are going to be some really juicy contracts. In the end I think we'll probably see little more than automated turrets, manually enabled during skirmishes, alongside friendlies carrying some sort of ID or beacon for AI-free recognition.

  • (Score: 4, Informative) by takyon on Sunday December 15 2019, @04:46PM (2 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Sunday December 15 2019, @04:46PM (#932411) Journal

    Driverless technology already works well at today's performance levels. It just needs to be approved by regulators, deployed, and monetized. Driverless cars already have a great way to handle the edge cases: slow down. Less velocity = fewer deaths. Even if being extra careful lengthens trip times a bit, it doesn't matter, since the would-be driver's time is freed up and the vehicle can operate up to 24/7.
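    The physics backs this up: kinetic energy and stopping distance both scale with the square of speed, so a modest slowdown buys an outsized safety margin. A quick illustration (assuming a 6 m/s² braking deceleration):

```python
# Why "slow down" buys so much safety: stopping distance and kinetic
# energy both scale with speed squared (braking decel assumed 6 m/s^2).
for kmh in (50, 30):
    v = kmh / 3.6                   # convert to m/s
    d = v ** 2 / (2 * 6.0)          # stopping distance in metres
    print(f"{kmh} km/h: stop in {d:.1f} m, "
          f"energy x{(kmh / 30) ** 2:.1f} relative to 30 km/h")
```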

    So some concrete dividers and pedestrians have gotten hit. Who's behind that? Tesla and Uber - not exactly the shining examples of driverless tech.

    Adding transmitters or special markings on certain roads and highways could help, but it's not strictly necessary.

    Facial recognition and other technologies are already working great. Everything needed to enable an unprecedented surveillance state is ready to go.

    Deepfakes and other graphics-related machine learning techniques [youtube.com] are making great strides with algorithmic improvements alone.

    Take everything that works well right now, increase performance per dollar by 10x, 100x, 1000x, etc. and see what happens.

    Neural networks are broadly defined [wikipedia.org] and will be replaced or supplemented by other approaches in the future.

    Silicon Valley is afraid of public perception, privacy regulations, etc. They are being cagey about the future of AI and parroting optimistic views of its effect on jobs - and it would be stupid of them to say otherwise. Autonomous weapons systems aren't needed at this point, and a lot could be done with systems like the Turkish semi-autonomous drone:

    Asisguard claims Songar has an accuracy that corresponds to hitting a 15-centimetre area from 200 metres. That is accurate enough for every bullet to hit a human-sized target at that range. A human drone pilot picks the target by putting cross hairs on it using a screen on a remote control.

    Asisguard says improvements to Songar’s accuracy mean it will soon be able to hit targets from more than 400 metres away.

    Songar has night sensors for operating in darkness and has a range of 10 kilometres. It may also operate in groups. Ayhan Sunar at Asisguard says a swarm of three Songar can be flown using a single remote control, with all three firing at a target simultaneously.
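    A quick sanity check on those quoted figures: 15 cm at 200 m works out to roughly 0.75 milliradians, and the same angular spread at 400 m would be about 30 cm - so hitting targets at 400 m implies tightening the angular accuracy, not just flying further.

```python
# Sanity-checking the quoted accuracy: 15 cm at 200 m as an angle,
# then that same angular spread projected out to 400 m.
spread_m, range_m = 0.15, 200.0
angle_mrad = spread_m / range_m * 1000           # ~0.75 milliradians
print(f"{angle_mrad:.2f} mrad")
print(f"same angle at 400 m: {angle_mrad * 400 / 1000:.2f} m")  # ~0.30 m
```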

    These could be deployed faster than police/soldiers. Just have them land on the rooftops near protests, have less conspicuous drones monitor the situation, and wait for massacre orders.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2, Interesting) by Anonymous Coward on Sunday December 15 2019, @06:40PM (1 child)

      by Anonymous Coward on Sunday December 15 2019, @06:40PM (#932440)

      When you're going the wrong way, it doesn't matter how fast you go - you're not going to reach your destination. That's the problem we face today. For problems with nice clean domains we can, with varying degrees of effort and ingenuity, build systems that are generally headed in the right direction. And for those systems more power will generally give you a better answer, though for most problems the returns diminish toward zero pretty fast.

      The ideal was that these problems would, over time, be generalized into bigger, more complex problems. You beat Atari games, then you beat Nintendo games, and the next thing you know you have an AI rolling through Skyrim, and the game of life is ultimately just a matter of more power plus a bit more cleverness. But that ideal was wrong. We're finding ourselves unable to solve some basic problems, and the ones we can solve don't generalize to much of anything except problems that map to a near-identical domain.

      Waymo is specifically what I was alluding to in my previous post. I don't think most people realize what they're doing. They're using whitelisted routes with extensive hand-coded adjustments, heavy dependence on radar/lidar to minimize the damage from network screw-ups, and then a fleet of 'remote support technicians' on top of all of this who remotely take control of the vehicles when everything else fails. You're going to see cars without a visible human driver, but all you're really looking at is a fleet of rail trolleys without visible rails. Really quite handy nonetheless, but not exactly some giant leap forward.
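      A minimal sketch of that tiered fallback chain as described (all names and thresholds are hypothetical): each layer only acts when the one above it gives up.

```python
# Sketch of the tiered fallback chain described above (all names and
# thresholds hypothetical): network -> hardcoded rules -> remote human.

def drive_decision(planner_confidence, on_whitelisted_route, operator_available):
    if on_whitelisted_route and planner_confidence > 0.95:
        return "neural planner drives"
    if on_whitelisted_route:
        return "hand-coded rules take over (crawl / pull over)"
    if operator_available:
        return "remote support technician takes control"
    return "full stop, hazards on"

print(drive_decision(0.80, True, True))  # rules layer, not the network
```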

      None other than the CEO of Waymo has stated, quite confidently, that it will be decades before we see significant numbers of self-driving vehicles on the roads. At the time I thought this was because he was out of touch or simply didn't know what he was talking about. Then I got to work with AI for a couple of years. And yeah, he has one quote that sums it all up: "You don't know what you don't know until you're actually in there and trying to do things." It's just not what it seems like it should be, even if you have a substantial background in AI-relevant technologies.

      I think we'll see within a couple of years, as companies think better of dropping ever more money into the AI hole, that the robot apocalypse has, for now, been postponed. You could throw a billion times more power at everything, and none of these problems would change even slightly. The problem is no longer power. You can still get better solutions with more power in the domains where AI fits, but the really interesting and useful domains are the very ones where it doesn't!

      • (Score: 2) by barbara hudson on Sunday December 15 2019, @07:02PM

        by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Sunday December 15 2019, @07:02PM (#932447) Journal

        it will be decades before we see significant numbers of self driving vehicles on the roads

        I've seen ordinary cars do self-driving. Drivers on their phones, crash. It's a self-correcting problem. Sort of.

        Same as distracted walking.

        It took a couple of generations for people to adapt to cars as a safe and integral part of their lives. Same can probably be said about smartphones.

        --
        SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
  • (Score: 2) by FatPhil on Sunday December 15 2019, @04:55PM (3 children)

    by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Sunday December 15 2019, @04:55PM (#932412) Homepage
    > they're quite excellent at market prediction. They don't do a perfect job, but they can do better than just about any human

    Only because at the moment humans are driving the market manipulation that is behind the fluctuations, and humans are predictable. When the AIs out-guess the humans, the humans will be phased out - hopefully with a spree of iocaine-induced suicides from the wunch of bankers who become redundant and realise that they actually had nothing to contribute to the world after all - and the AIs will suddenly lose their edge. And at that point, JPM will bulldozer in and realise that they now have a massively manipulable market again, because every entity playing in it has worked out that it's no longer manipulable, and so is manipulable. Lather, rinse, repeat. It's an iterative process, but not necessarily convergent.
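    That non-convergence shows up even in toy models. In matching pennies, if each side just best-responds to the other's previous move, play cycles forever and never settles - a sketch, simplified to deterministic best responses:

```python
# Toy model of "iterative but not convergent": matching pennies where
# each side best-responds to the other's previous move. The matcher
# copies the opponent's last move; the mismatcher plays the opposite
# of the matcher's last move. Play cycles with period 4, never settling.
matcher, mismatcher = "H", "H"
for round_no in range(8):
    matcher, mismatcher = mismatcher, ("T" if matcher == "H" else "H")
    print(round_no, matcher, mismatcher)
```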
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 1) by khallow on Sunday December 15 2019, @05:46PM (2 children)

      by khallow (3766) Subscriber Badge on Sunday December 15 2019, @05:46PM (#932422) Journal

      Only because at the moment humans are driving the market manipulation that is behind the fluctuations

      Nah, market manipulators are good at being unpredictable or they stop being market manipulators, one way or another. What you're thinking of is the big whales - banks, funds, insurers, etc. - who do things the same way every time, particularly when they buy large blocks of securities. A program can leech off those guys for years.

      • (Score: 2) by FatPhil on Monday December 16 2019, @01:14AM (1 child)

        by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Monday December 16 2019, @01:14AM (#932587) Homepage
        > Nah

        Very strange way of spelling "Yeah". You've basically repeated what I said.
        --
        Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
        • (Score: 1) by khallow on Monday December 16 2019, @01:38AM

          by khallow (3766) Subscriber Badge on Monday December 16 2019, @01:38AM (#932606) Journal
          If AI were that good, human market manipulators would already be gone. Stock markets are really ruthless at culling such things. However, I don't buy that humans are that bad and AI that good.
  • (Score: 2) by Muad'Dave on Monday December 16 2019, @01:36PM

    by Muad'Dave (1413) on Monday December 16 2019, @01:36PM (#932830)

    I'll give you a domain - devise a system that can monitor my bird feeder camera and identify never-before-seen bird species.
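    For what it's worth, the textbook answer to that challenge is open-set recognition: embed each image with a pretrained feature extractor, compare it to a prototype per known species, and flag embeddings far from all of them as candidate novelties. A sketch, where embed() and the threshold are hypothetical stand-ins:

```python
# One standard (and imperfect) answer to the bird-feeder challenge:
# open-set recognition. embed() is a hypothetical stand-in for any
# pretrained image-feature extractor; prototypes are per-species means.
import numpy as np

def flag_novel(image, prototypes, embed, threshold=0.8):
    """Return (species_or_None, distance). Embeddings far from every
    known species prototype are flagged as possibly never-before-seen."""
    z = embed(image)
    names = list(prototypes)
    dists = [np.linalg.norm(z - prototypes[n]) for n in names]
    i = int(np.argmin(dists))
    if dists[i] > threshold:
        return None, dists[i]   # candidate new species -> human review
    return names[i], dists[i]

# Toy demo with a fake embedder (identity on 2-D "images"):
protos = {"cardinal": np.array([1.0, 0.0]), "blue jay": np.array([0.0, 1.0])}
print(flag_novel(np.array([5.0, 5.0]), protos, lambda x: x))  # (None, ...)
```

    The catch, per the thread: that threshold is exactly the kind of hand-tuned rule being complained about, and a known bird in a weird pose will look "novel" too.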