
posted by martyb on Sunday December 15 2019, @01:32PM   Printer-friendly
from the I'll-think-about-it dept.

A sobering message about the future at AI's biggest party

Blaise Aguera y Arcas praised the revolutionary technique known as deep learning that has seen teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that can get better at a specific task through experience or by seeing labeled examples of correct answers.

"We're kind of like the dog who caught the car," Aguera y Arcas said. Deep learning has rapidly knocked down some longstanding challenges in AI—but doesn't immediately seem well suited to many that remain. Problems that involve reasoning or social intelligence, such as weighing up a potential hire in the way a human would, are still out of reach, he said. "All of the models that we have learned how to train are about passing a test or winning a game with a score [but] so many things that intelligences do aren't covered by that rubric at all," he said.


Original Submission

 
  • (Score: 3, Informative) by takyon on Sunday December 15 2019, @02:25PM (17 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday December 15 2019, @02:25PM (#932366) Journal

    We're going to get several orders of magnitude more performance for dumb/tensor computing, something that is not widely acknowledged yet ("omagerd Moore Slaw is soooo dead!"). There may be an "AI cold front" in the meantime, but already decent capabilities are going to get much more powerful and very cheap, before even considering algorithmic refinements. Meanwhile, 3D neuromorphic chips and other designs will be researched to replace "machine learning" with "strong AI" for applications where dumb inference doesn't make the cut.

    Fuzzy timelines make sense. You could have great technology but have it be delayed for years by regulators or insurance companies.

    It's OK for academics to be skeptical or argue about timelines, but I wouldn't trust any of the Silicon Valley giants downplaying AI expectations. They want to build up the technology as much as they can so that it becomes unstoppable. Many jobs will be made obsolete with no fallback options for the unemployed. No amount of rocks thrown at buses will help.

    Jeff Bezos on AI: Autonomous weapons are ‘genuinely scary,’ robots won’t put us all out of work [cnbc.com]

    He's helping to pull much more wool over the world's eyes than he did with his wife, heehehheh. And:

    Jeff Bezos says employee activists are wrong and Silicon Valley firms should feel comfortable doing business with the US military [businessinsider.com]

    Side note: Good luck stopping genuinely scary autonomous weapons. Semi-autonomous could be almost as scary [newscientist.com].

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0, Insightful) by Anonymous Coward on Sunday December 15 2019, @03:21PM (8 children)

    by Anonymous Coward on Sunday December 15 2019, @03:21PM (#932379)

    It's not a performance problem. It's a domain problem. I think the analogy of a dog catching a car is really quite perfect, and I'll be borrowing it in the future. The dog doesn't need to just keep going faster or further; it needs to do something entirely different. It's already reached the point where it can catch the car - going faster just gets it there sooner, which is nothing new. Take self-driving, for instance. It's not going to happen - not as a 'pure' autonomous neural-network-driven system - even if you give us effectively infinite, instantaneous processing power, because the kind of processing we're doing just doesn't fit the problem.

    Neural networks fundamentally come down to correlations. And they can derive some remarkable correlations that, even in hindsight, are invisible to humans. For instance, they're quite excellent at market prediction. They don't do a perfect job, but they can do better than just about any human for the data they are given. The problem is that when you need something more than correlations, the networks start to become less and less useful. So for self-driving vehicles: you can do an absolutely phenomenal job on the 99.9% of scenarios that are correlatable, but it's the 0.1% of scenarios - invariably novel in some unique way - where the self-driving just collapses.
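
    A minimal sketch of that point (illustrative only: Python/numpy, a toy one-hidden-layer network, and a made-up target function, none of it from any real system). The network nails inputs that look like its training data and falls apart on a novel input just outside that range:

        import numpy as np

        rng = np.random.default_rng(0)

        # Training data: the "99.9% of correlatable scenarios" -- x in [0, 2*pi].
        x_train = rng.uniform(0.0, 2 * np.pi, size=(256, 1))
        y_train = np.sin(x_train)

        # A toy one-hidden-layer tanh network, trained by plain gradient descent.
        W1 = rng.normal(0.0, 0.5, size=(1, 32)); b1 = np.zeros((1, 32))
        W2 = rng.normal(0.0, 0.5, size=(32, 1)); b2 = np.zeros((1, 1))

        lr = 0.05
        for _ in range(20_000):
            h = np.tanh(x_train @ W1 + b1)         # forward pass
            pred = h @ W2 + b2
            err = (pred - y_train) / len(x_train)  # gradient of (MSE / 2)
            dW2 = h.T @ err                        # backpropagation
            db2 = err.sum(axis=0, keepdims=True)
            dh = (err @ W2.T) * (1.0 - h ** 2)
            dW1 = x_train.T @ dh
            db1 = dh.sum(axis=0, keepdims=True)
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2

        def predict(x):
            return np.tanh(x @ W1 + b1) @ W2 + b2

        # In-distribution, the learned correlations hold; just outside, they don't.
        for label, x in [("familiar", np.pi / 4), ("novel", 2.5 * np.pi)]:
            p = predict(np.array([[x]]))[0, 0]
            print(f"{label}: predicted {p:+.2f}, actual {np.sin(x):+.2f}")

    Nothing in the training signal constrains the network's behaviour outside the correlations it has seen - which is the 0.1% problem in miniature.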

    And the problem is that the points where neural networks break down are not the same sorts of spots where a human breaks down. In other words, it's not just some really complex and rare driving task. It's, instead, some obscure combination of variables, mostly invisible to humans - even in hindsight - the same way the network's successes are. And so what seems to a human like just an obvious little concrete divider in the middle of the road is where, somehow, the network decides it's a good time to plow into it at full speed. This is why all self-driving is gradually transitioning towards whitelisted routes supported by extensive hardcoded rules - all alongside emergency override detectors in the form of lidar/radar.

    And that's for driving. Driving is trivial - a field where a neural-network-based system ought, ostensibly, to excel. But they don't, not at all. And it's not for lack of power or anything like that.

    ---

    So if there are ever autonomous military systems, they're probably going to rely on hard-coded friend-or-foe recognition, because otherwise you're 100% going to get some weird scenario where they decide to just start mowing down allies, because neural networks gonna neural network. And, as an aside, Silicon Valley has a bigger interest in playing up AI stuff. I think there's zero doubt at this point that the government is contracting out highly classified military AI research to various companies. The black budget for the DoD alone is tens of billions of dollars. Those are going to be some really juicy contracts. In the end I think we'll probably see little more than automated turrets, manually enabled during skirmishes, alongside friendlies carrying some sort of ID or beacon for AI-free recognition.

    • (Score: 4, Informative) by takyon on Sunday December 15 2019, @04:46PM (2 children)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday December 15 2019, @04:46PM (#932411) Journal

      Driverless technology is already working well with today's performance. It just needs to be approved by regulators, deployed, and monetized. Driverless cars already have a great way to handle the edge cases: slow down. Less velocity = fewer deaths. Even if being extra careful lengthens trip times a bit, it doesn't matter, since the would-be driver's time is freed up and the vehicle can operate up to 24/7.
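
      A rough illustration of why slowing down buys so much margin (a back-of-envelope sketch in Python; the friction coefficient is an assumed round number, and the square-law scaling is the point, not the constants):

          # Idealized braking distance: v**2 / (2 * mu * g).
          # The v-squared term is why modest slowdowns cut risk so sharply.
          MU, G = 0.7, 9.81  # assumed tire friction, gravity

          for v_kmh in (50, 40, 30):
              v = v_kmh / 3.6                  # km/h -> m/s
              brake_m = v ** 2 / (2 * MU * G)  # meters to stop, ideal conditions
              print(f"{v_kmh} km/h -> ~{brake_m:.1f} m to stop")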

      So some concrete dividers or pedestrians have gotten hit. Who's behind that? Tesla and Uber, not exactly the shining examples of driverless.

      Adding transmitters or special markings on certain roads and highways could help, but it's not strictly necessary.

      Facial recognition and other technologies are already working great. Everything needed to enable an unprecedented surveillance state is ready to go.

      Deepfakes and other graphics related machine learning techniques [youtube.com] are making great strides with algorithmic improvements alone.

      Take everything that works well right now, increase performance per dollar by 10x, 100x, 1000x, etc. and see what happens.

      Neural networks are broadly defined [wikipedia.org] and will be replaced or supplemented by other approaches in the future.

      Silicon Valley is afraid of public perception, privacy regulations, etc. They are being more cagey about the future of AI and parroting optimistic views of the effect on jobs, and it would be stupid for them to say otherwise. Autonomous weapons systems aren't needed at this point and a lot could be done with systems like the Turkish semi-autonomous drone:

      Asisguard claims Songar has an accuracy that corresponds to hitting a 15-centimetre area from 200 metres. That is accurate enough for every bullet to hit a human-sized target at that range. A human drone pilot picks the target by putting cross hairs on it using a screen on a remote control.

      Asisguard says improvements to Songar’s accuracy mean it will soon be able to hit targets from more than 400 metres away.

      Songar has night sensors for operating in darkness and has a range of 10 kilometres. It may also operate in groups. Ayhan Sunar at Asisguard says a swarm of three Songar can be flown using a single remote control, with all three firing at a target simultaneously.
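
      For scale, the quoted figures work out to a fixed angular error (a back-of-envelope sketch; the 15 cm / 200 m numbers come from the quote above, the rest is small-angle arithmetic):

          # A 15 cm group at 200 m, as an angular error (small-angle approximation).
          group_m, range_m = 0.15, 200.0
          mrad = group_m / range_m * 1000
          print(f"~{mrad:.2f} mrad")                             # ~0.75 mrad
          print(f"spread at 400 m: ~{mrad / 1000 * 400:.2f} m")  # ~0.30 m

      So doubling the range doubles the spread to roughly torso width, consistent with the 400-metre claim.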

      These could be deployed faster than police/soldiers. Just have them land on the rooftops near protests, have less conspicuous drones monitor the situation, and wait for massacre orders.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2, Interesting) by Anonymous Coward on Sunday December 15 2019, @06:40PM (1 child)

        by Anonymous Coward on Sunday December 15 2019, @06:40PM (#932440)

        When you're going the wrong way, it doesn't matter how fast you go - you're not going to get to your destination. That's the problem we face today. For problems with nice clean domains we can, with varying degrees of effort and ingenuity, build systems that are generally headed in the right direction. And for these systems more power will generally give you a better answer, though for most problems the returns start asymptoting towards zero pretty fast.

        The ideal was that these problems would, over time, generalize into bigger, more complex problems. Beat Atari games, then beat Nintendo games, and next thing you know you have an AI rolling through Skyrim, and the game of life is ultimately just a matter of more power with a bit more cleverness. But that ideal was wrong. We're finding ourselves unable to solve some basic problems, and the ones we can solve don't generalize to much of anything except problems that can be mapped to a near-identical domain.

        Waymo is specifically what I was alluding to in my previous post. I don't think most people realize what they're doing. They're using whitelisted routes with extensive hand-coded adjustments, heavy dependence on radar/lidar to limit the damage from network screw-ups, and then, on top of all of this, a fleet of 'remote support technicians' who take control of a vehicle when everything else fails. You're going to see cars without a visible human driver, but all you're really looking at is a fleet of rail trolleys without visible rails. Really quite handy nonetheless, but not exactly some giant leap forward.

        None other than the CEO of Waymo has stated, quite confidently, that it will be decades before we see significant numbers of self-driving vehicles on the roads. At the time I thought this was because he was out of touch or simply didn't know what he was talking about. Then I got to work with AI for a couple of years. And yeah, he has one quote that sums it all up: "You don't know what you don't know until you're actually in there and trying to do things." It's just not what it seems like it should be, even if you have a substantial background in AI-relevant technologies.

        I think we'll see within a couple of years, as companies think better of dropping ever more money into the AI hole, that the robot apocalypse has, for now, been postponed. You could throw a billion times more power at everything, and none of these problems would change even slightly. The problem is no longer power. You can still get better solutions with more power in the domains where AI fits, but the really interesting and useful domains are the very ones where it doesn't!

        • (Score: 2) by barbara hudson on Sunday December 15 2019, @07:02PM

          by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Sunday December 15 2019, @07:02PM (#932447) Journal

          it will be decades before we see significant numbers of self driving vehicles on the roads

          I've seen ordinary cars do self-driving. Drivers on their phones, crash. It's a self-correcting problem. Sort of.

          Same as distracted walking.

          It took a couple of generations for people to adapt to cars as a safe and integral part of their lives. Same can probably be said about smartphones.

          --
          SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
    • (Score: 2) by FatPhil on Sunday December 15 2019, @04:55PM (3 children)

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Sunday December 15 2019, @04:55PM (#932412) Homepage
      > they're quite excellent at market prediction. They don't do a perfect job, but they can do better than just about any human

      Only because at the moment humans are driving the market manipulation that is behind the fluctuations, and humans are predictable. When the AIs out-guess the humans, the humans will be phased out - hopefully with a spree of iocaine-induced suicides from the wunch of bankers who become redundant and realise that they actually had nothing to contribute to the world after all - and the AIs will suddenly lose their edge. And at that point, JPM will bulldozer in and realise that they have a massively manipulable market again, because every entity playing in it has worked out that it's no longer manipulable, and so they are manipulable. Lather, rinse, repeat. It's an iterative process, but not necessarily a convergent one.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 1) by khallow on Sunday December 15 2019, @05:46PM (2 children)

        by khallow (3766) Subscriber Badge on Sunday December 15 2019, @05:46PM (#932422) Journal

        Only because at the moment humans are driving the market manipulation that is behind the fluctuations

        Nah, market manipulators are good at being unpredictable, or they stop being market manipulators one way or another. What you're thinking of is the big whales - banks, funds, insurers, etc. - who do things the same way every time, particularly when they buy large blocks of securities. A program can leech off those guys for years.

        • (Score: 2) by FatPhil on Monday December 16 2019, @01:14AM (1 child)

          by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Monday December 16 2019, @01:14AM (#932587) Homepage
          > Nah

          Very strange way of spelling "Yeah". You've basically repeated what I said.
          --
          Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
          • (Score: 1) by khallow on Monday December 16 2019, @01:38AM

            by khallow (3766) Subscriber Badge on Monday December 16 2019, @01:38AM (#932606) Journal
            If AI were that good, human market manipulators would already be gone. Stock markets are really ruthless at culling such things. However, I don't buy that humans are that bad and AI that good.
    • (Score: 2) by Muad'Dave on Monday December 16 2019, @01:36PM

      by Muad'Dave (1413) on Monday December 16 2019, @01:36PM (#932830)

      I'll give you a domain - devise a system that can monitor my bird feeder camera and identify never-before-seen bird species.

  • (Score: 2) by FatPhil on Sunday December 15 2019, @03:42PM

    by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Sunday December 15 2019, @03:42PM (#932382) Homepage
    The AI cold front was 1994-2014. Five orders of magnitude of CPU (DSP*) tech advances, and an itsy-bitsy single order of magnitude of bogorithmic advances, only cohered into real-world advances in what could be done in the last 5 years. Now that that watershed is passed, it's full steam ahead - don't expect any downtime from now on.

    [* Your "GPU"s are just the DSPs of yore, massive muladd machines]
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2) by JoeMerchant on Sunday December 15 2019, @03:58PM (3 children)

    by JoeMerchant (3937) on Sunday December 15 2019, @03:58PM (#932385)

    machines aren't going to take over solving all of humanity's problems

    Not so fast there...

    All of the models that we have learned how to train are about passing a test or winning a game with a score [but] so many things that intelligences do aren't covered by that rubric at all

    A trend I see continuing into the future is the increasing application of the tools we have: winning a game with a score, passing a test - anything that the machines can do (and that can therefore be done at orders of magnitude lower cost) will be done in preference to the things that "intelligences" have traditionally done - essentially dehumanizing the world in the process.

    Think of it like Ford vs. Ferrari. Ferrari hand-built engines and cars - one man built each entire engine, and so on. Ferrari made mostly racing machines and sold a few road cars to keep the racing business going. Ferrari used skill and intelligence to build the finest racing cars on the planet, for a while. Ford ran a big ugly factory with dehumanized assembly lines that took intelligence and skill out of the production process. Ford made ugly cars, but made them cost-efficiently and sold them to millions. Ford's assembly lines would never mass-produce race cars "better" than Ferrari's, but... the excess profits generated by Ford's assembly-line business enabled them to steamroller Ferrari on the world racing stage (using intelligence plus virtually unlimited resources) in a matter of just a few years.

    Ferrari continues as a quaint little boutique manufacturer of expensive, romantic, pretty toys that only a few can afford; the rest of us get mass-produced cars. Today's mass-produced performance cars run rings around the Ferraris of the 1960s - though so many factors play into that advance that it is not a clear-cut case of automation vs. hand production. What is clear is the economics: most people will end up living with the lowest-cost option, while the few who profit from the decreased costs of production continue to be able to have whatever they want, done however they want, whenever and wherever they want.

    robots won’t put us all out of work

    Show my great grandfather what I and the other 100,000 employees of my company do for "work" and you'd get a derisive snort: what we're doing isn't what he knew as work. What he knew as work was farming, basic construction, hunting, and things that involved hard labor and time in the elements, that kind of "work" is rather rare today, and I'm good with that. My great grandfathers, all four of them, were dead before 50, two of them dead by 30 of heart attacks.

    --
    🌻🌻 [google.com]
    • (Score: 2) by FatPhil on Sunday December 15 2019, @05:04PM (2 children)

      by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Sunday December 15 2019, @05:04PM (#932414) Homepage
      > the excess profits generated by Ford's assembly line business enabled them to steam-roller Ferrari on the world racing stage (using intelligence+virtually unlimited resources) in a matter of just a few years.

      That has happened one time in the last 60 years. I'm guessing you've just seen the publicity for the recent movie, which is being pushed as some "USA! USA!" chest-thumping exercise.

      But those and other "Ford" things weren't even made by Ford at all. Most of it was bought-in UK tech, such as Lola and Cosworth.
      --
      Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
      • (Score: 2) by JoeMerchant on Monday December 16 2019, @01:59AM (1 child)

        by JoeMerchant (3937) on Monday December 16 2019, @01:59AM (#932627)

        One time in the last 60 years that has happened, I'm guessing you've just seen the publicity for the recent movie, and that's being pushed as some "USA! USA!" chest-thumping exercise.

        No, the real "USA! USA!" chest-thumping exercise is Apollo, from the same era - there again, cost no object, we outspent the Russians, came from behind, and beat them to the prize: "We came in peace for all mankind." Truer words were never spoken: we demonstrated conclusively that MAD truly assured mutual destruction, thereby averting a WW-III which undoubtedly would have been terrible for all mankind.

        IMO, we (the USA) have been on greased skids to the cesspit ever since.

        you've just seen the publicity for the recent movie

        No, actually, I've seen both the movie and the more informative documentary on Netflix, and yes, credit where credit is due: everything America is, particularly the imperialistic, native-abusing, expansionist parts, we owe to dear old mother England. A fun point the documentary made much more clearly than the movie was that Ford had already destroyed Ferrari once before, with the B-24s they mass-produced in WW-II, which were used to bomb Modena - including the Ferrari factories - into oblivion.

        As for what made the GT40s lethal at Le Mans: the chassis and other know-how may have largely been sourced from England - all is fair in war and racing - but the real unbeatable ingredient was the god-awful, overpowered (too heavy for their own good) seven-liter powerplants. England made some fine aircraft engines in the war, but I don't think they had the balls to use such things for road racing, at least not in the 1960s.

        --
        🌻🌻 [google.com]
        • (Score: 2) by FatPhil on Monday December 16 2019, @02:57AM

          by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Monday December 16 2019, @02:57AM (#932659) Homepage
          And credit where credit is due - the "UK" know-how and chops at that time were significantly propped up by kiwis.

          Hell, I still cheer for a team with a New Zealander's name.
          --
          Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 2) by NotSanguine on Sunday December 15 2019, @09:25PM (2 children)

    by NotSanguine (285) <NotSanguineNO@SPAMSoylentNews.Org> on Sunday December 15 2019, @09:25PM (#932480) Homepage Journal

    Meanwhile, 3D neuromorphic chips and other designs will be researched to replace "machine learning" with "strong AI" for applications where dumb inference doesn't make the cut.

    "Strong AI" is not currently a thing, nor will it be for quite some time.

    Gary Marcus (co-author of Rebooting AI: Building Artificial Intelligence We Can Trust [goodreads.com]) discusses this at some length in a talk [c-span.org] he gave this past September.

    Image/pattern recognition systems have improved markedly. However, even those have significant weaknesses. As for AI that can actually *reason*, that requires the ability to *understand*, rather than just correlate and predict. And that sort of capability, barring significant breakthroughs, isn't just far off WRT current "AI," but probably unattainable with current machine learning techniques.

    Presumably, we'll work that through eventually, but not any time soon.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
    • (Score: 3, Interesting) by takyon on Sunday December 15 2019, @10:31PM (1 child)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday December 15 2019, @10:31PM (#932506) Journal

      Not quite. I don't suggest that current machine learning techniques would lead to "strong AI". Instead, I think we'll see some kind of neuromorphic design do it. It will be hardware built to be brain-like, with ultra low power consumption, and possibly with a small amount of memory distributed to each of millions or billions of "neurons".

      We won't need to understand exactly how the brain works to make it work. Just tinker with it until we see results. Maybe this is where machine learning will come in.

      Examples of this approach include IBM's TrueNorth and Intel's Loihi [wikipedia.org].

      What's next is to scale it up into a true 3D design, like the human brain. Not only could that pack billions of "neurons" into a brain-like volume (liters), but dense 3D clusters could communicate more rapidly than the same number of "neurons" spread out across a 2D layout.
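
      A back-of-envelope sketch of that 2D-vs-3D wiring argument (illustrative scaling only, not a model of any real chip): for n units packed in a regular grid, the longest path grows like n^(1/2) in two dimensions but only n^(1/3) in three.

          # Worst-case wire distance across a regular grid of n "neurons".
          n = 1_000_000_000           # a billion units

          diam_2d = n ** (1 / 2)      # ~31,600 grid steps corner to corner in 2D
          diam_3d = n ** (1 / 3)      # ~1,000 grid steps in 3D

          print(f"2D diameter ~ {diam_2d:,.0f} steps")
          print(f"3D diameter ~ {diam_3d:,.0f} steps")
          print(f"3D shortens the longest paths by ~{diam_2d / diam_3d:.0f}x")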

      Lessons learned from 3D NAND production, other chips using TSV, the Wafer Scale Engine approach, and projects like 3DSoC will help to make it possible.

      Rather than it taking quite some time, I think it could take as little as 5-10 years to see results. Except that it will probably be treated like a Manhattan Project by whichever entity figures it out first. There's more value in having in-house "strong AI" ahead of the rest of the planet than selling it, and we could see government restrictions on sharing the technology.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 4, Interesting) by NotSanguine on Sunday December 15 2019, @10:45PM

        by NotSanguine (285) <NotSanguineNO@SPAMSoylentNews.Org> on Sunday December 15 2019, @10:45PM (#932511) Homepage Journal

        Fair points.

        However, as I understand it, the issues holding back "strong AI" aren't with hardware, be that "neuronal" density or geometry. Rather they're with the learning/training methodologies.

        Consider a VW Beetle, rolled over and half-buried in a snowbank. A small child can identify it as a car. Current AI would likely identify it as something completely irrelevant, because current technologies *can't* deal with anything outside their experience. That is, they can't *generalize*.

        Much of what makes humans able to *understand* the world comes from the ability to take imperfect/partial information and generalize it based on conceptual understandings -- current AI has no mechanism for this.

        As such, it's not the complexity or density of "artificial brains" that holds us back. Rather it's the lack of tools/methodologies to help them learn. Until we have mechanisms/methodologies similar to those that allow children to learn (which are tightly tied to their physical forms -- another area where training non-corporeal sorts of "brains" is a problem), strong AI will continue to be a pipe dream.

        An excellent pipe dream, and one that should be vigorously pursued, but not likely to be realized until long after you and I are dead.

        --
        No, no, you're not thinking; you're just being logical. --Niels Bohr