
posted by martyb on Sunday December 15 2019, @01:32PM
from the I'll-think-about-it dept.

A sobering message about the future at AI's biggest party

Blaise Aguera y Arcas praised the revolutionary technique known as deep learning, which has let teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that can improve at a specific task through experience, or by seeing labeled examples of correct answers.
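As a minimal sketch of that training loop (in Python with PyTorch; the toy data and tiny network below are illustrative assumptions, not anything from Aguera y Arcas's team): a network improves by repeatedly comparing its outputs to labeled correct answers and adjusting its weights to shrink the error.

    import torch
    import torch.nn as nn

    # Toy "labeled examples" (made up for illustration): 100 random
    # 8-dimensional inputs, each labeled 1 if its components sum to a
    # positive number, else 0.
    X = torch.randn(100, 8)
    y = (X.sum(dim=1) > 0).float().unsqueeze(1)

    # A small artificial neural network: two layers of learned weights.
    model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    loss_fn = nn.BCEWithLogitsLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

    for epoch in range(200):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)  # how far from the correct answers?
        loss.backward()              # compute gradients of the error
        optimizer.step()             # nudge weights to reduce the error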

"We're kind of like the dog who caught the car," Aguera y Arcas said. Deep learning has rapidly knocked down some longstanding challenges in AI—but doesn't immediately seem well suited to many that remain. Problems that involve reasoning or social intelligence, such as weighing up a potential hire in the way a human would, are still out of reach, he said. "All of the models that we have learned how to train are about passing a test or winning a game with a score [but] so many things that intelligences do aren't covered by that rubric at all," he said.


Original Submission

 
  • (Score: 2) by NotSanguine on Sunday December 15 2019, @09:25PM (2 children)

    Meanwhile, 3D neuromorphic chips and other designs will be researched to replace "machine learning" with "strong AI" for applications where dumb inference doesn't make the cut.

    "Strong AI" is not currently a thing, nor will it be for quite some time.

    Gary Marcus (co-author of Rebooting AI: Building Artificial Intelligence We Can Trust [goodreads.com]) discusses this at some length in a talk [c-span.org] he gave this past September.

    Image/pattern recognition systems have improved markedly. However, even those have significant weaknesses. As for AI that can actually *reason*, that requires the ability to *understand*, rather than just correlate and predict. And that sort of capability, barring significant breakthroughs, isn't just far off WRT current "AI," but probably unattainable with current machine learning techniques.

    Presumably, we'll work that through eventually, but not any time soon.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 3, Interesting) by takyon on Sunday December 15 2019, @10:31PM (1 child)


    Not quite. I'm not suggesting that current machine learning techniques will lead to "strong AI". Instead, I think we'll see some kind of neuromorphic design do it: hardware built to be brain-like, with ultra-low power consumption, and possibly with a small amount of memory distributed to each of millions or billions of "neurons".

    We won't need to understand exactly how the brain works to make it work; we can just tinker with it until we see results. Maybe this is where machine learning will come in.

    Examples of this approach include IBM's TrueNorth and Intel's Loihi [wikipedia.org].
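    The basic hardware unit on chips like these is a spiking neuron. Here's a toy software sketch of a generic leaky integrate-and-fire neuron (the chips' actual programming models and dynamics differ; the parameters are made up for illustration):

        import numpy as np

        def simulate_lif(input_current, threshold=1.0, leak=0.95):
            """Leaky integrate-and-fire: accumulate input, leak charge,
            spike and reset when the potential crosses the threshold."""
            v = 0.0
            spike_times = []
            for t, i in enumerate(input_current):
                v = leak * v + i           # integrate input, with leakage
                if v >= threshold:
                    spike_times.append(t)  # emit a spike...
                    v = 0.0                # ...and reset the membrane
            return spike_times

        rng = np.random.default_rng(0)
        print(simulate_lif(rng.uniform(0.0, 0.2, size=100)))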

    What's next is to scale it up into a true 3D design, like the human brain. Not only could that pack billions of "neurons" into a brain-like volume (liters), but it would let dense 3D clusters communicate more rapidly than the same number of "neurons" spread out across a 2D layout.
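    As a back-of-envelope check on the 2D-vs-3D argument (toy numbers, not a real chip floorplan): arrange N units on a square versus a cube, and the edge length -- which bounds worst-case communication distance -- shrinks from ~N^(1/2) to ~N^(1/3).

        # Toy arithmetic, not a chip design: edge length of a square vs.
        # a cube holding the same number of units.
        N = 10_000_000_000  # ~10 billion "neurons", an assumed figure
        print(f"2D edge: {N ** 0.5:,.0f} units")    # ~100,000
        print(f"3D edge: {N ** (1/3):,.0f} units")  # ~2,154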

    Lessons learned from 3D NAND production, other chips using through-silicon vias (TSVs), the Wafer Scale Engine approach, and projects like 3DSoC will help make it possible.

    Rather than taking quite some time, I think it could take as little as 5-10 years to see results. But it will probably be treated like a Manhattan Project by whichever entity figures it out first: there's more value in having in-house "strong AI" ahead of the rest of the planet than in selling it, and we could see government restrictions on sharing the technology.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 4, Interesting) by NotSanguine on Sunday December 15 2019, @10:45PM

      Fair points.

      However, as I understand it, the issues holding back "strong AI" aren't with hardware, be that "neuronal" density or geometry. Rather they're with the learning/training methodologies.

      Consider a VW Beetle, rolled over and half-buried in a snowbank. A small child can identify it as a car. Current AI would likely identify it as something completely irrelevant -- because current technologies *can't* deal with anything outside their experience. That is, they can't *generalize*.
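      You can reproduce a crude analogue of this in a few lines: train a simple classifier on upright handwritten digits (scikit-learn's bundled digits set), then show it the same test digits rotated 90 degrees -- a toy stand-in for the overturned Beetle. I'd expect rotated accuracy near chance, though exact numbers will vary:

          import numpy as np
          from sklearn.datasets import load_digits
          from sklearn.linear_model import LogisticRegression
          from sklearn.model_selection import train_test_split

          digits = load_digits()
          train_imgs, test_imgs, y_train, y_test = train_test_split(
              digits.images, digits.target, random_state=0)

          # Train on upright digits only.
          clf = LogisticRegression(max_iter=5000)
          clf.fit(train_imgs.reshape(len(train_imgs), -1), y_train)

          upright = test_imgs.reshape(len(test_imgs), -1)
          rotated = np.rot90(test_imgs, axes=(1, 2)).reshape(len(test_imgs), -1)
          print("upright accuracy:", clf.score(upright, y_test))  # high
          print("rotated accuracy:", clf.score(rotated, y_test))  # near chance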

      Much of what makes humans able to *understand* the world comes from the ability to take imperfect/partial information and generalize it based on conceptual understandings -- current AI has no mechanism for this.

      As such, it's not the complexity or density of "artificial brains" that holds us back. Rather it's the lack of tools/methodologies to help them learn. Until we have mechanisms/methodologies similar to those that allow children to learn (which are tightly tied to their physical forms -- another area where training non-corporeal sorts of "brains" is a problem), strong AI will continue to be a pipe dream.

      An excellent pipe dream, and one that should be vigorously pursued, but not likely to be realized until long after you and I are dead.

      --
      No, no, you're not thinking; you're just being logical. --Niels Bohr