
SoylentNews is people

posted by martyb on Sunday December 15 2019, @01:32PM   Printer-friendly
from the I'll-think-about-it dept.

A sobering message about the future at AI's biggest party

Blaise Aguera y Arcas praised the revolutionary technique known as deep learning, which has let teams like his get phones to recognize faces and voices. He also lamented the limitations of that technology, which involves designing software called artificial neural networks that improve at a specific task through experience or by seeing labeled examples of correct answers.
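The "learning from labeled examples" recipe described above can be sketched minimally. This is a hypothetical toy in pure Python (not anything from Aguera y Arcas's team): a one-weight model fit by gradient descent on labeled pairs. Real deep learning stacks many layers of such weights, but the training loop has this shape.

```python
# Minimal sketch of supervised learning: a one-weight model
# repeatedly compares its answer to the label and nudges the
# weight to shrink the error.

def train(examples, lr=0.05, epochs=200):
    """examples: list of (input, correct_answer) pairs."""
    w = 0.0                      # initial guess
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x         # model's current answer
            error = pred - y     # compare against the label
            w -= lr * error * x  # adjust weight to reduce the error
    return w

# Labeled data generated by the hidden rule y = 3x;
# training recovers w close to 3.
labeled = [(1, 3), (2, 6), (3, 9)]
w = train(labeled)
print(round(w, 2))  # prints 3.0
```

Note that the model only ever gets better at reproducing the labels it was shown, which is exactly the "passing a test" framing criticized below.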

"We're kind of like the dog who caught the car," Aguera y Arcas said. Deep learning has rapidly knocked down some longstanding challenges in AI—but doesn't immediately seem well suited to many that remain. Problems that involve reasoning or social intelligence, such as weighing up a potential hire in the way a human would, are still out of reach, he said. "All of the models that we have learned how to train are about passing a test or winning a game with a score [but] so many things that intelligences do aren't covered by that rubric at all," he said.


Original Submission

  • (Score: 4, Interesting) by NotSanguine on Sunday December 15 2019, @10:45PM

    Fair points.

    However, as I understand it, the issues holding back "strong AI" aren't with hardware, be that "neuronal" density or geometry. Rather, they're with the learning/training methodologies.

    Consider a VW Beetle, rolled over and half-buried in a snowbank. A small child can identify it as a car. Current AI would likely identify it as something completely irrelevant -- because current technologies *can't* deal with anything outside their experience. That is, they can't *generalize*.

    Much of what makes humans able to *understand* the world comes from the ability to take imperfect/partial information and generalize it based on conceptual understandings -- current AI has no mechanism for this.
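The forced-answer failure mode described above can be illustrated with a toy sketch (hypothetical classes and made-up numbers, pure Python): a nearest-centroid classifier must pick one of its known classes, however far the input lies from everything it was trained on. It has no way to say "this is outside my experience."

```python
import math

# Two class "centroids" learned from normal, right-side-up
# training data (hypothetical feature values).
CENTROIDS = {
    "car":  (1.0, 1.0),
    "tree": (8.0, 8.0),
}

def classify(point):
    # Always returns the nearest known class; there is no
    # "I don't know" answer available to the model.
    return min(CENTROIDS, key=lambda c: math.dist(CENTROIDS[c], point))

print(classify((1.2, 0.9)))     # a typical car: prints "car"
print(classify((80.0, -40.0)))  # wildly out-of-distribution input:
                                # still forced to pick a class
```

A half-buried Beetle is that second kind of input: nothing in the training distribution matches it, so the model confidently returns whatever happens to be nearest, which is how you get "something completely irrelevant."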

    As such, it's not the complexity or density of "artificial brains" that holds us back. Rather, it's the lack of tools/methodologies to help them learn. Until we have mechanisms/methodologies similar to those that allow children to learn (which are tightly tied to their physical forms -- another area where training non-corporeal sorts of "brains" is a problem), strong AI will continue to be a pipe dream.

    An excellent pipe dream, and one that should be vigorously pursued, but not likely to be realized until long after you and I are dead.

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr