
SoylentNews is people

posted by Fnord666 on Tuesday August 15 2017, @10:22PM   Printer-friendly
from the building-a-better-self dept.

Arthur T Knackerbracket has found the following story:

How far can it go? Will machines ever learn so well on their own that external guidance becomes a quaint relic? In theory, you could imagine an ideal Universal Learner—one that can decide everything for itself, and always prefers the best pattern for the task at hand.

But in 1996, computer scientist David Wolpert proved that no such learner exists. In his famous "No Free Lunch" theorems, he showed that for every pattern a learner is good at learning, there's another pattern that same learner would be terrible at picking up. The reason brings us back to my aunt's puzzle—to the infinite patterns that can match any finite amount of data. Choosing a learning algorithm just means choosing which patterns a machine will be bad at. Maybe all tasks of, say, visual pattern recognition will eventually fall to a single all-encompassing algorithm. But no learning algorithm can be good at learning everything.
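The "No Free Lunch" result can be made concrete with a small sketch. The snippet below (an illustrative toy, not Wolpert's formal setup; the learner names and the 3-bit domain are my own choices) trains two very different learners on half of a tiny input space, then averages their accuracy on the *unseen* inputs over every possible target function. Both come out at exactly 50%: averaged over all patterns, no learner beats blind guessing.

```python
import itertools

# Domain: all 3-bit inputs. Train on the first 4, test off-training-set on the last 4.
inputs = list(itertools.product([0, 1], repeat=3))
train_x, test_x = inputs[:4], inputs[4:]

def learner_zero(train):
    # Ignores the data entirely and always predicts 0.
    return lambda x: 0

def learner_majority(train):
    # Predicts whichever label was most common in the training data.
    ones = sum(label for _, label in train)
    majority = 1 if 2 * ones >= len(train) else 0
    return lambda x: majority

def avg_offtrain_accuracy(learner):
    # Average accuracy on the unseen inputs over ALL 2^8 possible target functions.
    total, count = 0.0, 0
    for labels in itertools.product([0, 1], repeat=len(inputs)):
        target = dict(zip(inputs, labels))
        hypothesis = learner([(x, target[x]) for x in train_x])
        hits = sum(hypothesis(x) == target[x] for x in test_x)
        total += hits / len(test_x)
        count += 1
    return total / count

print(avg_offtrain_accuracy(learner_zero))      # 0.5
print(avg_offtrain_accuracy(learner_majority))  # 0.5
```

For any pattern where the majority learner wins, there is a mirror-image target function where it loses, and the two cancel out; picking a learner just picks *which* targets it fails on.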

This makes machine learning surprisingly akin to the human brain. As smart as we like to think we are, our brains don't learn perfectly, either. Each part of the brain has been delicately tuned by evolution to spot particular kinds of patterns, whether in what we see, in the language we hear, or in the way physical objects behave. But when it comes to finding patterns in the stock market, we're just not that good; the machines have us beat by far.

-- submitted from IRC


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by Wootery on Wednesday August 16 2017, @12:21PM (5 children)

    by Wootery (2341) on Wednesday August 16 2017, @12:21PM (#554676)

    how do you distinguish between the AI solving a problem that's beyond your grasp to understand, and the AI telling you good-sounding nonsense

    I already said: you find a problem that humans aren't able to solve (but are able to understand and express). Either the AI can solve the problem, or it can't.

  • (Score: 2) by maxwell demon on Wednesday August 16 2017, @01:09PM (4 children)

    by maxwell demon (1608) on Wednesday August 16 2017, @01:09PM (#554692) Journal

    If humans can understand it, and it is solvable at all (an AI certainly won't be able to solve unsolvable problems), then you cannot exclude the possibility that, at least in principle, humans might have been able to solve it, even if they haven't solved it yet.

    --
    The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by Wootery on Wednesday August 16 2017, @02:43PM (2 children)

      by Wootery (2341) on Wednesday August 16 2017, @02:43PM (#554728)

      You're backpedalling. You've set up a game that you cannot possibly lose.

      There's no reason to say humans can solve any problem 'in principle' (what principle?) but that AI can't.

      An AI which guesses at solutions will eventually solve any given problem, right? Of course, this simply isn't what we mean when we speak of solving problems. We mean solving them in a usefully brief time window.

      Also, remember that a brain simulator would count as an AI.

      • (Score: 2) by maxwell demon on Wednesday August 16 2017, @03:36PM (1 child)

        by maxwell demon (1608) on Wednesday August 16 2017, @03:36PM (#554752) Journal

        You're backpedalling.

        No, I'm not.

        There's no reason to say humans can solve any problem 'in principle' (what principle?) but that AI can't.

        That's not what I claimed. Maybe you should read what I actually wrote. I meant exactly what I wrote, not less and not more.

        --
        The Tao of math: The numbers you can count are not the real numbers.
        • (Score: 2) by Wootery on Wednesday August 16 2017, @04:20PM

          by Wootery (2341) on Wednesday August 16 2017, @04:20PM (#554764)

          Fine, let's try another take.

    • (Score: 2) by Wootery on Wednesday August 16 2017, @04:22PM

      by Wootery (2341) on Wednesday August 16 2017, @04:22PM (#554766)

      If the AI can solve it in minutes, and humans can only solve it across tens of millennia, I think we have to give the point to the AI. At some point, it's not meaningful to say "well, humans could have solved it if only they'd been given even more time."

      We see a less interesting variation of this already with computer-assisted proofs, which use computers for the tedious, laborious number-crunching, but where the creative steps are still done by humans.

      Additionally, in principle, AI includes brain simulators, whereas the reverse isn't true. 'In the limit' then we'd expect AI to be strictly better at problem solving than human brains.