
posted by Fnord666 on Tuesday August 15 2017, @10:22PM
from the building-a-better-self dept.

Arthur T Knackerbracket has found the following story:

How far can it go? Will machines ever learn so well on their own that external guidance becomes a quaint relic? In theory, you could imagine an ideal Universal Learner—one that can decide everything for itself, and always prefers the best pattern for the task at hand.

But in 1996, computer scientist David Wolpert proved that no such learner exists. In his famous "No Free Lunch" theorems, he showed that for every pattern a learner is good at learning, there's another pattern that same learner would be terrible at picking up. The reason brings us back to my aunt's puzzle—to the infinite patterns that can match any finite amount of data. Choosing a learning algorithm just means choosing which patterns a machine will be bad at. Maybe all tasks of, say, visual pattern recognition will eventually fall to a single all-encompassing algorithm. But no learning algorithm can be good at learning everything.
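To make the trade-off concrete, here is a minimal sketch (not from the article; the toy learners, targets, and numbers are made up): two candidate "truths" agree on the training data, so a learner biased toward either one is perfect if the world happens to match its bias and useless otherwise.

```python
# Toy illustration of the "no free lunch" trade-off (hypothetical example).
# Two target patterns agree on the training points but disagree elsewhere,
# so any learner that commits to one of them wins on one target and loses
# on the other.

def parity_biased_learner(train):
    """Hypothetical learner whose inductive bias is the parity pattern."""
    return lambda x: x % 2

def constant_biased_learner(train):
    """Hypothetical learner whose inductive bias is the all-zeros pattern."""
    return lambda x: 0

# Training data on which both candidate truths happen to agree.
train = [(0, 0), (2, 0), (4, 0)]

# Two possible "true" patterns, each consistent with the training data.
target_parity = lambda x: x % 2
target_constant = lambda x: 0

test_points = [1, 3, 5, 7]

for name, learner in [("parity-biased", parity_biased_learner),
                      ("constant-biased", constant_biased_learner)]:
    h = learner(train)
    acc_parity = sum(h(x) == target_parity(x) for x in test_points) / len(test_points)
    acc_constant = sum(h(x) == target_constant(x) for x in test_points) / len(test_points)
    print(f"{name}: accuracy if the truth is parity = {acc_parity:.0%}, "
          f"if the truth is constant = {acc_constant:.0%}")
```

Each toy learner scores 100% against one possible truth and 0% against the other: the bias that helps on one pattern is exactly what hurts on the other.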

This makes machine learning surprisingly akin to the human brain. As smart as we like to think we are, our brains don't learn perfectly, either. Each part of the brain has been delicately tuned by evolution to spot particular kinds of patterns, whether in what we see, in the language we hear, or in the way physical objects behave. But when it comes to finding patterns in the stock market, we're just not that good; the machines have us beat by far.

-- submitted from IRC


Original Submission

 
  • (Score: 2) by maxwell demon (1608) on Wednesday August 16 2017, @12:13PM (#554669) Journal (1 child)

    Algorithm Alpha is good at recognizing A but bad at recognizing B
    Algorithm Beta is good at recognizing B but bad at recognizing A
    Algorithm Delta combines Alpha for A and Beta for B and is now good at recognizing A and B

    How does algorithm Delta decide when to trust Alpha and when to trust Beta?

    Concrete example: Alpha is good at recognizing dogs. Beta is good at recognizing cats. Delta is presented with an image of an animal and gives that image to both Alpha and Beta. Alpha claims that the image shows a dog; Beta claims that the image shows a cat. Delta now has two conflicting answers. What should it do?

    • Claim it to be a cat. Then it will be worse at recognizing dogs than Alpha, as it will only recognize dogs that Beta doesn't misinterpret as cats. On the other hand, it won't have an edge over Beta in recognizing cats.
    • Claim it to be a dog. Then it will be worse at recognizing cats than Beta, for analogous reasons.
    • Say it cannot decide. Then it's worse than Alpha at recognizing dogs and worse than Beta at recognizing cats.

    Note that the more algorithms you combine, the more likely conflicts between their outputs become.
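    A rough sketch of the dilemma (the functions, labels, and confidence numbers below are made up for illustration): whatever fixed rule Delta uses to resolve a conflict, it ends up as one of the three options above, or it merely shifts the problem to trusting whichever sub-algorithm reports more confidence.

```python
# Hypothetical combiner "Delta" built from two single-purpose recognizers.
# Alpha and Beta are stand-ins that return (label, confidence); the scores
# are invented for the example.

def alpha(image):
    """Stand-in for the dog recognizer."""
    return ("dog", 0.8)

def beta(image):
    """Stand-in for the cat recognizer."""
    return ("cat", 0.7)

def delta(image, rule="confidence"):
    label_a, conf_a = alpha(image)
    label_b, conf_b = beta(image)
    if label_a == label_b:
        return label_a                       # no conflict, easy case
    if rule == "trust_alpha":
        return label_a                       # option "claim it to be a dog"
    if rule == "trust_beta":
        return label_b                       # option "claim it to be a cat"
    if rule == "confidence":
        return label_a if conf_a >= conf_b else label_b
    return "undecided"                       # option "say it cannot decide"

print(delta("some_image"))  # -> "dog" with these made-up confidence scores
```

    The confidence rule only helps if the reported scores are calibrated against each other, which is itself something that has to be learned; otherwise Delta is back to picking one of the three options.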

    --
    The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 0) by Anonymous Coward on Wednesday August 16 2017, @10:57PM (#555017)

    Use OCRA, the omnivore conflict resolution algorithm: as long as it's delicious, it doesn't matter.