Arthur T Knackerbracket has found the following story:
How far can it go? Will machines ever learn so well on their own that external guidance becomes a quaint relic? In theory, you could imagine an ideal Universal Learner—one that can decide everything for itself, and always prefers the best pattern for the task at hand.
But in 1996, computer scientist David Wolpert proved that no such learner exists. In his famous "No Free Lunch" theorems, he showed that for every pattern a learner is good at learning, there's another pattern that same learner would be terrible at picking up. The reason brings us back to my aunt's puzzle—to the infinite patterns that can match any finite amount of data. Choosing a learning algorithm just means choosing which patterns a machine will be bad at. Maybe all tasks of, say, visual pattern recognition will eventually fall to a single all-encompassing algorithm. But no learning algorithm can be good at learning everything.
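The "no learner is good at everything" claim can be illustrated with a toy sketch (my own illustrative example, not Wolpert's actual proof). Over a 4-element domain, enumerate all 16 possible binary labelings, train on the first 3 inputs, and score each learner only on the held-out 4th input. Averaged over all possible targets, a sensible learner (predict the majority training label) and a perverse one (predict the opposite) come out exactly equal:

```python
from itertools import product

# Toy No Free Lunch demo. Domain: 4 inputs; targets: all 16 binary
# labelings. Each learner sees the labels of inputs 0..2 and predicts
# the label of the unseen input 3.

def majority(train_labels):
    # Predict the most common training label for the unseen input.
    return 1 if sum(train_labels) * 2 > len(train_labels) else 0

def anti_majority(train_labels):
    # Deliberately predict the opposite of the majority learner.
    return 1 - majority(train_labels)

def avg_offtrain_accuracy(learner):
    # Average accuracy on the unseen point, over every possible target.
    targets = list(product([0, 1], repeat=4))  # all 16 labelings
    hits = sum(learner(t[:3]) == t[3] for t in targets)
    return hits / len(targets)

print(avg_offtrain_accuracy(majority))       # 0.5
print(avg_offtrain_accuracy(anti_majority))  # 0.5
```

For every training set, the unseen label is 0 in half the consistent targets and 1 in the other half, so any fixed prediction rule is right exactly half the time on average. Being good on some targets necessarily means being bad on others.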
This makes machine learning surprisingly akin to the human brain. As smart as we like to think we are, our brains don't learn perfectly, either. Each part of the brain has been delicately tuned by evolution to spot particular kinds of patterns, whether in what we see, in the language we hear, or in the way physical objects behave. But when it comes to finding patterns in the stock market, we're just not that good; the machines have us beat by far.
-- submitted from IRC
(Score: 1) by crafoo on Wednesday August 16 2017, @04:35PM
Suppose the task is fitting a curve to a set of 3 points.
The merit function for deciding the best curve type would be modeled around the cascading effects of the curve type used. How many people die per year under the linear-fit model versus the polynomial fit? That is how algorithms would decide which curve type is "interesting". They could also simulate the relative performance (deaths/year) of each fit type. The key is for us humans to pick the statistical outcomes we would like and to give the machines the tools to perform closed-loop merit analysis of their results.
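The 3-point setup above can be made concrete with a minimal sketch (my own hypothetical points, chosen so a parabola fits them exactly): both a least-squares line and the unique parabola through the points describe the training data reasonably well, yet they disagree wildly once you extrapolate, which is exactly why the choice of curve family, not the data alone, carries the downstream consequences:

```python
# Two curve families fit the same 3 points yet extrapolate very
# differently: the model choice, not the data, decides what x = 10 means.
xs = [0.0, 1.0, 2.0]
ys = [0.0, 1.0, 4.0]   # consistent with y = x^2

def linear_fit(xs, ys):
    # Ordinary least-squares line through the points.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    intercept = my - slope * mx
    return lambda x: slope * x + intercept

def quadratic_fit(xs, ys):
    # Lagrange interpolation: the unique parabola through the 3 points.
    def f(x):
        total = 0.0
        for i in range(3):
            term = ys[i]
            for j in range(3):
                if j != i:
                    term *= (x - xs[j]) / (xs[i] - xs[j])
            total += term
        return total
    return f

line, parab = linear_fit(xs, ys), quadratic_fit(xs, ys)
print(line(10.0))   # ~19.67
print(parab(10.0))  # 100.0
```

Both models look plausible on the training interval; at x = 10 they differ by a factor of five. Any closed-loop merit analysis of fit types would be scoring exactly this kind of extrapolation gap.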