Arthur T Knackerbracket has found the following story:
How far can it go? Will machines ever learn so well on their own that external guidance becomes a quaint relic? In theory, you could imagine an ideal Universal Learner—one that can decide everything for itself, and always prefers the best pattern for the task at hand.
But in 1996, computer scientist David Wolpert proved that no such learner exists. In his famous "No Free Lunch" theorems, he showed that for every pattern a learner is good at learning, there's another pattern that same learner would be terrible at picking up. The reason brings us back to my aunt's puzzle—to the infinite patterns that can match any finite amount of data. Choosing a learning algorithm just means choosing which patterns a machine will be bad at. Maybe all tasks of, say, visual pattern recognition will eventually fall to a single all-encompassing algorithm. But no learning algorithm can be good at learning everything.
This makes machine learning surprisingly akin to the human brain. As smart as we like to think we are, our brains don't learn perfectly, either. Each part of the brain has been delicately tuned by evolution to spot particular kinds of patterns, whether in what we see, in the language we hear, or in the way physical objects behave. But when it comes to finding patterns in the stock market, we're just not that good; the machines have us beat by far.
-- submitted from IRC
(Score: 2) by maxwell demon on Wednesday August 16 2017, @08:09AM (2 children)
That depends. What is the task that you are trying to solve? If the task is to fit a straight line, then fitting anything other than a straight line is not a good idea. If the task is to find a good model for the data, it is indeed a good idea to try to fit some other functions.
You usually want to minimize both the error and the number of parameters. But that is something you learned. It's not that you were presented with a cloud of data and the task "fit an arbitrary function to that" and your first thought was "well, it's probably a good idea to use few parameters". OK, you might have thought "well, let's first try something simple, as it takes less effort." But that's again a universal optimization criterion that may be built into a general AI: Prefer solutions that require less effort.
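The trade-off described above can be sketched concretely: fit polynomials of increasing degree to noisy linear data, and score each fit by its squared error plus a penalty per parameter (an AIC-like criterion). The data, the candidate degrees, and the penalty weight of 0.5 are all illustrative choices, not anything prescribed by the discussion.

```python
# Minimal sketch: error-plus-complexity model selection on toy data.
import random

random.seed(0)
xs = [i / 10 for i in range(30)]
# Noisy straight line: the "true" model has only two parameters.
ys = [2.0 * x + 1.0 + random.gauss(0, 0.3) for x in xs]

def fit_poly(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations."""
    n = degree + 1
    # Build A c = b for the coefficient vector c.
    A = [[sum(x ** (i + j) for x in xs) for j in range(n)] for i in range(n)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        b[col], b[pivot] = b[pivot], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * n
    for i in reversed(range(n)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, n))) / A[i][i]
    return coeffs

def score(xs, ys, coeffs, penalty=0.5):
    """Sum of squared residuals plus a per-parameter penalty (lower is better)."""
    sse = sum((y - sum(c * x ** i for i, c in enumerate(coeffs))) ** 2
              for x, y in zip(xs, ys))
    return sse + penalty * len(coeffs)

# Without the penalty, higher degrees always win on raw error; with it,
# a low-degree fit is preferred.
best = min(range(1, 8), key=lambda d: score(xs, ys, fit_poly(xs, ys, d)))
print("chosen degree:", best)
```

Dropping `penalty` to 0 makes the highest-degree polynomial win on raw error every time, which is exactly the overfitting the comment is warning about.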
The Tao of math: The numbers you can count are not the real numbers.
(Score: 2) by PiMuNu on Wednesday August 16 2017, @09:01AM (1 child)
> You usually want to minimize both the error and the number of parameters.
Right. So the proposal by the GP to just plug in an infinite number of algorithms to this problem doesn't really work. You still need to make a score function, and all the GP has done is make the score function more complicated. We now need to add some meta-score to qualify the "goodness of the fitting function". Statements like
> and teach [the AI] how to apply each algorithm to any given type of question or problem
> [the AI] decides that three different answers are interesting
completely ignore the challenge of figuring out what the score function should be, which is the whole challenge of these sorts of minimisation/optimisation routines.
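The point above can be made concrete with a toy example: even once you have candidate models in hand, which one is "best" depends entirely on the score function, and the weight that balances error against complexity is itself a free choice. The candidate models, their error values, and the weights below are hypothetical numbers chosen purely for illustration.

```python
# Toy illustration: the winner changes as the score function changes.
candidates = {
    "line (2 params)":      {"error": 2.8, "params": 2},
    "cubic (4 params)":     {"error": 2.1, "params": 4},
    "degree-9 (10 params)": {"error": 0.4, "params": 10},
}

def score(model, weight):
    """Lower is better: fit error plus `weight` per parameter."""
    return model["error"] + weight * model["params"]

# Three different penalty weights produce three different "best" models.
for weight in (0.0, 0.3, 1.0):
    best = min(candidates, key=lambda name: score(candidates[name], weight))
    print(f"penalty weight {weight}: best model = {best}")
```

Nothing in the data tells you which `weight` is right; deciding that is the hard part the quoted statements gloss over.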
(Score: 2) by Runaway1956 on Wednesday August 16 2017, @01:42PM
When you begin researching something, you generally don't KNOW what answers you're looking for, or even what KIND of answers. Neither will the computer, if it's functioning anything like a human mind. It's searching, and finding potential answers. So, it searches and searches, finds two, six, or twenty nice solutions, and confers with its associates - much like we do. What's wrong with all of that?