A recent XKCD ( https://xkcd.com/1897/ ) convinced me to read the Tech Review article on Geoffrey Hinton, Is AI Riding a One-Trick Pony? After a short intro setting the stage at the Vector Institute in Toronto, it gives a bit of history:
In the 1980s Hinton was, as he is now, an expert on neural networks, a much-simplified model of the network of neurons and synapses in our brains. However, at that time it had been firmly decided that neural networks were a dead end in AI research. Although the earliest neural net, the Perceptron, which began to be developed in the 1950s, had been hailed as a first step toward human-level machine intelligence, a 1969 book by MIT's Marvin Minsky and Seymour Papert, called Perceptrons, proved mathematically that such networks could perform only the most basic functions. These networks had just two layers of neurons, an input layer and an output layer. Nets with more layers between the input and output neurons could in theory solve a great variety of problems, but nobody knew how to train them, and so in practice they were useless. Except for a few holdouts like Hinton, Perceptrons caused most people to give up on neural nets entirely.
Hinton's breakthrough, in 1986, was to show that backpropagation could train a deep neural net, meaning one with more than two or three layers. But it took another 26 years before increasing computational power made good on the discovery. A 2012 paper by Hinton and two of his Toronto students showed that deep neural nets, trained using backpropagation, beat state-of-the-art systems in image recognition. "Deep learning" took off. To the outside world, AI seemed to wake up overnight. For Hinton, it was a payoff long overdue.
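To make the mechanics concrete, here is a minimal sketch (mine, not from the article) of backpropagation training a tiny multi-layer net on XOR, the sort of function a single-layer perceptron provably cannot learn. The layer sizes, learning rate, and iteration count are arbitrary illustrative choices.

    # Minimal backprop sketch: a 2-8-1 sigmoid network learning XOR.
    # All hyperparameters are illustrative; this is not the 1986 setup.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # input -> hidden
    W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden -> output

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for step in range(20000):
        # forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)

        # backward pass: the chain rule pushes the output error back through each layer
        d_out = (out - y) * out * (1 - out)        # squared-error loss at the output
        d_h = (d_out @ W2.T) * h * (1 - h)         # how much each hidden unit contributed

        # gradient-descent updates
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

    print(np.round(out, 2).ravel())   # should approach [0, 1, 1, 0]

The backward pass is the whole trick: the chain rule tells every weight, in every layer, how much it contributed to the output error, which is the part the article says nobody knew how to do for multi-layer nets before.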
While huge investments are currently flowing into applications, the resulting systems are still far from human thought. For example,
Deep learning in some ways mimics what goes on in the human brain, but only in a shallow way—which perhaps explains why its intelligence can sometimes seem so shallow. Indeed, backprop wasn't discovered by probing deep into the brain, decoding thought itself; it grew out of models of how animals learn by trial and error in old classical-conditioning experiments. And most of the big leaps that came about as it developed didn't involve some new insight about neuroscience; they were technical improvements, reached by years of mathematics and engineering. What we know about intelligence is nothing against the vastness of what we still don't know.
David Duvenaud, an assistant professor in the same department as Hinton at the University of Toronto, says deep learning has been somewhat like engineering before physics. "Someone writes a paper and says, 'I made this bridge and it stood up!' Another guy has a paper: 'I made this bridge and it fell down—but then I added pillars, and then it stayed up.' Then pillars are a hot new thing. Someone comes up with arches, and it's like, 'Arches are great!'" With physics, he says, "you can actually understand what's going to work and why." Only recently, he says, have we begun to move into that phase of actual understanding with artificial intelligence.
Using the example of a two-year-old human who can recognize a hot dog after seeing just a few of them, and comparing that with the vast number of hot dog pictures required to train a deep learning system, the author goes in search of the next big thing...
(Score: 2) by TheLink on Wednesday October 04 2017, @05:38PM (2 children)
Yeah, to me the AI field still seems stuck in the "Alchemy" stage. The practitioners can achieve useful stuff, and often it works well enough, but nobody really knows why it works or, more importantly, how it would fail (there's some work and progress on the latter with "illusions" designed for neural networks).
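For the curious, here is a rough sketch of how such an "illusion" (usually called an adversarial example) can be built: nudge every input feature a tiny amount in the direction that most increases the model's error. The data, model size, and numbers below are all made up for illustration; this is not any real system's code.

    # Rough sketch of an "illusion" (adversarial example) for a toy classifier.
    # All data, sizes, and numbers are made up; this is not any real system.
    import numpy as np

    rng = np.random.default_rng(0)
    d = 200                                        # many features, like pixels

    # made-up data: class 0 centred at -0.15 per feature, class 1 at +0.15
    X = np.vstack([rng.normal(-0.15, 1.0, size=(200, d)),
                   rng.normal(+0.15, 1.0, size=(200, d))])
    y = np.array([0] * 200 + [1] * 200)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    # train a plain logistic-regression "one-layer net" by gradient descent
    w, b = np.zeros(d), 0.0
    for _ in range(500):
        p = sigmoid(X @ w + b)
        w -= 0.5 * X.T @ (p - y) / len(y)
        b -= 0.5 * np.mean(p - y)

    # take a class-0 example the model handles correctly, then nudge every
    # feature a little in the direction that most increases the model's error
    x = X[0]
    grad_x = (sigmoid(x @ w + b) - 0) * w          # gradient of the loss w.r.t. the input
    x_adv = x + 0.3 * np.sign(grad_x)              # 0.3 is small next to the feature noise (std 1.0)

    print("clean input :", sigmoid(x @ w + b))     # should be near 0 (correct class)
    print("nudged input:", sigmoid(x_adv @ w + b)) # should be pushed far toward class 1

With many features, lots of tiny per-feature nudges add up to a large change in the model's output, even though to a human the input looks basically unchanged.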
Speaking of failure and illusions: you can see how intelligent something is by the mistakes it makes, and when it makes them. The types of mistakes AIs make are proof that they do not actually understand - they are just guessing using statistics based on thousands or even millions of samples. Go look at the mistakes IBM's Watson makes, then compare them with the cognitive mistakes a dog makes. From the mistakes you can tell the dog has a more limited understanding, but what it does understand it actually knows. It has a working model of the world and of other entities.
My guess is Strong AI would also have to model and simulate the outside world and itself (and its choices), predict possible outcomes, and attempt to pick a desirable one. Running very many simulations is energy intensive (maybe even prohibitive) for conventional computers, but may not be so for quantum computers. The accuracy of the result may be lower, but that may not matter much for most real-world scenarios.
Consciousness might be what happens when such a quantum simulator recursively predicts itself...
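A crude, purely classical sketch of that simulate-and-choose loop (no quantum anything, and every name and number below is invented for illustration): keep an internal model of the world, roll it forward many times for each candidate choice, and pick the choice whose simulated futures look best.

    # Crude sketch of model-based choice: simulate many possible futures per
    # action inside an internal "world model" and pick the most desirable one.
    # The 1-D foraging world and all numbers are made up for illustration.
    import random

    def simulate(position, action, steps=10):
        """Roll the internal world model forward once and score the outcome."""
        food_at = 10                                  # where the model believes food is
        for _ in range(steps):
            position += action + random.choice([-1, 0, 1])   # chosen move plus noisy world
        return -abs(position - food_at)               # ending near the food is desirable

    def choose_action(position, candidates=(-1, 0, 1), rollouts=200):
        """Pick the action whose average simulated outcome is best."""
        best_action, best_score = None, float("-inf")
        for action in candidates:
            score = sum(simulate(position, action) for _ in range(rollouts)) / rollouts
            if score > best_score:
                best_action, best_score = action, score
        return best_action

    random.seed(0)
    print(choose_action(position=0))   # should print 1: head toward the expected food

The brute-force rollout loop is also where the energy cost mentioned above comes from: the number of simulations grows quickly with how far ahead and how finely you want to predict.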
The pattern recognition and neural network stuff will still be important, but more as a building block or interface (and maybe for trained reflexes/"muscle memory"). Don't forget - crows are quite smart but their brains are walnut-sized. So the bulk of a brain might not really be for thinking and intelligence. I believe that many single-celled creatures can actually think ( https://soylentnews.org/comments.pl?sid=450&cid=11384#commentwrap [soylentnews.org] ) and that brains first evolved not to solve the thinking problem but other problems - like the interfacing problem: how to use and control multi-cellular bodies. So perhaps more scientists should investigate how these single-celled creatures actually think.
p.s. I don't know much about AI, but my bullshit about AI and related stuff doesn't seem that much worse than the bullshit from those selling it ;)
(Score: 0) by Anonymous Coward on Wednesday October 04 2017, @09:00PM (1 child)
Hmm... plants don't have brains, and their responses are the result of feedback loops based on internal and external signals. There is little thinking going on here, just programming with feedback loops.
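For contrast with the simulation idea above, a tiny sketch of what "just programming with feedback loops" can look like: the response is driven directly by the current signal, with no world model and no lookahead. The stomata example and every number in it are invented for illustration.

    # Tiny feedback-loop sketch: a plant-like rule that adjusts stomata opening
    # from an internal water signal.  No model, no prediction, just feedback.
    # All numbers are made up for illustration.
    def stomata_step(water_level, opening, gain=0.1, target=0.6):
        """One feedback step: open more when water is above target, close when below."""
        error = water_level - target
        return min(1.0, max(0.0, opening + gain * error))

    water, opening = 0.9, 0.5
    for hour in range(12):
        opening = stomata_step(water, opening)
        water = max(0.0, water - 0.05 * opening)      # open stomata lose water to the air
        print(f"hour {hour:2d}: water {water:.2f}, opening {opening:.2f}")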
(Score: 0) by Anonymous Coward on Thursday October 05 2017, @06:17PM