Submitted via IRC for SoyCow1337
How neural networks work—and why they've become a big business
The last decade has seen remarkable improvements in the ability of computers to understand the world around them. Photo software automatically recognizes people's faces. Smartphones transcribe spoken words into text. Self-driving cars recognize objects on the road and avoid hitting them.
Underlying these breakthroughs is an artificial intelligence technique called deep learning. Deep learning is based on neural networks, a type of data structure loosely inspired by networks of biological neurons. Neural networks are organized in layers, with outputs from one layer feeding into inputs of the next layer.
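To make the layered structure concrete, here is a toy sketch (not from the article) of a two-layer network in plain Python. The weights are hand-picked rather than trained, purely for illustration; they happen to compute XOR, a function a single layer famously cannot represent.

```python
import math

def dense(inputs, weights, biases):
    """One fully connected layer: each output is a weighted sum of all
    inputs plus a bias, squashed through a sigmoid activation."""
    return [
        1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(row, inputs)) + b)))
        for row, b in zip(weights, biases)
    ]

# A toy 2-input -> 2-hidden -> 1-output network with hand-picked weights.
w_hidden = [[20.0, 20.0], [-20.0, -20.0]]   # hidden unit 1 ~ OR, unit 2 ~ NAND
b_hidden = [-10.0, 30.0]
w_out = [[20.0, 20.0]]                       # output ~ AND of the two hidden units
b_out = [-30.0]

def forward(x):
    """Feed the input through both layers and return the single output."""
    return dense(dense(x, w_hidden, b_hidden), w_out, b_out)[0]

# OR AND NAND = XOR, so the two-layer net computes XOR.
for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, round(forward([a, b])))
```

In a real network the weights would be learned from data by backpropagation; this sketch only shows how stacking layers lets the network represent functions a single layer cannot.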
Computer scientists have been experimenting with neural networks since the 1950s. But two big breakthroughs—one in 1986, the other in 2012—laid the foundation for today's vast deep learning industry. The 2012 breakthrough—the deep learning revolution—was the discovery that we can get dramatically better performance out of neural networks not with just a few layers but with many. That discovery was made possible thanks to the growing amount of both data and computing power that had become available by 2012.
This feature offers a primer on neural networks. We'll explain what neural networks are, how they work, and where they came from. And we'll explore why—despite many decades of previous research—neural networks have only really come into their own since 2012.
(Score: 3, Insightful) by SomeGuy on Saturday December 07 2019, @01:24PM (4 children)
It still boils down to throwing shit at a wall and seeing what sticks.
While that might be good enough for telling jokes over smart speakers, promising cake, advertising, septic tank cleaning, and web development, I would not want that doing anything critical.
(Score: 1, Insightful) by Anonymous Coward on Saturday December 07 2019, @02:54PM (2 children)
I feel like what’s called AI or a Neural Network today kind of boils down to fancy statistics. Like throwing shit at the wall using a method that’s statistically likely to stick in the pattern you were looking for. If it were an actual simulation of networks of nodes that truly behaved like neurons, then we’d get something simulating true intelligence and thought. Those might kill all the humans, but would be less useful for businesses.
(Score: 0) by Anonymous Coward on Saturday December 07 2019, @05:23PM (1 child)
Even better: Throw shit at a wall and then train the NN to fit the resulting patterns until you can write the paper.
(Score: 0) by Anonymous Coward on Sunday December 08 2019, @05:29AM
i feel the same way.
NNs aren't really a "solved formula" or definitive algorithm, so it feels like cheating.
but by just considering a NN a complete machine, like a car engine or a tape recorder, my perception moves it from the strict logical and mathematical realm into the "device" realm.
so who is to say that what a NN sees is wrong? if we look at the raw NN data it looks nothing like what we see, but still the NN "guesses" correctly.
one "proof" that our (human) visual processing is flawed too can be seen (*sigh*) in so-called optical illusions.
maybe today's NNs are just a tool, like a microscope or telescope, to explore our (probably) rigid, trained way of seeing things.
i think using a well-trained NN and looking at the raw data (or what the NN sees) will show us how rigid our own view of the world is ... how we are hardwired and how we have been trained by our environment (mostly society?) to ... see things.
lastly: there might be actual buttons (visual cues) hardwired into our brains, and if so, the first to discover them might well be able to "press" these buttons to bypass conscious/aware decisions altogether (think: advertisement or ... politics?).
the positive side might be that NNs could help lift the "veil of accumulated garbage aesthetics" from our eyes, and we learn to "look correctly" and pay attention to things that really matter, instead of things that merely ought to matter?
(Score: 3, Informative) by Anonymous Coward on Saturday December 07 2019, @03:21PM
> throwing shit at a wall and seeing what sticks
This.
TFA mentions, "Computer scientists have been experimenting with neural networks since the 1950s." For anyone interested, one early instance was called the "Perceptron". From https://en.wikipedia.org/wiki/Perceptron [wikipedia.org] :
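For the curious, the perceptron with Rosenblatt's learning rule is simple enough to sketch in a few lines (an illustrative toy, not taken from the linked article):

```python
def train_perceptron(samples, epochs=10, lr=0.1):
    """Rosenblatt's rule: nudge weights toward each misclassified example.
    Guaranteed to converge when the data is linearly separable."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - pred           # 0 if correct, +/-1 if wrong
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Linearly separable data: logical AND.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
predict = lambda x: 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
print([predict(x) for x, _ in data])
```

The famous limitation (Minsky and Papert, 1969) is that a single perceptron can only learn linearly separable functions like AND or OR, not XOR, which is part of why multi-layer networks mattered.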
(Score: 2, Insightful) by Anonymous Coward on Saturday December 07 2019, @04:02PM (7 children)
>> The last decade has seen remarkable improvements
>> in the ability of computers to understand the world
>> around them.
The state of the art has gone from "zero understanding at all" to "zero understanding at all with improved pattern matching." Understanding would imply, well, understanding. That is still vaporware. So no.
>> Photo software automatically recognizes
>> people's faces.
Without understanding them, without understanding what a face is.
>> Smartphones transcribe spoken words into text.
Without an understanding of language and its underlying parts.
>> Self-driving cars recognize objects on the road
>> and avoid hitting them.
Without self-awareness, without any theoretical thoughts, without... understanding.
Pigeons can be taught to recognize words [livescience.com], but the body of pigeon literature is still notably nonextant because understanding of those words would be required. There isn't any. A similar thing is happening with computer algorithms.
Look, it's okay to say "Pattern matching is getting really good" without delusional statements like "Computers can now understand things much better," implying that they understand at all, which they don't.
I would, however, love to read a responsible opposing view.
(Score: 0) by Anonymous Coward on Saturday December 07 2019, @05:27PM
And the recognition quality still isn't great, either. If pictures of gorillas have to be excluded from the training set to avoid a politically incorrect match on a human, you can't really say the technology is very advanced yet.
(Score: 2) by bzipitidoo on Saturday December 07 2019, @06:21PM
What you're complaining about is standard media hype. They often dismiss as mundane and trivial things that really are impressive, because neither they nor most of their readers have the, uh, understanding to appreciate it. Just ignore it, and focus on the meat. Do computers "understand" that 2+2=4? What does it really mean to "understand", and what is intelligence? We don't have satisfying answers to those questions. But we can say that, no, dumb devices mechanically carrying out arithmetic do not understand math, any more than water understands which way downhill is, but they can do basic arithmetic at incredible speeds. It's astounding how rapidly computation speed has grown over the past several decades, and continues to grow today.
10 years ago, none of this image recognition was really possible. I certainly didn't know that breakthroughs in neural net computing were just around the corner, and that they would lead to big advances in image recognition, despite having done a little work in the area. And, note, it wasn't one big breakthrough, it was a confluence of several modest-seeming advances.
(Score: 1) by ze on Saturday December 07 2019, @09:16PM (4 children)
How do we know that "understanding" isn't anything more than a complex enough set of pattern-matching abilities? Humans are notorious pattern-matchers, and that's basically all we do when we start learning, along the way to developing understanding. So upon what basis can you assert that deep learning isn't just passing through the same stage on the same trajectory?
(Score: 1, Funny) by Anonymous Coward on Sunday December 08 2019, @01:36AM (2 children)
>> How do we know that "understanding" isn't anything more
>> than a complex enough set of pattern-matching abilities?
We can observe that if it were no more than that, we would be able to do nothing more than match patterns.
That cuts out a great deal of art, engineering, socialization, philosophy, fishing, civilization, anything requiring thought.
There are two layers, the matching of the patterns, and the thinking about what has (or hasn't yet) been matched in ways that transcend matching and algorithms.
For example, some(one|thing) that is good at pattern matching might say "I have identified an oil spot on this photo of driveway. It looks similar to these other types of oil spots."
Some(one|thing) with understanding might further say "I bet that red truck parked at the curb is leaking oil, which is why they aren't parking it in that driveway right now." Even if it, he, she, or they had no prior knowledge of the red truck or its condition. While the really good pattern matcher is still looking for other similar things. Hey! This paint stain is a 76.22% match. Wait, this grass stain ranks pretty high too.
There are great breakthroughs in machine pattern matching, a hard problem. There have not yet been great breakthroughs in machine understanding, an exponentially harder problem.
Someone who, faced with that, says that machine understanding has made great progress, is not, imo, all that great at pattern matching.
(Score: 1) by ze on Sunday December 08 2019, @08:36AM (1 child)
I think you're missing the point. There's simple low-level, first-order pattern matching, and then there's bigger, higher-level complexes of it.
To me your argument is tantamount to the naive "sure, computers can manipulate numbers, but numbers can't show you a beautiful sunset" kind of attitude I actually heard growing up (even though digital images were already a thing by then, just not commonplace yet), when in fact they can; it just requires a higher order of complexity to do so.
I suspect this is just another example of humans overestimating how magical and special our minds are, when they really are mostly just a collection of sloppy heuristics.
So what is the actual argument against my idea that our understanding may be nothing more than just bigger, more complex constructions of matching many concurrent and interrelated patterns?
Let's take your oil spot example: beyond the low-level recognition of the pattern of the oil spot, there are higher-order recognitions of the patterns of vehicles leaking oil being a cause of oil spots, a pattern of people preferring clean driveways, and so on. Each of these abstractions has a bunch of sub-patterns (that I've skipped detailing) making it up. It just looks like a bigger collection of correlated patterns to match, to me.
(Score: 1, Insightful) by Anonymous Coward on Sunday December 08 2019, @04:03PM
Although it seems to me that there are other processes involved that are not of the nature of pattern matching--something done with the patterns once matched or not matched--I do recognize that at finer and finer levels, this pattern matching process may approach (or even constitute) thought.
I don't think that's the case (I think there's more than one operation going on), but that's just my opinion, and I concede that it's an unfounded one at that.
(Score: 0) by Anonymous Coward on Monday December 09 2019, @10:58AM
By the mistakes they make and how many samples you need in order to teach them stuff.
I bet when you try to teach a kid, a dog or a crow the difference between a car and a bus, the sort of mistakes they make will show you that they actually have some understanding of the material. And they sure won't need zillions of photos and zillions of people telling them what is a bus, what is a car and what is a traffic light.
In contrast current AI is pretty stupid:
https://www.technologyreview.com/s/608911/is-ai-riding-a-one-trick-pony/ [technologyreview.com]
Lots of it seems to be too dependent on textures:
https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/ [quantamagazine.org]
https://www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed [theverge.com]
That so many AI "scientists" didn't even know why their stuff was making such mistakes shows you they're basically alchemists (mix certain stuff together in a certain way and it seems to work well enough BUT no real theory on why or how it works).
Here are quotes from some AI researchers which should tell you the state of current AI:
https://www.elsevier.com/connect/the-greatest-advances-in-ai-the-experts-view [elsevier.com]
Not saying current AI can't solve some problems well enough, but it really does seem to be at the alchemy stage at this point.
Maybe for an advance to be made we need to do stuff like fully figure out how the simplest creatures think:
https://www.ncbi.nlm.nih.gov/pubmed/24444111 [nih.gov]
The first brains were probably not so much for thinking but to solve the problems of interfacing and redundancy in multi-cellular organisms.
Single-celled creatures were already "thinking" way before neurons and brains developed. Most probably weren't very smart (they don't need to be) but lots of multi-cellular animals with brains aren't much smarter than the smarter single-celled stuff.
(Score: 5, Funny) by ikanreed on Saturday December 07 2019, @05:57PM
This is exactly how deep neural networks work [imgur.com]
(Score: 5, Interesting) by JoeMerchant on Saturday December 07 2019, @07:06PM (3 children)
Neural nets and other machine learning techniques basically boil down to the same principles used to least-squares fit a line to a set of 2D data points. Just: increase the dimensions of the solution equation from 2D to N-D, increase the polynomial complexity by the number of hidden layers, and increase the training data volume until it has enough samples to capture the nuance that you spent the better part of 10, maybe 20, years learning.
You (human meatbag) have got a neural net with ~100 billion elements arrayed roughly 6-16 layers deep, depending. It's not surprising that smaller neural nets can do some amazing things; think about how a wasp can fly, establish territory, identify and harass invaders, infest prey with parasitic eggs, build nests, mate, reproduce, etc., all with fewer than 5,000 neurons. It's also not surprising that current image recognition neural nets aren't quite up to speed compared to an average six-year-old - the six-year-old's net has had more input training and has many, many orders of magnitude more neurons to work with.
Still, those arbitrary image recognition tests from 2008 that nobody's algorithms in the world could get better than a 25% error rate on back then? Yeah, the machines are pretty well slaying those now.
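The least-squares analogy above can be made concrete. The closed-form line fit below is, in effect, the zero-hidden-layer version of "training": pick the parameters that minimize squared error over the data (a sketch with made-up sample points):

```python
def fit_line(points):
    """Closed-form least-squares fit of y = m*x + c: the simplest possible
    'model', trained by minimizing squared error over the data."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    # Standard normal-equation solution for slope and intercept.
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    c = (sy - m * sx) / n
    return m, c

# Noisy samples of y = 2x + 1; the fit recovers roughly those coefficients.
pts = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
m, c = fit_line(pts)
print(round(m, 2), round(c, 2))
```

Swap the two scalar parameters for millions of weights, the line for a stack of nonlinear layers, and the closed-form solution for iterative gradient descent, and you have the modern recipe.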
🌻🌻 [google.com]
(Score: 0) by Anonymous Coward on Monday December 09 2019, @11:16AM (2 children)
So can modern tech build a brain with 5,000 neurons that does everything a wasp's brain does? The brain, body, and environment can be virtual. I want to see the wasp dodge attacks, avoid enemies, find mates, prey, and shelter. It's "only" 5,000 neurons, right?
(Score: 2) by takyon on Monday December 09 2019, @11:37AM (1 child)
Probably.
https://en.wikipedia.org/wiki/Brain_simulation#Caenorhabditis_elegans_(roundworm) [wikipedia.org]
One thing is that just because we have the computing and storage resources doesn't mean that our simulations are particularly accurate. But at some point, these simulations will be "perfect" and done faster than real-time. The human brain could be targeted for slower than real-time simulations.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 0) by Anonymous Coward on Wednesday December 11 2019, @04:11PM
There's this too: http://openworm.org/ [openworm.org]
(Score: 3, Interesting) by FatPhil on Saturday December 07 2019, @10:42PM
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 0) by Anonymous Coward on Sunday December 08 2019, @02:15AM
Love to get one of these built into radios and televisions to recognize repetitive content and forward it to me only once.