
posted by Fnord666 on Saturday December 07 2019, @12:53PM   Printer-friendly
from the machine-learning dept.

Submitted via IRC for SoyCow1337

How neural networks work—and why they've become a big business

The last decade has seen remarkable improvements in the ability of computers to understand the world around them. Photo software automatically recognizes people's faces. Smartphones transcribe spoken words into text. Self-driving cars recognize objects on the road and avoid hitting them.

Underlying these breakthroughs is an artificial intelligence technique called deep learning. Deep learning is based on neural networks, a type of data structure loosely inspired by networks of biological neurons. Neural networks are organized in layers, with the outputs of one layer feeding into the inputs of the next.
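
To make the "layers" idea concrete, here is a minimal sketch (not from TFA; the layer sizes, random weights, and activation functions are arbitrary choices) of a tiny two-layer feedforward network in Python/NumPy, in which the first layer's output becomes the second layer's input:

```python
# Minimal sketch (not from TFA): a tiny two-layer feedforward network.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)            # common hidden-layer activation

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))      # squashes the output into (0, 1)

rng = np.random.default_rng(0)

# Layer 1: 3 inputs -> 4 hidden units. Layer 2: 4 hidden units -> 1 output.
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

x = np.array([0.2, -1.0, 0.5])           # one input example
hidden = relu(W1 @ x + b1)               # output of the first layer...
output = sigmoid(W2 @ hidden + b2)       # ...is the input to the second layer
print(output)
```

Training amounts to adjusting W1, b1, W2, and b2 so the outputs match labeled examples; the forward pass above only shows the layered structure the article describes.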

Computer scientists have been experimenting with neural networks since the 1950s. But two big breakthroughs—one in 1986, the other in 2012—laid the foundation for today's vast deep learning industry. The 2012 breakthrough—the deep learning revolution—was the discovery that we can get dramatically better performance out of neural networks with not just a few layers but with many. That discovery was made possible thanks to the growing amount of both data and computing power that had become available by 2012.

This feature offers a primer on neural networks. We'll explain what neural networks are, how they work, and where they came from. And we'll explore why—despite many decades of previous research—neural networks have only really come into their own since 2012.


Original Submission

  • (Score: 3, Insightful) by SomeGuy on Saturday December 07 2019, @01:24PM (4 children)

    by SomeGuy (5632) on Saturday December 07 2019, @01:24PM (#929379)

    It still boils down to throwing shit at a wall and seeing what sticks.

    While that might be good enough for telling jokes over smart speakers, promising cake, advertising, septic tank cleaning, and web development, I would not want that doing anything critical.

    • (Score: 1, Insightful) by Anonymous Coward on Saturday December 07 2019, @02:54PM (2 children)

      by Anonymous Coward on Saturday December 07 2019, @02:54PM (#929395)

      I feel like what's called AI or a Neural Network today kind of boils down to fancy statistics. Like throwing shit at the wall using a method that's statistically likely to stick in the pattern you were looking for. If it were an actual simulation of networks of nodes that truly behaved like neurons, then we'd get something simulating true intelligence and thought. Those might kill all the humans, but would be less useful for businesses.

      • (Score: 0) by Anonymous Coward on Saturday December 07 2019, @05:23PM (1 child)

        by Anonymous Coward on Saturday December 07 2019, @05:23PM (#929444)

        Like throwing shit at the wall using a method that's statistically likely to stick in the pattern you were looking for.

        Even better: Throw shit at a wall and then train the NN to fit the resulting patterns until you can write the paper.

        • (Score: 0) by Anonymous Coward on Sunday December 08 2019, @05:29AM

          by Anonymous Coward on Sunday December 08 2019, @05:29AM (#929637)

          I feel the same way.
          NNs aren't really a "solved formula" or a definitive algorithm, so it feels like cheating.
          But by just considering a NN a complete machine, like a car engine or a tape recorder, my perception moves it from the strict logical and mathematical realm into the "device" realm.
          So who is to say that what a NN sees is wrong? If we look at the raw NN data it looks nothing like what we see, yet the NN still "guesses" correctly.
          One "proof" that our own (human) visual processing is flawed too can be seen (*sigh*) when we're presented with so-called optical illusions.
          Maybe today's NNs are just a tool, like a microscope or telescope, to explore our (probably) rigid, trained way of seeing things.
          I think using a well-trained NN and looking at the raw data (or what the NN sees) will show us how rigid our own view of the world is ... how we are hardwired and how we have been trained by our environment (mostly society?) to ... see things.
          Lastly: there might be actual buttons (visual cues) hardwired into our brains, and if so, the first to discover them might well be able to "press" these buttons to bypass conscious/aware decisions altogether (think: advertisement or ... politics?).
          The positive side might be that NNs could help lift the "veil of accumulated garbage aesthetics" from our eyes, so that we learn to "look correctly" and pay attention to things that really matter, instead of things that merely ought to matter.

    • (Score: 3, Informative) by Anonymous Coward on Saturday December 07 2019, @03:21PM

      by Anonymous Coward on Saturday December 07 2019, @03:21PM (#929405)

      > throwing shit at a wall and seeing what sticks
      This.

      TFA mentions, "Computer scientists have been experimenting with neural networks since the 1950s." For anyone interested, one early instance was called the "Perceptron". From https://en.wikipedia.org/wiki/Perceptron [wikipedia.org]:

      The perceptron algorithm was invented in 1958 at the Cornell Aeronautical Laboratory by Frank Rosenblatt,[3] funded by the United States Office of Naval Research.[4]

      The perceptron was intended to be a machine, rather than a program, and while its first implementation was in software for the IBM 704, it was subsequently implemented in custom-built hardware as the "Mark 1 perceptron". This machine was designed for image recognition: it had an array of 400 photocells, randomly connected to the "neurons". Weights were encoded in potentiometers, and weight updates during learning were performed by electric motors.[2]:193

      In a 1958 press conference organized by the US Navy, Rosenblatt made statements about the perceptron that caused a heated controversy among the fledgling AI community; based on Rosenblatt's statements, The New York Times reported the perceptron to be "the embryo of an electronic computer that [the Navy] expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."[4]

      Although the perceptron initially seemed promising, it was quickly proved that perceptrons could not be trained to recognise many classes of patterns. This caused the field of neural network research to stagnate for many years, before it was recognised that a feedforward neural network with two or more layers (also called a multilayer perceptron) had far greater processing power than perceptrons with one layer (also called a single layer perceptron). Single layer perceptrons are only capable of learning linearly separable patterns; in 1969 a famous book entitled Perceptrons by Marvin Minsky and Seymour Papert showed that it was impossible for these classes of network to learn an XOR function. It is often believed (incorrectly) that they also conjectured that a similar result would hold for a multi-layer perceptron network. However, this is not true, as both Minsky and Papert already knew that multi-layer perceptrons were capable of producing an XOR function. (See the page on Perceptrons (book) for more information.) Nevertheless, the often-miscited Minsky/Papert text caused a significant decline in interest and funding of neural network research. It took ten more years until neural network research experienced a resurgence in the 1980s. This text was reprinted in 1987 as "Perceptrons - Expanded Edition" where some errors in the original text are shown and corrected.
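
      For illustration (my own sketch, not part of the quoted Wikipedia text), the classic perceptron learning rule in Python/NumPy converges on a linearly separable problem (OR) but can never get XOR right, which is exactly the single-layer limitation described above:

      ```python
      # Illustrative sketch: the perceptron update rule succeeds on OR, fails on XOR.
      import numpy as np

      def train_perceptron(X, y, epochs=100, lr=0.1):
          w, b = np.zeros(X.shape[1]), 0.0
          for _ in range(epochs):
              for xi, target in zip(X, y):
                  pred = 1 if xi @ w + b > 0 else 0
                  w += lr * (target - pred) * xi   # classic perceptron weight update
                  b += lr * (target - pred)
          return [1 if xi @ w + b > 0 else 0 for xi in X]

      X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
      print("OR :", train_perceptron(X, np.array([0, 1, 1, 1])))  # learns [0, 1, 1, 1]
      print("XOR:", train_perceptron(X, np.array([0, 1, 1, 0])))  # always wrong on at least one input
      ```

      Adding a hidden layer (a multilayer perceptron) removes that limitation, which is the resurgence the quoted text goes on to describe.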

  • (Score: 2, Insightful) by Anonymous Coward on Saturday December 07 2019, @04:02PM (7 children)

    by Anonymous Coward on Saturday December 07 2019, @04:02PM (#929421)

    >> The last decade has seen remarkable improvements
    >> in the ability of computers to understand the world
    >> around them.

    The state of the art has gone from "zero understanding at all" to "zero understanding at all with improved pattern matching." Understanding would imply, well, understanding. That is still vaporware. So no.

    >> Photo software automatically recognizes
    >> people's faces.

    Without understanding them, without understanding what a face is.

    >> Smartphones transcribe spoken words into text.

    Without an understanding of language and its underlying parts.

    >> Self-driving cars recognize objects on the road
    >> and avoid hitting them.

    Without self-awareness, without any theoretical thoughts, without... understanding.

    Pigeons can be taught to recognize words [livescience.com], but the body of pigeon literature remains notably nonexistent, because understanding of those words would be required. There isn't any. A similar thing is happening with computer algorithms.

    Look, it's okay to say "Pattern matching is getting really good" without delusional statements like "Computers can now understand things much better," implying that they understand at all, which they don't.

    I would, however, love to read a responsible opposing view.

    • (Score: 0) by Anonymous Coward on Saturday December 07 2019, @05:27PM

      by Anonymous Coward on Saturday December 07 2019, @05:27PM (#929447)

      Without self-awareness, without any theoretical thoughts, without... understanding.

      And even without much recognition quality. If pictures of gorillas have to be excluded from the training set to avoid coming to a politically incorrect match on a human, you can't really say the technology is very advanced yet.

    • (Score: 2) by bzipitidoo on Saturday December 07 2019, @06:21PM

      by bzipitidoo (4388) on Saturday December 07 2019, @06:21PM (#929462) Journal

      What you're complaining about is standard media hype. They often dismiss as mundane and trivial things that really are impressive, because neither they nor most of their readers have the, uh, understanding to appreciate it. Just ignore it, and focus on the meat. Do computers "understand" that 2+2=4? What does it really mean to "understand", and what is intelligence? We don't have satisfying answers to those questions. But we can say that, no, dumb devices mechanically carrying out arithmetic do not understand math, any more than water understands which way downhill is, but they can do basic arithmetic at incredible speeds. It's astounding how rapidly computation speed has grown over the past several decades, and continues to grow today.

      Ten years ago, none of this image recognition was really possible. I certainly didn't know that breakthroughs in neural net computing were just around the corner, or that they would lead to big advances in image recognition, despite having done a little work in the area. And note, it wasn't one big breakthrough; it was a confluence of several modest-seeming advances.

    • (Score: 1) by ze on Saturday December 07 2019, @09:16PM (4 children)

      by ze (8197) on Saturday December 07 2019, @09:16PM (#929517)

      How do we know that "understanding" isn't anything more than a complex enough set of pattern-matching abilities? Humans are notorious pattern-matchers, and that's basically all we do when we start learning, along the way to developing understanding. So upon what basis can you assert that deep learning isn't just passing through the same stage on the same trajectory?

      • (Score: 1, Funny) by Anonymous Coward on Sunday December 08 2019, @01:36AM (2 children)

        by Anonymous Coward on Sunday December 08 2019, @01:36AM (#929583)

        >> How do we know that "understanding" isn't anything more
        >> than a complex enough set of pattern-matching abilities?

        We can observe that if it were no more than that, we would be able to do nothing more than match patterns.

        That cuts out a great deal of art, engineering, socialization, philosophy, fishing, civilization, anything requiring thought.

        There are two layers: the matching of the patterns, and the thinking about what has (or hasn't yet) been matched in ways that transcend matching and algorithms.

        For example, some(one|thing) that is good at pattern matching might say "I have identified an oil spot in this photo of a driveway. It looks similar to these other types of oil spots."

        Some(one|thing) with understanding might further say "I bet that red truck parked at the curb is leaking oil, which is why they aren't parking it in that driveway right now." Even if it, he, she, or they had no prior knowledge of the red truck or its condition. While the really good pattern matcher is still looking for other similar things. Hey! This paint stain is a 76.22% match. Wait, this grass stain ranks pretty high too.

        There are great breakthroughs in machine pattern matching, a hard problem. There have not yet been great breakthroughs in machine understanding, an exponentially harder problem.

        Someone who, faced with that, says that machine understanding has made great progress, is not, imo, all that great at pattern matching.

        • (Score: 1) by ze on Sunday December 08 2019, @08:36AM (1 child)

          by ze (8197) on Sunday December 08 2019, @08:36AM (#929658)

          I think you're missing the point. There's simple low-level, first-order pattern matching, and then there's bigger, higher-level complexes of it.
          To me your argument is tantamount to the naive "sure, computers can manipulate numbers, but numbers can't show you a beautiful sunset" kind of attitude I actually heard growing up (even though digital images were already a thing by then, just not commonplace yet). In fact they can; it just requires a higher order of complexity to do so.
          I suspect this is just another example of humans overestimating how magical and special our minds are, when they really are mostly just a collection of sloppy heuristics.
          So what is the actual argument against my idea that our understanding may be nothing more than just bigger, more complex constructions of matching many concurrent and interrelated patterns?
          Let's take your oil spot example: beyond the low-level recognition of the pattern of the oil spot, there are higher-order recognitions of the patterns of vehicles leaking oil being a cause of oil spots, of people preferring clean driveways, and so on. Each of these abstractions has a bunch of sub-patterns (that I've skipped detailing) making it up. It just looks like a bigger collection of correlated patterns to match, to me.

          • (Score: 1, Insightful) by Anonymous Coward on Sunday December 08 2019, @04:03PM

            by Anonymous Coward on Sunday December 08 2019, @04:03PM (#929732)

            Although it seems to me that there are other processes involved that are not of the nature of pattern matching--something done with the patterns once matched or not matched--I do recognize that at finer and finer levels, this pattern matching process may approach (or even constitute) thought.

            I don't think that's the case (I think there's more than one operation going on), but that's just my opinion, and I concede that it's an unfounded one at that.

      • (Score: 0) by Anonymous Coward on Monday December 09 2019, @10:58AM

        by Anonymous Coward on Monday December 09 2019, @10:58AM (#930000)

        By the mistakes they make and how many samples you need in order to teach them stuff.

        I bet when you try to teach a kid, a dog or a crow the difference between a car and a bus, the sort of mistakes they make will show you that they actually have some understanding of the material. And they sure won't need zillions of photos and zillions of people telling them what is a bus, what is a car and what is a traffic light.

        In contrast current AI is pretty stupid:
        https://www.technologyreview.com/s/608911/is-ai-riding-a-one-trick-pony/ [technologyreview.com]

        Lots of it seems to be too dependent on textures:
        https://www.quantamagazine.org/where-we-see-shapes-ai-sees-textures-20190701/ [quantamagazine.org]
        https://www.theverge.com/2017/11/2/16597276/google-ai-image-attacks-adversarial-turtle-rifle-3d-printed [theverge.com]
        That so many AI "scientists" didn't even know why their stuff was making such mistakes shows you they're basically alchemists (mix certain stuff together in a certain way and it seems to work well enough BUT no real theory on why or how it works).

        Here are quotes from some AI researchers which should tell you the state of current AI:
        https://www.elsevier.com/connect/the-greatest-advances-in-ai-the-experts-view [elsevier.com]

        "The greatest advances are probably the ones we haven’t done yet. At the moment we depend too much on the stochastic/probabilistic approach to AI."

        "It's like asking me in 1600 what the greatest advance in chemistry was. I don't know – in many ways in AI we're still trying to do alchemy."

        Not saying current AI can't solve some problems well enough, but it really does seem to be at the alchemy stage at this point.

        Maybe for an advance to be made, we need to do stuff like fully figure out how the simplest stuff thinks:
        https://www.ncbi.nlm.nih.gov/pubmed/24444111 [nih.gov]

        How a P. chromatophora cell makes this shell is still a mystery. We examined shell construction process in P. chromatophora in detail using time-lapse video microscopy. The new shell was constructed by a specialized pseudopodium that laid out each scale into correct position, one scale at a time. The present study inferred that the sequence of scale production and secretion was well controlled.

        The first brains were probably not so much for thinking as for solving the problems of interfacing and redundancy in multi-cellular organisms.
        Single-celled creatures were already "thinking" way before neurons and brains developed. Most probably weren't very smart (they don't need to be that smart), but lots of multi-cellular animals with brains aren't much smarter than the smarter single-celled stuff.

  • (Score: 5, Funny) by ikanreed on Saturday December 07 2019, @05:57PM

    by ikanreed (3164) Subscriber Badge on Saturday December 07 2019, @05:57PM (#929455) Journal
  • (Score: 5, Interesting) by JoeMerchant on Saturday December 07 2019, @07:06PM (3 children)

    by JoeMerchant (3937) on Saturday December 07 2019, @07:06PM (#929479)

    Neural nets and other machine learning techniques basically boil down to the same principles used to least-squares fit a line to a set of 2D data points. Just: increase the dimensions of the solution equation from 2D to N-D, increase the polynomial complexity by the number of hidden layers, and increase the training data volume until it has enough samples to capture the nuance that you spent the better part of 10 maybe 20 years learning.
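
    To illustrate the analogy (a toy sketch of my own, not JoeMerchant's code), here is gradient descent fitting y = w*x + b to noisy 2D points by minimizing squared error:

    ```python
    # Toy sketch: least-squares line fitting by gradient descent.
    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, size=100)
    y = 3.0 * x + 0.5 + rng.normal(scale=0.1, size=100)   # noisy "training data"

    w, b, lr = 0.0, 0.0, 0.1
    for _ in range(500):
        err = (w * x + b) - y
        w -= lr * np.mean(err * x)   # gradient of the mean squared error w.r.t. w
        b -= lr * np.mean(err)       # gradient of the mean squared error w.r.t. b

    print(w, b)   # ends up near the true 3.0 and 0.5
    ```

    A neural net swaps the single straight line for many stacked nonlinear transformations and fits millions of parameters instead of two, but the fit-by-reducing-error loop is the same basic idea.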

    You (human meatbag) have got a neural net with ~100 billion elements arrayed roughly 6-16 layers deep, depending. It's not surprising that smaller neural nets can do some amazing things; think about how a wasp can fly, establish territory, identify and harass invaders, infest prey with parasitic eggs, build nests, mate, reproduce, etc., all with fewer than 5000 neurons. It's also not surprising that current image recognition neural nets aren't quite up to speed compared to an average six-year-old - the six-year-old's net has had more input training and has many, many orders of magnitude more neurons to work with.

    Still, those arbitrary image recognition tests from 2008 that nobody's algorithms in the world could get better than a 25% error rate on back then? Yeah, the machines are pretty well slaying those now.

    --
    🌻🌻 [google.com]
    • (Score: 0) by Anonymous Coward on Monday December 09 2019, @11:16AM (2 children)

      by Anonymous Coward on Monday December 09 2019, @11:16AM (#930002)
      I think a lot of the brain is used for interfacing and redundancy. As you point out, some wasps are fairly smart with around 5000 neurons. Crows and parrots, with brains much smaller than mammals', are pretty smart too.

      So can modern tech build a brain with 5000 neurons that does everything a wasp's brain does? The brain, body and environment can be virtual. I want to see the wasp dodge attacks, avoid enemies, etc., find mates, prey and shelter. It's "only" 5000 neurons, right?
  • (Score: 3, Interesting) by FatPhil on Saturday December 07 2019, @10:42PM

    by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Saturday December 07 2019, @10:42PM (#929544) Homepage
    https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 0) by Anonymous Coward on Sunday December 08 2019, @02:15AM

    by Anonymous Coward on Sunday December 08 2019, @02:15AM (#929602)

    I'd love to get one of these built into radios and televisions to recognize repetitive content and forward it to me only once.
