posted by Fnord666 on Monday February 24 2020, @09:52AM   Printer-friendly
from the cutting-through-the-noise dept.

Mathematicians propose new way of using neural networks to work with noisy, high-dimensional data:

Mathematicians from RUDN University and the Free University of Berlin have proposed a new approach to studying the probability distributions of observed data using artificial neural networks. The new approach copes better with so-called outliers, i.e., input data points that deviate significantly from the overall sample. The article was published in the journal Artificial Intelligence.

Recovering the probability distribution of observed data with artificial neural networks is a central task in machine learning. The distribution not only allows us to predict the behaviour of the system under study, but also to quantify the uncertainty attached to those predictions. The main difficulty is that, as a rule, only the data themselves are observed; their exact probability distributions are not available. Bayesian and similar approximate methods are used to get around this, but they increase the complexity of the neural network and therefore make its training harder.
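
As a concrete illustration of the uncertainty-quantification idea, here is a minimal sketch (not the authors' code; the architecture, data, and training settings are made-up assumptions) of a network with ordinary deterministic weights that outputs a mean and a variance for each input and is trained with the Gaussian negative log-likelihood, so every forecast comes with its own uncertainty:

    # Hypothetical sketch, not the paper's implementation: a deterministic
    # network predicts a mean and a log-variance per input; minimising the
    # Gaussian negative log-likelihood attaches an uncertainty to each forecast.
    import torch
    import torch.nn as nn

    net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 2))

    def gaussian_nll(out, y):
        mu, log_var = out[:, :1], out[:, 1:]
        # 0.5 * [log sigma^2 + (y - mu)^2 / sigma^2], up to an additive constant
        return 0.5 * (log_var + (y - mu) ** 2 / log_var.exp()).mean()

    # Synthetic data whose noise level grows with |x| (heteroscedastic)
    x = torch.linspace(-3, 3, 256).unsqueeze(1)
    y = torch.sin(x) + 0.1 * (1 + x.abs()) * torch.randn_like(x)

    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(2000):
        opt.zero_grad()
        gaussian_nll(net(x), y).backward()
        opt.step()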

The mathematicians from RUDN University and the Free University of Berlin instead used deterministic weights in their neural networks, which helps to overcome the limitations of the Bayesian methods. They derived a formula that correctly estimates the variance of the distribution of the observed data. The proposed model was tested on both synthetic and real data, with and without outliers, and restored the probability distributions with previously unachievable accuracy.
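
For intuition about why ordinary variance estimates break down on data with outliers, here is an illustrative sketch (not the authors' formula; the paper develops gradient conjugate priors, while this merely contrasts a Gaussian fit with a heavy-tailed Student-t fit on contaminated data):

    # Illustrative only: a few extreme outliers badly inflate the Gaussian
    # maximum-likelihood estimate of the spread, while a heavy-tailed
    # Student-t fit stays close to the spread of the clean bulk of the data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    clean = rng.normal(loc=0.0, scale=1.0, size=1000)
    outliers = rng.normal(loc=0.0, scale=20.0, size=20)   # ~2% gross outliers
    data = np.concatenate([clean, outliers])

    print("Gaussian MLE std: %.2f" % data.std())   # inflated well above 1
    df, loc, scale = stats.t.fit(data)             # robust, heavy-tailed fit
    print("Student-t scale:  %.2f" % scale)        # close to the true 1.0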

Pavel Gurevich et al., Gradient conjugate priors and multi-layer neural networks, Artificial Intelligence (2019). DOI: 10.1016/j.artint.2019.103184


Original Submission

  • (Score: 2, Funny) by shrewdsheep on Monday February 24 2020, @12:32PM (6 children)

    by shrewdsheep (5215) on Monday February 24 2020, @12:32PM (#961795)

    The article is a rather technical piece dealing with how the posterior distribution is approximated in Bayesian models. The authors propose a modification of a standard iterative procedure, which they praise in the usual academic style. The neural-network part is simple (or poor, depending on your POV) bait, serving to whet the appetite of the target journal.

    Maybe the submitter can elaborate?

    • (Score: 3, Interesting) by opinionated_science on Monday February 24 2020, @12:53PM (5 children)

      by opinionated_science (4031) on Monday February 24 2020, @12:53PM (#961800)

      I read the paper and my first thought was that they were generalising something we *currently* do by empirical means.

      It turns out that, just as machine learning is not AI, using NNs on high-dimensional data is really not that new.

      What's new is the astonishing hardware you can buy, and the software that runs on it, which together can *solve* some of these problems in useful time.

      If you want to look for the very best applied maths, look at physics and the work to generate simulations.

      Some of that maths was done 200 years ago, by minds that are hard to fathom in today's world.

      This is the problem with the academic pyramid scheme. To justify your position you have to ignore everyone else's attempt to stand in the same intellectual space.
      Citing a dead person is politically unreliable...;-)

      • (Score: 0) by Anonymous Coward on Monday February 24 2020, @01:16PM (3 children)

        by Anonymous Coward on Monday February 24 2020, @01:16PM (#961809)

        I can't access the paper, only the abstract. I wonder how well their method works in practice, because their key assumption, that the ground-truth distribution is normal, does not hold for many real-world datasets. Are you able to share a copy of the paper somehow?
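
        To put a number on that (a made-up illustration, nothing from the paper): fit a normal distribution to skewed, roughly lognormal data and the fitted model assigns a large chunk of probability to values that can never occur:

            # Hypothetical demo: real-world data are often skewed (here,
            # lognormal, which is strictly positive). A normal fit by moments
            # then puts around a fifth of its mass below zero.
            import numpy as np
            from scipy import stats

            rng = np.random.default_rng(1)
            data = rng.lognormal(mean=0.0, sigma=1.0, size=10_000)

            mu, sigma = data.mean(), data.std()
            p_neg = stats.norm.cdf(0.0, loc=mu, scale=sigma)
            print("Normal fit: mu=%.2f sigma=%.2f" % (mu, sigma))
            print("Mass assigned to impossible negative values: %.0f%%" % (100 * p_neg))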

        • (Score: 1) by shrewdsheep on Monday February 24 2020, @02:29PM (2 children)

          by shrewdsheep (5215) on Monday February 24 2020, @02:29PM (#961829)

          duckduckgo the title, and you get: https://arxiv.org/pdf/1802.02643.pdf [arxiv.org]

          • (Score: 0) by Anonymous Coward on Monday February 24 2020, @02:35PM (1 child)

            by Anonymous Coward on Monday February 24 2020, @02:35PM (#961832)

            Thanks. Recently it seems impossible to access arXiv from Japan. For some reason, it keeps giving me an access error: PR_END_OF_FILE_ERROR.
            I'll keep trying anyway...

            • (Score: 0) by Anonymous Coward on Monday February 24 2020, @03:24PM

              by Anonymous Coward on Monday February 24 2020, @03:24PM (#961854)

              It is loading a bit slowly for me in the USA.

              Here is the abstract page:

              https://arxiv.org/abs/1802.02643 [arxiv.org]

      • (Score: 0) by Anonymous Coward on Tuesday February 25 2020, @05:41AM

        by Anonymous Coward on Tuesday February 25 2020, @05:41AM (#962243)

        Physicists are fuckin' pricks too. Don't expect many truth seekers in that space either.
