
posted by Fnord666 on Sunday July 07 2019, @05:10AM   Printer-friendly
from the ¯\_(ツ)_/¯ dept.

Submitted via IRC for Bytram

That this AI can simulate universes in 30ms is not the scary part. It's that its creators don't know why it works so well

Neural networks can build 3D simulations of the universe in milliseconds, compared to days or weeks when using traditional supercomputing methods, according to new research.

To study how stuff interacts in space, scientists typically build computational models to simulate the cosmos. One simulation approach – known as N-body simulation – can be used to recreate phenomena ranging from small-scale events, such as the collapse of molecular clouds into stars, to giant systems, such as the whole universe, at varying levels of accuracy and resolution.

The individual interactions between each of the millions or billions of particles or entities in these models have to be repeatedly calculated to track their motion over time. This demands a huge amount of compute power, and it takes a supercomputer days or weeks to return the results.
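
For a sense of where that cost comes from, here is a minimal, illustrative direct-summation N-body step in Python (a toy sketch with made-up arrays and units, not the scheme used by the codes in the paper): every particle's pull on every other particle has to be recomputed at each time step.

    import numpy as np

    def nbody_step(pos, vel, mass, dt, G=1.0, eps=1e-3):
        # One direct-summation step: O(N^2) pairwise gravitational forces.
        # pos, vel are (N, 3) arrays, mass is (N,); eps softens close encounters.
        diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # separation vectors r_ij
        dist3 = (np.sum(diff**2, axis=-1) + eps**2) ** 1.5     # softened |r_ij|^3
        acc = G * np.sum(mass[np.newaxis, :, np.newaxis] * diff
                         / dist3[..., np.newaxis], axis=1)     # a_i = G * sum_j m_j r_ij / |r_ij|^3
        vel = vel + acc * dt
        pos = pos + vel * dt
        return pos, vel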

For impatient boffins, there's now some good news. A group of physicists, led by eggheads at the Center for Computational Astrophysics at the Flatiron Institute in New York, USA, decided to see if neural networks could speed things up a bit.

[...] The accuracy of the neural network was judged by how similar its outputs were to those produced by two more traditional N-body simulation systems, FastPM and 2LPT, when all three were given the same inputs. When D3M was tasked with producing 1,000 simulations from 1,000 sets of input data, it had a relative error of 2.8 per cent compared to FastPM, and 9.3 per cent compared to 2LPT, for the same inputs. That's not bad, considering the model takes just 30 milliseconds to crank out a simulation. Not only does that save time, it is also cheaper, since less compute power is needed.
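
As a rough sketch of how such a comparison could be scored (the function and inputs below are illustrative; the paper defines its own error metric), one might average a per-particle relative error over the 1,000 test simulations:

    import numpy as np

    def mean_relative_error(pred, ref, eps=1e-12):
        # pred, ref: per-particle displacement arrays of the same shape.
        # Returns the mean of |pred - ref| / |ref| over all particles.
        num = np.linalg.norm(pred - ref, axis=-1)
        den = np.linalg.norm(ref, axis=-1) + eps
        return float(np.mean(num / den))

    # Hypothetical usage over 1,000 paired runs:
    # score = np.mean([mean_relative_error(d3m[i], fastpm[i]) for i in range(1000)])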

To their surprise, the researchers also noticed that D3M seemed to be able to produce simulations of the universe from conditions that weren't specifically included in the training data. During inference tests, the team tweaked input variables such as the amount of dark matter in the virtual universes, and the model still managed to spit out accurate simulations despite not being specifically trained for these changes.

"It's like teaching image recognition software with lots of pictures of cats and dogs, but then it's able to recognize elephants," said Shirley Ho, first author of the paper and a group leader at the Flatiron Institute. "Nobody knows how it does this, and it's a great mystery to be solved.

"We can be an interesting playground for a machine learner to use to see why this model extrapolates so well, why it extrapolates to elephants instead of just recognizing cats and dogs. It's a two-way street between science and deep learning."

The source code for the neural networks can be found here. ®


Original Submission

 
  • (Score: 5, Informative) by sshelton76 on Sunday July 07 2019, @07:18AM (1 child)

    by sshelton76 (7978) on Sunday July 07 2019, @07:18AM (#864049)

    The scariest part is that they don't know why this works and yet even a lay person could easily understand what is happening here.
    They trained the NN to be a bulk statistical model and that model happens to fit the data well. That's it.

    Statistics are strange in that if something has a 99.8% probability of occurring then it's very likely to occur, but if something has a 0.2% probability of occurring, it is very unlikely to have occurred.

    Now let's say event A has a 0.2% probability of occurring on its own, but if event B occurs, then it means that A MUST have occurred despite being statistically unlikely, so you update the model to act as though A occurred but was unobserved. Any other effects that might proceed from A then begin to follow as normal.

    This is called a hidden variable, and finding hidden variables is exactly what NNs excel at.
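
    As a toy illustration of that update (all numbers made up): by Bayes' rule, observing a B that can only follow from A forces the probability of A to 1, no matter how unlikely A was a priori.

        def posterior_A_given_B(p_A, p_B_given_A, p_B_given_not_A):
            # Bayes' rule: P(A|B) = P(B|A)P(A) / [P(B|A)P(A) + P(B|~A)P(~A)]
            num = p_B_given_A * p_A
            den = num + p_B_given_not_A * (1.0 - p_A)
            return num / den

        # A has only a 0.2% prior, but B never happens without A, so P(A|B) = 1.
        print(posterior_A_given_B(p_A=0.002, p_B_given_A=0.9, p_B_given_not_A=0.0))  # -> 1.0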

    So their NN is able to model universes because the laws of physics have all kinds of hidden variables and correlations. When we run calculations by hand, we have to manually account for these things, and that's why the math and the computational space become so complex. But give a NN input A, have it correlate with output B, rinse and repeat enough times, and the network will begin taking statistical shortcuts, and those shortcuts will be "good enough" to come very close to calculations using the full suite of equations and steps. This is because physics is just bulk statistics, and NNs are statistical machines.
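
    In code, that "rinse, repeat" loop is just surrogate modelling: generate (input, output) pairs from the expensive calculation, fit a cheap model to them, then query the model instead. A minimal sketch with a stand-in "simulator" and a tiny hand-rolled network (everything here is made up, and far simpler than the paper's D3M):

        import numpy as np

        rng = np.random.default_rng(0)

        def expensive_simulator(x):
            # Stand-in for a slow physics code: some nonlinear mapping of the inputs.
            return np.sin(3 * x) + 0.5 * x**2

        # Run the "simulator" offline to build training pairs (input A -> output B).
        X = rng.uniform(-1, 1, size=(512, 1))
        Y = expensive_simulator(X)

        # One-hidden-layer network trained by gradient descent to mimic the simulator.
        W1 = rng.normal(0, 0.5, size=(1, 32)); b1 = np.zeros(32)
        W2 = rng.normal(0, 0.5, size=(32, 1)); b2 = np.zeros(1)
        lr = 0.05
        for step in range(2000):
            H = np.tanh(X @ W1 + b1)    # hidden activations
            P = H @ W2 + b2             # the network's cheap prediction
            err = P - Y
            # Backpropagate the squared-error loss and update the weights.
            gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
            dH = (err @ W2.T) * (1 - H**2)
            gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
            W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

        # Once trained, the network answers almost instantly instead of re-running the simulator.
        x_new = np.array([[0.3]])
        print(expensive_simulator(x_new), np.tanh(x_new @ W1 + b1) @ W2 + b2)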

  • (Score: 1, Insightful) by Anonymous Coward on Sunday July 07 2019, @05:12PM

    by Anonymous Coward on Sunday July 07 2019, @05:12PM (#864165)

    Well, if they actually knew how it worked and explained it, they probably wouldn't have got as much publicity. ;)