Submitted via IRC for SoyCow8317
Researchers have used a computational neural network, a form of artificial intelligence, to "learn" how a nanoparticle's structure affects the way it scatters different colors of light, based on thousands of training examples. The approach may help physicists tackle research problems orders of magnitude faster than existing methods.
Having learned the relationship between structure and scattering, the program can essentially be run backward to design a particle with a desired set of light-scattering properties -- a process called inverse design.
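[For the curious, here is a minimal sketch of the idea -- not the authors' published code or architecture -- assuming PyTorch, with illustrative layer counts and network sizes: a small fully connected network maps shell thicknesses to a discretized scattering spectrum, and "running it backward" becomes optimizing the input thicknesses toward a target spectrum while the trained weights stay frozen.]

import torch
import torch.nn as nn

N_LAYERS = 8          # hypothetical number of concentric shells
N_WAVELENGTHS = 200   # hypothetical number of spectral sample points

# Forward model: shell thicknesses -> scattering spectrum (illustrative sizes).
forward_model = nn.Sequential(
    nn.Linear(N_LAYERS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_WAVELENGTHS),
)
# In practice this network would first be trained on thousands of simulated examples.
for p in forward_model.parameters():
    p.requires_grad_(False)   # freeze the (nominally trained) weights

# Inverse design: adjust the particle, not the network, to match a desired spectrum.
target_spectrum = torch.rand(1, N_WAVELENGTHS)             # stand-in for the desired response
thicknesses = torch.rand(1, N_LAYERS, requires_grad=True)  # initial guess
optimizer = torch.optim.Adam([thicknesses], lr=1e-2)

for step in range(500):
    loss = nn.functional.mse_loss(forward_model(thicknesses), target_spectrum)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()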
The findings are being reported in the journal Science Advances, in a paper by MIT senior John Peurifoy, research affiliate Yichen Shen, graduate student Li Jing, professor of physics Marin Soljacic, and five others.
While the approach could ultimately lead to practical applications, Soljacic says, the work is primarily of scientific interest as a way of predicting the physical properties of a variety of nanoengineered materials without requiring the computationally intensive simulation processes that are typically used to tackle such problems.
Soljacic says that the goal was to look at neural networks, a field that has seen a lot of progress and generated excitement in recent years, to see "whether we can use some of those techniques in order to help us in our physics research. So basically, are computers 'intelligent' enough so that they can do some more intelligent tasks in helping us understand and work with some physical systems?"
To test the idea, they used a relatively simple physical system, Shen explains. "In order to understand which techniques are suitable and to understand the limits and how to best use them, we [used the neural network] on one particular system for nanophotonics, a system of spherically concentric nanoparticles." The nanoparticles are layered like an onion, but each layer is made of a different material and has a different thickness.
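[As a rough illustration of that parameterization -- the names, fields, and material values below are hypothetical, not taken from the paper -- each particle can be described as an ordered list of shells, each with a material property and a thickness:]

from dataclasses import dataclass
from typing import List

@dataclass
class Shell:
    refractive_index: float   # stands in for the layer's material
    thickness_nm: float       # radial thickness of the layer

@dataclass
class LayeredNanoparticle:
    shells: List[Shell]       # ordered from the innermost core outward

# Illustrative values only: alternating high- and low-index materials.
particle = LayeredNanoparticle(shells=[
    Shell(refractive_index=2.4, thickness_nm=40.0),
    Shell(refractive_index=1.5, thickness_nm=30.0),
    Shell(refractive_index=2.4, thickness_nm=25.0),
])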
The nanoparticles have sizes comparable to the wavelengths of visible light or smaller, and the way light of different colors scatters off these particles depends on the details of the layers and on the wavelength of the incoming beam. Calculating all these effects exactly for a many-layered nanoparticle is a computationally intensive task, and the complexity gets worse as the number of layers grows.
The researchers wanted to see if the neural network would be able to predict the way a new particle would scatter colors of light -- not just by interpolating between known examples, but by actually figuring out some underlying pattern that allows the neural network to extrapolate.
"The simulations are very exact, so when you compare these with experiments they all reproduce each other point by point," says Peurifoy, who will be an MIT doctoral student next year. "But they are numerically quite intensive, so it takes quite some time. What we want to see here is, if we show a bunch of examples of these particles, many many different particles, to a neural network, whether the neural network can develop 'intuition' for it."
Sure enough, the neural network was able to predict reasonably well the exact pattern of a graph of light scattering versus wavelength -- not perfectly, but very close, and in much less time. The neural network simulations "now are much faster than the exact simulations," Jing says. "So now you could use a neural network instead of a real simulation, and it would give you a fairly accurate prediction. But it came with a price, and the price was that we had to first train the neural network, and in order to do that we had to produce a large number of examples."
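[A rough sketch of that trade-off, under stated assumptions: the slow "exact simulation" is stood in for by a placeholder function, and the network and hyperparameters are illustrative rather than those used in the paper. The expensive step is generating the training set; once trained, evaluating the network is cheap.]

import torch
import torch.nn as nn

N_LAYERS, N_WAVELENGTHS = 8, 200   # hypothetical problem size

def exact_simulation(thicknesses: torch.Tensor) -> torch.Tensor:
    # Placeholder for the slow, exact scattering solver that produces training
    # labels; here it just returns a smooth synthetic "spectrum".
    wavelengths = torch.linspace(0.0, 1.0, N_WAVELENGTHS)
    return torch.sin(10.0 * thicknesses.sum(dim=-1, keepdim=True) * wavelengths)

model = nn.Sequential(
    nn.Linear(N_LAYERS, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_WAVELENGTHS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# The up-front price: thousands of (particle, spectrum) pairs from the exact solver.
train_x = torch.rand(5000, N_LAYERS)    # random shell thicknesses (normalized)
train_y = exact_simulation(train_x)     # the expensive step in practice

for epoch in range(20):
    prediction = model(train_x)
    loss = loss_fn(prediction, train_y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Afterward, model(new_thicknesses) approximates the exact spectrum in a fraction of the time.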
Source: https://www.sciencedaily.com/releases/2018/06/180601160447.htm
(Score: 1) by Sulla on Tuesday June 05 2018, @06:25PM
I always knew you could spread A1 on everything
Ceterum censeo Sinae esse delendam
(Score: 1, Informative) by Anonymous Coward on Tuesday June 05 2018, @09:35PM (1 child)
I am no expert on AI, just a programmer with old-school EE training, with no exposure to AI other than the primitive "expert systems" of the 80s. But it seems the term "AI" has become an all-encompassing term for omni-potentially-potent tech that can do anything and everything.
Instead of "AI", we should spell out specific technique employed, i.e., neural network, Google search engine (piled on big data), well, that's all I can think of but those of you in the field can list many other specialized approaches. I mean, the algorithms for these walking/jumping/climbing robots are different from face recognition algorithms. If you pile them altogether as "AI", might as well simply call them "magic".
(Score: 0) by Anonymous Coward on Saturday June 09 2018, @12:43PM
"The exciting new effort to make computers think... machines with minds, in the full and literal sense."
"[The automation of] activities that we associate with human thinking, activities such as decision-making, problem solving, learning..."
"The art of creating machines that perform functions that require intelligence when performed by people."
"The study of how to make computers do things at which, at the moment, people are better."
"The study of mental faculties through the use of computational models."
"The study of the computations that make it possible to perceive, reason, and act."
"Computational Intelligence is the study of the design of intelligent agents."
"AI... is concerned with intelligent behaviour in artifacts."
All lifted from AIMA page 2 if you want the citations.