Photonic circuits are a promising technology for neural networks because they enable energy-efficient computing units. For years, the Politecnico di Milano has been developing programmable photonic processors integrated on silicon microchips only a few mm² in size for data transmission and processing, and these devices are now being used to build photonic neural networks:
"An artificial neuron, like a biological neuron, must perform very simple mathematical operations, such as addition and multiplication, but in a neural network consisting of many densely interconnected neurons, the energy cost of these operations grows exponentially and quickly becomes prohibitive. Our chip incorporates a photonic accelerator that allows calculations to be carried out very quickly and efficiently, using a programmable grid of silicon interferometers. The calculation time is equal to the transit time of light in a chip a few millimeters in size, so we are talking about less than a billionth of a second (0.1 nanoseconds)," says Francesco Morichetti, Head of the Photonic Devices Lab of the Politecnico di Milano.
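The "programmable grid of silicon interferometers" in the quote can be pictured as a mesh of 2×2 Mach-Zehnder interferometers (MZIs), each applying a small unitary transformation to a pair of optical waveguides; composing them yields a matrix, and light propagating through the mesh performs the matrix-vector multiplication. A minimal numerical sketch (the phase conventions and mesh layout here are illustrative assumptions, not the actual chip design):

```python
import numpy as np

def mzi(theta, phi):
    """2x2 transfer matrix of a Mach-Zehnder interferometer:
    two 50:50 beam splitters around an internal phase shifter theta,
    plus an external phase shifter phi (one common convention)."""
    bs = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)   # 50:50 beam splitter
    inner = np.diag([np.exp(1j * theta), 1.0])        # internal phase shifter
    outer = np.diag([np.exp(1j * phi), 1.0])          # external phase shifter
    return bs @ inner @ bs @ outer

def mesh_unitary(n, params):
    """Place 2x2 MZIs on neighboring waveguide pairs (i, i+1)
    to build up an n x n unitary, as in a programmable mesh."""
    U = np.eye(n, dtype=complex)
    for (i, theta, phi) in params:
        T = np.eye(n, dtype=complex)
        T[i:i + 2, i:i + 2] = mzi(theta, phi)
        U = T @ U
    return U

rng = np.random.default_rng(0)
params = [(i % 3, rng.uniform(0, 2 * np.pi), rng.uniform(0, 2 * np.pi))
          for i in range(6)]
U = mesh_unitary(4, params)

x = np.array([1.0, 0.5, 0.0, 0.25], dtype=complex)  # input light amplitudes
y = U @ x                                            # "multiply" by propagating

print(np.allclose(U @ U.conj().T, np.eye(4)))        # unitary: energy-conserving
```

Because each MZI is lossless, the composed matrix is unitary, which is why the "computation" costs essentially nothing beyond the light's transit time: the multiplication happens as the field propagates.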
"The advantages of photonic neural networks have long been known, but one of the missing pieces needed to fully exploit their potential was network training. It is like having a powerful calculator but not knowing how to use it. In this study, we succeeded in implementing training strategies for photonic neurons similar to those used for conventional neural networks. The photonic 'brain' learns quickly and accurately and can achieve precision comparable to that of a conventional neural network, but faster and with considerable energy savings. These are all building blocks for artificial intelligence and quantum applications," adds Andrea Melloni, Director of Polifab, the Politecnico di Milano micro- and nanotechnology center.
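The training the quote refers to is gradient-based backpropagation; the cited paper implements it in situ, using light fields measured on the chip itself. As a reference point for what is being reproduced photonically, here is the conventional electronic version on a tiny two-layer network learning XOR (the architecture and hyperparameters are illustrative choices, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR truth table

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                       # forward pass
    p = sigmoid(h @ W2 + b2)
    loss = np.mean((p - y) ** 2)                   # mean squared error
    dp = 2 * (p - y) / len(X) * p * (1 - p)        # backward pass (chain rule)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    for P, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.5 * g                               # gradient-descent update

print(loss)
```

In the photonic setting, the analogue of the backward pass is obtained by sending light through the mesh in reverse and interfering it with the forward field, so the gradient is measured physically rather than computed digitally.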
Originally spotted on The Eponymous Pickle.
Journal Reference: Sunil Pai et al, Experimentally realized in situ backpropagation for deep learning in photonic neural networks, Science (2023). DOI: 10.1126/science.ade8450
Related: New Chip Can Process and Classify Nearly Two Billion Images Per Second
Related Stories
New Chip Can Process and Classify Nearly Two Billion Images per Second - Technology Org:
In traditional neural networks used for image recognition, the image of the target object is first formed on an image sensor, such as the digital camera in a smartphone. Then, the image sensor converts light into electrical signals, and ultimately into binary data, which can then be processed, analyzed, stored, and classified using computer chips. Speeding up these abilities is key to improving any number of applications, such as face recognition, automatically detecting text in photos, or helping self-driving cars recognize obstacles.
[...] The current speed limit of these technologies is set by the clock-based schedule of computation steps in a computer processor, where computations occur one after another on a linear schedule.
To address this limitation, [...] have removed the four main time-consuming culprits in the traditional computer chip: the conversion of optical to electrical signals, the need for converting the input data to binary format, a large memory module, and clock-based computations.
They have achieved this through direct processing of light received from the object of interest using an optical deep neural network implemented on a 9.3 square millimeter chip.
[...] "Our chip processes information through what we call 'computation-by-propagation,' meaning that, unlike clock-based systems, computations occur as light propagates through the chip," says Aflatouni. "We are also skipping the step of converting optical signals to electrical signals because our chip can read and process optical signals directly, and both of these changes make our chip a significantly faster technology."
(Score: 2) by cosurgi on Friday May 05, @04:17PM
I suspect that there is a big problem: when passing light through the network to obtain the AI's answer, it's as if the AI is being asked something for the first time. It won't remember the previous conversation or even the previous sentence. It will always be the "first reaction" to something.
(Score: 0) by Anonymous Coward on Friday May 05, @05:10PM (3 children)
But do biological neurons only[1] perform very simple math ops?
White blood cells already seem to exhibit fairly complicated[2] behaviors, so how can we be so sure that cells specialized for thinking would only do simple math ops? Neurons also have fairly high power consumption compared to most other cells. They probably do simple ops most of the time, but so do most humans, especially when there's no need to deviate from their routine... I'm pretty sure brain and neuron power consumption is different when you're learning to drive than when you're driving as a very experienced driver and nothing out of the ordinary is happening.
[1] I know the sentence itself doesn't claim "only" but the overall premise and assumption seem to.
[2] https://scitechdaily.com/squishy-white-blood-cells-quickly-morph-to-become-highly-stiff-and-viscous-in-response-to-threats/ [scitechdaily.com]
https://nihrecord.nih.gov/2022/05/27/insight-neutrophil-behavior-yields-clues-about-inflammation [nih.gov]
https://youtu.be/V61n8a6dpOo [youtu.be]
(Score: 2) by maxwell demon on Friday May 05, @06:45PM (1 child)
I guess that depends on the scale you are looking at.
In the short term, to my understanding (which may be wrong, since I'm not an expert in that field), it is indeed just quite simple operations. Basically, signals are transported (rather slowly, as the transport is not like current in a wire but a very active process, which probably explains the high energy consumption, especially since typically a certain amount of signal is also sent in the "neutral" state) and translated at the synapses into chemical signals, which are in turn collected by the next neuron. I guess that can be modelled by relatively simple mathematical formulas.
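The "relatively simple mathematical formulas" mentioned above are essentially what artificial neural networks abstract: synaptic inputs scaled by weights, summed at the cell body, with the neuron firing if the total drive crosses a threshold. A toy point-neuron sketch (the weights and threshold are arbitrary illustrative values):

```python
import numpy as np

def point_neuron_fires(inputs, weights, threshold=1.0):
    """Crude point-neuron abstraction: synaptic inputs are scaled by
    their weights (positive = excitatory, negative = inhibitory),
    summed at the soma, and the neuron fires if the total drive
    reaches a threshold."""
    drive = float(np.dot(inputs, weights))
    return drive >= threshold

# three synapses: two excitatory, one inhibitory
print(point_neuron_fires([1, 1, 1], [0.6, 0.7, -0.2]))  # True  (drive 1.1)
print(point_neuron_fires([1, 0, 1], [0.6, 0.7, -0.2]))  # False (drive 0.4)
```

The long-term processes the comment goes on to describe (growing and pruning synapses) would correspond to changing the weight vector itself, which is exactly the part this simple model leaves out.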
In the long term, the brain grows new synapses or removes existing ones; I strongly doubt that this is a mathematically simple process (I have no idea how the brain decides where to grow a new synapse, though).
The Tao of math: The numbers you can count are not the real numbers.
(Score: 0) by Anonymous Coward on Saturday May 06, @02:20AM
Yeah, I'm pretty sure that bit ain't simple math either...
So similarly, in most cases the neuron is going: nothing new, if A happens, I do B. It's like using artificial neural nets with already-trained weights.
Yes it seems to behave more intelligently after the "correct" weights are assigned but the real intelligence is in how the weights and connections got assigned in the first place.
But I guess for AI applications the usage of trained networks can still consume quite a lot of power? So this photonic stuff is supposed to help.
I'm pretty impressed by crow brains - small brains and yet relatively very intelligent.
(Score: 3, Informative) by HiThere on Friday May 05, @08:56PM
No. Neural nets are based on an abstracted model of what the biological nets are doing. They certainly aren't complete. The question is, "Did the abstraction capture all the relevant facts?", and the answer to that probably depends on what you are trying to do. E.g. it certainly didn't capture the way endocrine changes alter the signaling, but for many purposes that may not be important.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.