from the we've-come-a-long-way-since-the-box-camera dept.
This camera captures 156.3 trillion frames per second:
Scientists have created a blazing-fast scientific camera that shoots images at an encoding rate of 156.3 terahertz (THz) to individual pixels — equivalent to 156.3 trillion frames per second. Dubbed SCARF (swept-coded aperture real-time femtophotography), the research-grade camera could lead to breakthroughs in fields studying micro-events that come and go too quickly for today's most expensive scientific sensors.
SCARF has successfully captured ultrafast events like absorption in a semiconductor and the demagnetization of a metal alloy. The research could open new frontiers in areas as diverse as shock wave mechanics and the development of more effective medicines.
Leading the research team was Professor Jinyang Liang of Canada's Institut national de la recherche scientifique (INRS). He's a globally recognized pioneer in ultrafast photography who built on his breakthroughs from a separate study six years ago. The current research was published in Nature, summarized in a press release from INRS and first reported on by Science Daily.
Professor Liang and company tailored their research as a fresh take on ultrafast cameras. Typically, these systems use a sequential approach: capture frames one at a time and piece them together to observe the objects in motion. But that approach has limitations. "For example, phenomena such as femtosecond laser ablation, shock-wave interaction with living cells, and optical chaos cannot be studied this way," Liang said.
The new camera builds on Liang's previous research to upend traditional ultrafast camera logic. "SCARF overcomes these challenges," INRS communication officer Julie Robert wrote in a statement. "Its imaging modality enables ultrafast sweeping of a static coded aperture while not shearing the ultrafast phenomenon. This provides full-sequence encoding rates of up to 156.3 THz to individual pixels on a camera with a charge-coupled device (CCD). These results can be obtained in a single shot at tunable frame rates and spatial scales in both reflection and transmission modes."
In extremely simplified terms, that means the camera uses a computational imaging modality to capture spatial information by letting light enter its sensor at slightly different times. Not having to process the spatial data at the moment of capture is part of what frees the camera to record those extremely quick "chirped" laser pulses at up to 156.3 trillion times per second. The raw image data can then be processed by a computer algorithm that decodes the time-staggered inputs, transforming each of the trillions of frames into a complete picture.
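The decode-after-capture idea above can be illustrated with a toy example. This is not SCARF's actual single-shot swept-aperture method; it is a simplified multi-exposure analogue with made-up dimensions, showing how known aperture codes let an algorithm recover time-staggered frames from time-integrated measurements:

```python
import numpy as np

rng = np.random.default_rng(0)

T = 8    # time frames to recover
P = 16   # pixels per frame
M = 12   # coded exposures (M >= T makes the system solvable)

# Ground-truth scene: every pixel evolves over T time steps
scene = rng.random((T, P))

# Known aperture codes: one weighting per exposure and time step
codes = rng.random((M, T))

# Each recorded exposure is a time-integrated, code-weighted sum:
#   y[m, p] = sum_t codes[m, t] * scene[t, p]
exposures = codes @ scene

# Decoding: per-pixel least squares against the known codes
recovered, *_ = np.linalg.lstsq(codes, exposures, rcond=None)

print("max reconstruction error:", np.abs(recovered - scene).max())
```

The sensor never stores the individual frames; it stores coded mixtures, and the computer untangles them afterward because the codes are known.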
Remarkably, it did so "using off-the-shelf and passive optical components," as the paper describes. The team describes SCARF as low-cost with low power consumption and high measurement quality compared to existing techniques.
Although SCARF is focused more on research than consumers, the team is already working with two companies, Axis Photonique and Few-Cycle, to develop commercial versions, presumably for peers at other institutions of higher learning and scientific research.
For a more technical explanation of the camera and its potential applications, you can view the full paper in Nature.
(Score: 4, Funny) by Anonymous Coward on Saturday March 30 2024, @03:46AM (1 child)
Sure, but playback is a bitch!
(Score: 4, Interesting) by nostyle on Saturday March 30 2024, @11:17PM
So I have only the slightest experience with high-speed photography, but I was once tasked with capturing the image of a projectile passing by at about 6200 m/s. Given a 10 centimeter field of view, the window to snap the photo was about 16 microseconds. We were using a sub-microsecond x-ray source to back-light the object, and the trick was to trigger it at the correct time by predicting the expected behavior of the launcher apparatus. Out of a dozen tries, we only succeeded once.
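The 16-microsecond figure follows directly from the numbers given:

```python
# Sanity check of the timing window described above
projectile_speed = 6200.0   # m/s
field_of_view = 0.10        # m (10 centimeters)

window = field_of_view / projectile_speed  # seconds the projectile is in view
print(f"window: {window * 1e6:.1f} microseconds")  # ~16.1 us
```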
Additionally, we deployed a high-speed motion-picture film camera to try to capture video of the passing projectile. It required a pre-trigger of a few seconds to get the film up to speed (and the first half of the roll was wasted in the process). Again, dozens of things had to go just right for it to work - and ultimately it never did. I did, however, get to spend hours carefully inspecting blank film.
Now, if I had had the option of taking - say - 100,000 frames per second of digital photos, success would have been much easier, but only two frames of the 100,000 would have been interesting, and finding those two frames would be a non-trivial task without some AI to help out.
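For what it's worth, in a digital burst like that, the two interesting frames could in principle be flagged automatically with something as simple as a per-frame energy threshold rather than AI. A rough sketch with simulated data (frame counts, sizes, and signal levels are all made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated burst: 100,000 tiny frames of sensor noise,
# with the projectile appearing in just two of them
n_frames, h, w = 100_000, 8, 8
frames = rng.normal(0.0, 0.01, size=(n_frames, h, w))
frames[50_000, 2:5, :] += 1.0   # hypothetical frames of interest
frames[50_001, 2:5, :] += 1.0

# Flag frames whose total energy stands far above the noise floor
energy = np.abs(frames).sum(axis=(1, 2))
hits = np.where(energy > energy.mean() + 10 * energy.std())[0]
print(hits)  # finds the two injected frames
```

A real shot would be messier (varying illumination, debris), but the point stands: the hard part is triggering the capture, not sifting the result.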
All of which is to say that imaging a short-duration event with a high-speed camera is an intrinsically hard problem, and getting something you can play back successfully can be a bitch.
Finally, to comment specifically on TFA, it sounds like they are arranging to record a sequence of somethings which are not exactly images, but that can be post-processed into a sequence of images - so here again, playback will be more expensive than the recording. I do not see any information about how many such high-speed frames can be captured in one go, nor the storage requirements for them. I daresay that only PhD students will ever have occasion to play with this contraption.
--
-Jim Croce, Time in a Bottle
(Score: 2) by Rosco P. Coltrane on Saturday March 30 2024, @08:51PM
And a big one too.