Researchers from the Institute for Quantum Optics and Quantum Information (IQOQI), the Vienna Center for Quantum Science and Technology (VCQ), and the University of Vienna
have developed a new quantum imaging technique in which an image is obtained without ever detecting the light that illuminated the imaged object.
Their image of a cat was generated with photons that never touched the object: the technique uses entangled photon pairs and discards the photons that actually interacted with the stencil. The researchers are confident that their new imaging concept is very versatile and could find applications where low-light imaging is crucial, in fields such as biological or medical imaging.
(Score: 2, Informative) by Anonymous Coward on Friday December 19 2014, @11:27AM
The scientific article can be found on arXiv. [arxiv.org] Frankly, the popular science article linked in the summary left even me, someone working in the field, puzzled; the scientific article of course cleared up that confusion. Maybe someone will find the time to write a better layman-readable summary of it.
(Score: 4, Interesting) by pkrasimirov on Friday December 19 2014, @11:38AM
> This alludes to the famous Schrödinger cat paradox, in which a cat inside a closed box is said to be simultaneously dead and alive as long there is no information outside the box to rule out one option over the other. [...] The object (e.g. the contour of a cat) is illuminated with light that remains undetected. Moreover, the light that forms an image of the cat on the camera never interacts with it.
I don't pretend to understand quantum physics, but hasn't that light already defined the cat's state, no matter whether it went to the camera or into the environment around the camera? Isn't the cat in an informational superposition (relative to the observer) only if its entire info-universe is separated from the observer's info-universe? Maybe I got it all wrong in the first place. But to me this entangled-particle observation sounds like they can now observe Medusa, not Schrödinger's cat.
(Score: 0) by Anonymous Coward on Friday December 19 2014, @01:18PM
Well, their cat image is purely classical (no cat superposition), it is only the light whose quantum properties are used.
(Score: 2) by dlb on Friday December 19 2014, @02:15PM
but hasn't that light already defined the cat's state, no matter whether it went to the camera or into the environment around the camera?
You've hit the nail on the head here. I don't understand quantum mechanics either, but isn't one of its tenets that you cannot get information about the state of an object without changing that object's state in the process? Something doesn't sound right in the summary. (Disclaimer: not only do I not understand Q.M., but I didn't read the article. With apologies....)
(Score: 3, Informative) by strattitarius on Friday December 19 2014, @07:39PM
http://www.nature.com/nature/journal/v512/n7515/full/nature13586.html [nature.com]
Slashdot Beta Sucks. Soylent Alpha Rules. News at 11.
(Score: 2) by The Mighty Buzzard on Friday December 19 2014, @11:57AM
Seems to me that it's not really so much an image of the object as a silhouette of it. Granted, that's still pretty cool to be able to do mechanically, but it's going to be 2D and monochrome, as all silhouettes are, at least with this current setup. I can see doing the same thing by rotating either the object or the camera to produce 3D silhouettes pretty easily, though, and that sounds a lot more useful.
The part I'm missing is why you would want to specifically exclude the light that would make something visible unless you're trying to image something in a quantum superposition state. What's the actual up side of this technique, aside from being scientifically nifty?
My rights don't end where your fear begins.
(Score: 0) by Anonymous Coward on Friday December 19 2014, @12:49PM
I don't have time to read the article, so take this with a grain of "what the hell are you talking about".
They're saying the image is created not by the photons interacting with the object, but by photons entangled with those photons. As a matter of principle, we could for instance entangle a bunch of photons, send some of them over to the Moon to interact with some small teeny object, and use the photons that we kept on Earth to generate a picture of the small teeny object.
I'll leave it to the person who read the arXiv paper to correct me if this is nonsense.
(Score: 0) by Anonymous Coward on Friday December 19 2014, @01:35PM
You could of course send your photons to the Moon, but you'd then have to have the photons return to Earth, so that you can then, on Earth, erase the information on whether the photons actually went to the Moon. Without erasing that information, you'll not get the image. So no remote sensing without return of the light, sorry.
(Score: 0) by Anonymous Coward on Friday December 19 2014, @01:42PM
oh, damn. there's always a catch.
(Score: 0) by Anonymous Coward on Friday December 19 2014, @01:25PM
It is indeed a look-through image they used (a sort of slide). Note that look-through is also what you get, for example, in a light microscope.
The point is that often the light frequency you'd use to see the features is not the same light frequency you'd have a good camera for. So you illuminate the object with one light frequency, but have the camera see the image in another light frequency. The setup effectively transfers the image from one light frequency (infrared in the experiment) to another frequency (red in the experiment).
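The interference math behind this frequency transfer can be sketched with a purely classical toy model (all names and numbers here are hypothetical, and this sketch does not capture the entanglement itself): each camera pixel sees an interference signal whose visibility is set by the object's transmission T at the corresponding point in the *other*, undetected beam, roughly I(phi) = 1 + T*cos(phi).

```python
import numpy as np

# Hypothetical object: a transparent disc (stand-in for the cat stencil)
# on an opaque background, sampled on a 64x64 grid.
n = 64
y, x = np.mgrid[:n, :n]
T = ((x - n / 2) ** 2 + (y - n / 2) ** 2 < (n / 4) ** 2).astype(float)

# Record the camera intensity at two interferometer phase settings.
I_0 = 1 + T * np.cos(0.0)      # constructive: bright where T = 1
I_pi = 1 + T * np.cos(np.pi)   # destructive: dark where T = 1

# Subtracting the two frames recovers the object's transmission,
# even though the light reaching the camera never met the object.
image = (I_0 - I_pi) / 2
print(np.allclose(image, T))   # prints True
```

Again, this is only the classical bookkeeping of the interference; the actual experiment needs the entangled pair source so that the infrared beam carries the object information into the visibility of the red beam.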
(Score: 4, Funny) by wonkey_monkey on Friday December 19 2014, @04:37PM
This is clearly a brilliant piece of science.
I have absolutely no idea what's going on.
systemd is Roko's Basilisk