Submission Preview


The Smallest Large Display Is Projected Straight Onto Your Retina

Accepted submission by exec at 2020-04-15 20:44:52
News

Story automatically generated by StoryBot Version 0.2.2 rel Testing.
Storybot ('Arthur T Knackerbracket') has been converted to Python3

Note: This is the complete story and will need further editing. It may also be covered
by Copyright and thus should be acknowledged and quoted rather than printed in its entirety.

FeedSource: [hackaday]

Time: 2020-04-15 14:09:58 UTC

Original URL: https://hackaday.com/2020/04/15/the-smallest-large-display-is-projected-straight-onto-your-retina/ [hackaday.com] using UTF-8 encoding.

Title: The Smallest Large Display Is Projected Straight Onto Your Retina

--- --- --- --- --- --- --- Entire Story Below --- --- --- --- --- --- ---

The Smallest Large Display Is Projected Straight Onto Your Retina

Arthur T Knackerbracket has found the following story [hackaday.com]:

For most of human history, the way to get custom shapes and colors onto one’s retinas was to draw them on a cave wall, or a piece of parchment, or on paper. Later on, we invented electronic displays and used them for everything from televisions to computers, even toying with displays that gave the illusion of a 3D shape existing in front of us. Yet what if one could skip this surface entirely and draw directly onto our retinas?

Admittedly, the thought of aiming lasers directly at the layer of cells at the back of our eyeballs — the delicate organs which allow us to see — likely does not provoke the same response as the thought of sitting in front of a 4K, 27″ gaming display to look at the same content. Yet effectively we’d have the same photons painting the same image on our retinas. And what if it could be an 8K display, cinema-sized? Or maybe a HUD overlay instead, like in video games?

In many ways, this concept of virtual retinal displays [wikipedia.org], as they are called, is almost too much like science fiction, and yet it has been the subject of decades of research, with increasingly sophisticated technologies bringing it closer to an everyday reality. Will we be ditching our displays and TVs for this technology any time soon?

The basic function of the eye is to use its optics to keep the image of what is being looked at in focus. For this it uses a ring of smooth muscle called the ciliary muscle [wikipedia.org] to change the shape of the lens, allowing the eye to change its focal distance, with the iris controlling the amount of light that enters the eye. This enables the eye to focus the incoming image onto the retina [wikipedia.org] so that the area with the most photoreceptors (the fovea centralis [wikipedia.org]) is used for the most important thing in the scene (the focus), with the rest of the retina used for our peripheral vision.
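
To put rough numbers on that focusing act, here is a minimal sketch using the simplified thin-lens model; the ~17 mm lens-to-retina distance and 25 cm reading distance are textbook assumptions, not measurements:

```python
# Thin-lens sketch of accommodation. The ~17 mm lens-to-retina distance is
# a common reduced-eye textbook assumption, not an exact anatomical figure.

def lens_power_diopters(object_distance_m, image_distance_m=0.017):
    """Thin-lens equation: P = 1/f = 1/d_object + 1/d_image."""
    return 1 / object_distance_m + 1 / image_distance_m

far = lens_power_diopters(object_distance_m=1e9)    # object effectively at infinity
near = lens_power_diopters(object_distance_m=0.25)  # object at reading distance

print(f"relaxed eye: ~{far:.1f} D")   # ~58.8 D
print(f"near focus:  ~{near:.1f} D")  # ~62.8 D; the ciliary muscle adds ~4 D
```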

The simple question when it comes to projecting an image onto the retina thus becomes: how to do this in a way that plays nicely with the existing optics and focusing algorithms of the eye?

In the naive and simplified model of virtual retinal display technology, three lasers (red, green and blue, for a full-color image) scan across the retina to allow the subject to perceive an image as if its photons came from a real-life object. As we noted in the previous section, however, this is not what we’re working with in reality. We cannot directly scan across the retina, as the eye’s lens will refract the light, a refraction that changes as the eye adjusts its focal length.
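
As a minimal sketch of that naive model, one could map each pixel of a frame to a pair of scanner deflection angles plus laser intensities. The field of view, resolution and raster order below are invented purely for illustration, and as just noted, a real VRD would still have to contend with the eye’s own optics:

```python
import numpy as np

# Illustrative raster scan for the naive three-laser model: map each pixel
# to two mirror deflection angles. All numbers here are made up for the example.
H_FOV_DEG, V_FOV_DEG = 10.0, 6.0   # assumed scan field of view
COLS, ROWS = 320, 240              # assumed frame resolution

def mirror_angles(col, row):
    """Return (horizontal, vertical) deflection in degrees for one pixel."""
    h = (col / (COLS - 1) - 0.5) * H_FOV_DEG
    v = (row / (ROWS - 1) - 0.5) * V_FOV_DEG
    return h, v

def scan_frame(frame_rgb):
    """Walk the frame in raster order, yielding angles plus RGB laser levels."""
    for row in range(ROWS):
        for col in range(COLS):
            yield (*mirror_angles(col, row), *frame_rgb[row, col])

frame = np.zeros((ROWS, COLS, 3))
frame[120, 160] = (0.0, 1.0, 0.0)  # a single green pixel mid-frame
h, v, r, g, b = next(s for s in scan_frame(frame) if s[3] > 0)
print(f"fire green laser at ({h:+.2f}°, {v:+.2f}°)")
```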

Hitting this part of the retina requires that the subject either consciously focus on the projected image in order to perceive it clearly, or that the system adjust for the focal distance of the eye at any given time. After all, to the eye all photons are assumed to come from a real-life object, with a specific location and distance. Any issues with this process can result in eyestrain, headaches and worse, as we have seen with tangentially related technologies such as 3D movies in cinemas as well as virtual reality systems.

Most people are probably aware of head-mounted displays, also called ‘smart glasses’. What these do is create a similar effect to what can be accomplished with virtual retinal display technology, in that they display images in front of the subject’s eyes. This is used for applications like augmented (mixed) reality [wikipedia.org], where information and imagery can be superimposed on a scene.

Google made a bit of a splash a few years back with their Google Glass [wikipedia.org] smart glasses, which use special, half-silvered mirrors [wikipedia.org] to guide the projected image into the subject’s eyes. Like the later Enterprise versions of Google Glass, Microsoft’s HoloLens [wikipedia.org] targets the professional and education markets, using combiner lenses to project the image on the tinted visor, similarly to how head-up displays (HUDs [wikipedia.org]) in airplanes work.

Magic Leap’s Magic Leap One [ifixit.com] uses waveguides [wikipedia.org] that allow an image to be displayed in front of the eye [virtualrealitypop.com], on different focal planes, akin to the technology used in third-generation HUDs. Compared to the more futuristic-looking HoloLens, these look more like welding goggles. Both the HoloLens and the Magic Leap One are capable of full AR, whereas the Google Glass lends itself more to use as a basic HUD [theverge.com].

Although smart glasses have their uses, they’re definitely not very stealthy, nor are most of them suitable for outdoor use, especially in bright sunlight and hot summer weather. It would be great if one could skip the cumbersome head strap and goggles or visor. This is where virtual retinal displays (VRDs) come into play.

Naturally, the very first question that may come to one’s mind when hearing about VRDs is why it’s suddenly okay to shine not one but three lasers into your eyes. After all, we have been told to never, not even once, point even the equivalent of a low-powered laser pointer at a person, let alone straight at their eyes. Some may remember the 2014 incident at the Burning Man festival [burningman.org] where festival-goers practically destroyed the sight of a staff member with handheld lasers.

The answer to these concerns is that very low-powered lasers are used [researchgate.net]. Enough to draw the images, not enough to do more than cause the usual wear and tear from using one’s eyes to perceive the world around us. As the light is projected straight onto the retina, there is no image that can become washed out in bright sunlight. Companies like Bosch have prototypes [ieee.org] of VRD glasses; Bosch recently showed off their BML500P [bosch-sensortec.com] Bosch Smartglasses Light Drive solution, claiming an optical output power of <15 µW.
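
For a sense of scale, that claimed figure can be compared against the commonly cited Class 1 accessible-emission limit of roughly 0.39 mW for continuous visible lasers under IEC 60825-1 — a number taken here from general laser-safety references, not from Bosch’s documentation:

```python
# Back-of-the-envelope safety margin for the claimed <15 µW output.
# The ~0.39 mW Class 1 limit for CW visible lasers is the commonly cited
# IEC 60825-1 figure, not a number taken from Bosch's datasheet.
CLASS_1_LIMIT_W = 0.39e-3
BML500P_CLAIM_W = 15e-6

margin = CLASS_1_LIMIT_W / BML500P_CLAIM_W
print(f"claimed output sits ~{margin:.0f}x below the Class 1 limit")  # ~26x
```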

Bosch’s solution uses RGB lasers with a MEMS mirror [wikipedia.org] to direct the light into the subject’s pupil, and onto the retina. However, one big disadvantage of such a VRD solution is that it cannot just be picked up and used like the previously mentioned smart glasses. As discussed earlier, VRDs need to precisely target the fovea, meaning that a VRD has to be adjusted to each individual user, or else one will simply see nothing as the laser misses the target.

Much like the Google Glass solution, Bosch’s BML500P is mostly useful for HUD purposes, but over time this solution could be scaled up, with a higher resolution than the BML500P’s 150 line pairs, and into a stereo version.
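
Some back-of-the-envelope arithmetic shows why scaling up a scanned-laser design is nontrivial: the horizontal mirror rate grows linearly with line count. Only the 150 line pairs come from Bosch’s spec; the 60 Hz refresh rate and bidirectional painting below are assumptions for the sake of the arithmetic:

```python
# Rough MEMS scan-rate arithmetic. Only the 150 line pairs (300 lines) come
# from the BML500P spec; refresh rate and the 4K target are assumptions.
def h_scan_rate_hz(lines, refresh_hz=60, bidirectional=True):
    """Horizontal mirror rate needed to draw `lines` scanlines per frame."""
    rate = lines * refresh_hz
    return rate / 2 if bidirectional else rate  # paint on both sweep directions

print(f"300 lines (150 pairs): {h_scan_rate_hz(300):>8,.0f} Hz")   #  9,000 Hz
print(f"2,160 lines (4K-ish):  {h_scan_rate_hz(2160):>8,.0f} Hz")  # 64,800 Hz
```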

The cost of entry into the AR and smart glasses market at this point is still very steep. While Google Glass Enterprise 2 will set you back a measly $999 or so, HoloLens 2 costs $3,500 (and up), leading some to improvise their own solution [eclecti.cc] using beam splitters dug out of a bargain bin at a local optics shop. Here too, the warning about potentially damaging one’s eyes cannot be overstated. Sending the full brightness of a small (pico)projector essentially straight into one’s eye can cause permanent damage and blindness.

There are also AR approaches that focus on specific applications, such as tabletop gaming with Tilt Five’s solution [hackaday.com]. Taken together, it appears that AR — whether using the beam splitter, projection or VRD approach — is still in a nascent phase. Much like virtual reality (VR) a few years ago, it will take more research and development to come up with something that checks all the boxes of being affordable, robust and reliable.

That said, there definitely is a lot of potential here and I, for one, am looking forward to seeing what comes out of this over the coming years.

I really hate it anytime some display maker uses phrases like “painting an image directly on the retina”. They all boil down to a display emitting light and that light being focused by the lens of the eye, just like every other display.

It’s just not possible to create a virtual image from a point light source. You can’t scan a laser across the retina unless you physically move the location of the laser. The Bosch system sounds like a DLP display. It sounds very much like the Avegant Glyph, a display touted with much fanfare and “retinal imaging”, sending the image “directly onto the retina”. Which of course was just a laser shining on a DLP. Which ends up being very small and which you have to look at from a very narrow angle or see nothing.

I really hate it when a commenter spouts nonsense without doing even the most basic reading on what they purport to be an expert on.

The Bosch system doesn’t need to move the laser, and doesn’t need anything like a DLP: It scans the collimated beam across a holographic reflector element embedded in the eyeglass lens. That reflector functions like a curved mirror, presenting to the eye a virtual image located some comfortable distance in front of the lens.

It’s still a HUD-like display, and it still relies on the device’s optical output being at least comparable to the ambient light in order to be visible.
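
For the curious, that curved-mirror behavior drops straight out of the ordinary mirror equation. A quick sketch, with a focal length and source distance invented purely for illustration (Bosch doesn’t publish these numbers):

```python
# Mirror-equation sketch of a reflector presenting a virtual image.
# Focal length and source distance are invented; not Bosch's numbers.
def image_distance_mm(f_mm, object_mm):
    """Solve 1/f = 1/d_o + 1/d_i for d_i; negative d_i means a virtual image."""
    return 1 / (1 / f_mm - 1 / object_mm)

d_i = image_distance_mm(f_mm=50.0, object_mm=45.0)
print(f"d_i = {d_i:.0f} mm")  # -450 mm: virtual image ~45 cm 'out front'
```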

A mirror array like a DLP, or galvanometer pairs (I’m sure there are other options too), does effectively move the light source, so you can use them to scan a laser point across whatever you want to (which could be directly onto a retina). So it can very accurately be called painting an image on the retina – it’s not a full image/object reflection being brought into focus by the eye. And in this case, yes, it still passes through the lens of the eye and this must be taken into account, but you could theoretically continue to ‘paint’ in-focus images no matter what the eye is focused on. So much as you hate it, this is pretty accurate marketing speak – better than that applied to many other products (at least when applied to projections straight into the eye – if you are seeing it projected onto a screen or on normal displays, then I agree with you).

As for the limitations of such technology currently, I’d describe them as significant. But meaningful leaps could well be not that far into the future.

I’ll stick with half-silvered lenses and traditional screens myself, at least for the foreseeable future. That is a safe, reasonably priced and ‘easy’ route to AR/HUD-type things, and having a fixed focal length to the projection isn’t the end of the world.

No, a DLP does not do any sort of “move the light source”. It just gates individual pixels on or off. The DLP array is a 1:1 map of the output pixels (except for the exotic Fourier-based configurations).

Galvanometers, yes, they steer the beam and, yes, can correctly be said to be “painting the retina”.

And no, you can’t “continue to ‘paint’ in focus images no matter what the eye is focused on.” The lens of the eye is still in the optical path and still does the focusing on the retina. So the manufacturers of such systems must take into account myopia, presbyopia, etc., and even astigmatism (which AFAIK none do, not even Focals by North, but I’d be delighted to learn otherwise, because then I’d be lining up for them).
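
To put numbers on that: a wearer’s prescription fixes how far away a fixed-focus virtual image can sit and still look sharp. A small sketch, where the -2.0 D prescription is an arbitrary example rather than a figure from any product:

```python
# A wearer's refractive error sets where a fixed-focus virtual image can
# appear sharp. The -2.0 D prescription is an arbitrary example.
def far_point_m(prescription_diopters):
    """Farthest distance a myopic eye can focus, from its prescription."""
    if prescription_diopters >= 0:
        return float("inf")
    return -1 / prescription_diopters

print(f"-2.0 D myope's far point: {far_point_m(-2.0):.2f} m")  # 0.50 m
# A virtual image placed at 2 m (vergence -0.5 D) stays blurry for this
# wearer unless the display optics supply the missing -1.5 D of correction.
```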

“Will we be ditching our displays and TVs for this technology any time soon?”

Skip step two, and go straight for the optic nerve. The ultimate V/A/R.

I don’t know if injecting into the optic nerve would be the right place, even if that is technically feasible compared to more convenient locations. As [Maya] mentions, there’s a heck of a lot of preprocessing that goes on right in the Mk. 1 eyeball retina before getting data fired down that fat pipe to the back of the head. The visual cortex there expects to receive that preprocessed, compressed and formatted data — a bespoke format it learned over years when it was still plastic. Any synthetic visual information injected in that path would need to replicate that weird, organic preprocessing that’s likely unique to each Mk. 1 eyeball.


-- submitted from IRC


Original Submission