
posted by hubie on Tuesday May 14, @08:55AM   Printer-friendly
from the Optics-Count dept.

We all know the problem with the current crop of AR "glasses": bulky devices that have you discovering all kinds of new and hitherto unknown neck muscles the longer you wear them.

But ho-ho-ho. A team at Stanford University, Hong Kong University and NVIDIA has now worked something out, leaning massively on machine learning and sci-fi stuff called -- how unoriginal -- optical metasurfaces. Metasurfaces, in case you didn't know, are components engineered to bend light in unusual ways. Research on metasurfaces and other metamaterials has led, among other discoveries, to invisibility cloaks that can hide objects from light, sound, heat, and other types of waves. Whoa.

In the boffins' own words, they created

a unique combination of inverse-designed full-colour metasurface gratings, a compact dispersion-compensating waveguide geometry and artificial-intelligence-driven holography algorithms. These elements are co-designed to eliminate the need for bulky collimation optics between the spatial light modulator and the waveguide and to present vibrant, full-colour, 3D AR content in a compact device form factor. To deliver unprecedented visual quality with our prototype, we develop an innovative image formation model that combines a physically accurate waveguide model with learned components that are automatically calibrated using camera feedback. Our unique co-design of a nanophotonic metasurface waveguide and artificial-intelligence-driven holographic algorithms represents a significant advancement in creating visually compelling 3D AR experiences in a compact wearable device.

Again: whoa -- them scientists created AR glasses as thin as, well, glasses.

There are still a few problems, though -- the usable field of view, for example, is rather limited -- but I'll leave it to the engineering types around here to pick the team's Nature article apart. For everyone else, there's a slightly more readable IEEE Spectrum article to enjoy.

Me, I'm going to brush up on Doom III.

Original Submission

  • (Score: 5, Insightful) by looorg on Tuesday May 14, @10:21AM (2 children)

    by looorg (578) on Tuesday May 14, @10:21AM (#1356919)

    As I normally wear glasses, I guess this could be a good thing. I just can't really see what I would want included on the displays, or how some visual augmentation(s) would improve my reality without becoming a distraction and eventually an annoyance. Perhaps some kind of picture-in-picture would be interesting at times, but it would take some getting used to.

    It's already annoying when people keep fidgeting with their phones like addicts. Having people stare into the void of nothingness is probably equally disturbing. Or people getting so distracted by their AR that they just walk into other people, objects, or traffic.

    • (Score: 2) by darkfeline on Wednesday May 15, @09:00AM

      by darkfeline (1030) on Wednesday May 15, @09:00AM (#1357013) Homepage

      If the AI demoed at Google I/O comes to fruition, one universally useful function would be simply warning you of any danger you may not have noticed, e.g., that the driver of the oncoming car is distracted. Passive 99% of the time, life-saving during that single moment in an entire lifespan when your guardian angel is taking a break.

      Join the SDF Public Access UNIX System today!
    • (Score: 2) by quietus on Wednesday May 15, @03:47PM

      by quietus (6328) on Wednesday May 15, @03:47PM (#1357052) Journal

      Don't underestimate the power of gaming and porn to lure users.

      For professional purposes though -- looorg reminded me with his FLIR comment -- I can think of at least one application: fire services, with this AR system built into the visor of their helmets. Really useful to show team positions on a floor plan, rooms checked, escape paths and critical conditions (e.g. sensors detecting that a fire is raging, unseen, in the ceiling overhead).

      Also, there's one company which survived from the VRML days at the end of the '90s: ParallelGraphics, now renamed Cortona3D (the original name of their viewer). They specialize in 3D construction instructions/documentation. That could also be an application (sure could come in handy for some IKEA constructions, I read).

  • (Score: 3, Informative) by stormreaver on Tuesday May 14, @12:41PM (1 child)

    by stormreaver (5101) on Tuesday May 14, @12:41PM (#1356922)

    This is one small step in the right direction for AR, but it's ultimately a non-starter for most people without a tiny power supply of HUGE energy density. Very few people will wear the current iteration (it's still too bulky and ugly to walk around with), and even fewer will walk around with a large wired power supply strapped to their body. This could be somewhat useful for controlled environments where looks don't matter, though.

    • (Score: 3, Interesting) by looorg on Tuesday May 14, @01:27PM

      by looorg (578) on Tuesday May 14, @01:27PM (#1356926)

      Having looked through both links, it's really hard to tell. I can't find any mention of the power consumption of these AR glasses/goggles. Since it's currently, I gather, in a sort of lab setting, perhaps it doesn't matter. But as noted, I doubt anyone will want to be lugging around a car battery (in size and weight) in a backpack to power them. NVIDIA isn't known for its low, low power specs.

      We capture calibration data for our artificial-intelligence-based wave propagation model and also capture results of using a FLIR Grasshopper3 12.3 MP colour USB3 sensor through a Canon EF 35 mm lens with an Arduino controlling the focus of the lens.

      A FLIR camera could be kind of cool to have in the glasses.
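      The camera-feedback calibration the quote mentions is, at bottom, a fitting problem: display known patterns, capture what the hardware actually produces, and fit learned correction terms so the model matches reality. A minimal sketch of the idea, assuming a made-up per-pixel gain as the only nonideality (entirely my own toy construction, nothing from the paper):

```python
import numpy as np

# Toy illustration of camera-in-the-loop calibration: an idealised
# "physical" forward model is augmented with a learned per-pixel gain,
# fitted from captured images so model predictions match the camera.

rng = np.random.default_rng(1)
n_pixels = 16

def physical_model(pattern):
    # idealised forward model: here just the identity
    return pattern

true_gain = 1.0 + 0.1 * rng.standard_normal(n_pixels)  # unknown hardware nonideality

patterns = rng.uniform(0, 1, (50, n_pixels))           # displayed test patterns
captures = patterns * true_gain                        # simulated "camera feedback"

# per-pixel least-squares fit of the learned gain
predicted = physical_model(patterns)
learned_gain = np.sum(predicted * captures, axis=0) / np.sum(predicted**2, axis=0)

print(f"max gain error: {np.max(np.abs(learned_gain - true_gain)):.2e}")
```

      The real system fits far richer learned components (the paper describes a full wave-propagation model), but the feedback loop has the same shape.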

      Our prototype AR display combines the fabricated metasurface waveguide with a HOLOEYE LETO-3 phase-only SLM. This SLM has a resolution of 1080×1920 pixels with a pitch of 6.4µm.

      The full-colour 3D results shown in Fig. 4b validate the high image quality our system achieves for both in- and out-of-focus regions of the presented digital content.

      All holograms are computed using a gradient descent computer-generated holography (CGH) algorithm ...

      So they want 1080x1920 full-color 3D holograms being projected onto the lens. Am I going to get a tan or burn marks from all the heat, or will the display come with its own internal cooling fans and such? Also there seems to be a big need for on-the-spot computations. Are they included in the goggles, or are those "in the cloud"? 'Cause I don't want latency in my display. That would be horrific in so many ways.
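      For the curious, the gradient-descent CGH the paper mentions can be sketched at toy scale: treat the SLM phase pattern as the free variable and descend on the mismatch between the propagated far field and a target image. This is a hypothetical construction, not the authors' algorithm -- it uses a bare FFT as the propagation model and finite-difference gradients instead of the autodiff frameworks real CGH pipelines rely on:

```python
import numpy as np

# Toy gradient-descent CGH: optimise an 8x8 phase-only pattern so the
# far-field amplitude |FFT(exp(i*phi))| matches a target. A flat phase
# focuses all energy into the DC bin, so that is the target here.

rng = np.random.default_rng(0)
N = 8
target = np.zeros((N, N))
target[0, 0] = float(N)  # all energy in the DC bin

def farfield_amp(phi):
    return np.abs(np.fft.fft2(np.exp(1j * phi))) / N

def loss(phi):
    return np.sum((farfield_amp(phi) - target) ** 2)

phi = rng.uniform(0, 2 * np.pi, (N, N))
initial_loss = loss(phi)

lr, eps = 0.01, 1e-5
for _ in range(200):
    # finite-difference gradient, one pixel at a time (slow but simple)
    grad = np.zeros_like(phi)
    for i in range(N):
        for j in range(N):
            d = np.zeros_like(phi)
            d[i, j] = eps
            grad[i, j] = (loss(phi + d) - loss(phi - d)) / (2 * eps)
    phi -= lr * grad

final_loss = loss(phi)
print(f"loss: {initial_loss:.3f} -> {final_loss:.3f}")
```

      Scaled up to 1080x1920 pixels at display frame rates, this is exactly the "big need for on-the-spot computations" being complained about.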

      Currently the new AR display can overlay images only across a narrow field of view. Whereas each human eye can supply a roughly 130-degree field of vision, and both eyes working together can provide a nearly 180-degree field of vision when looking forward, the new device can display images only over a roughly 12-degree arc in front of the viewer.

      I wonder which 12 degrees of my FOV I'm losing to this display. Will it be front and center, or will it be out in the periphery?
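      For scale, the quoted SLM specs pin down the panel size, and the pixel pitch bounds how steeply a pixelated SLM can diffract light in the first place -- one reason field of view is the hard part. A back-of-envelope check (my arithmetic, with an assumed green wavelength; the 12-degree figure comes from the full optical system, not this bare grating limit):

```python
import math

# Panel dimensions from resolution x pitch, and the first-order
# diffraction half-angle sin(theta) = lambda / (2 * pitch).

pitch = 6.4e-6                   # metres, from the quoted specs
rows, cols = 1080, 1920
height_mm = rows * pitch * 1e3   # active-area height in mm
width_mm = cols * pitch * 1e3    # active-area width in mm

wavelength = 520e-9              # assumed green wavelength, metres
theta = math.degrees(math.asin(wavelength / (2 * pitch)))

print(f"SLM active area: {height_mm:.2f} x {width_mm:.2f} mm")
print(f"first-order half-angle at 520 nm: {theta:.2f} deg")
```

      So the bare panel is roughly 7 x 12 mm and can only steer light a couple of degrees on its own; the metasurface waveguide is doing real work to get even 12 degrees out of it.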

  • (Score: 0) by Anonymous Coward on Wednesday May 15, @03:00AM

    by Anonymous Coward on Wednesday May 15, @03:00AM (#1356992)

    There are probably lots of patents out there on such stuff even though the "inventors" couldn't and didn't actually build the stuff. They just wrote or copied some sci-fi and managed to get patents on it.

    If you don't believe me, check this out:

    [Object] To provide a contact lens and storage medium capable of controlling an image pickup unit provided in the contact lens.
    [Solution] Provided is a contact lens including: a lens unit configured to be worn on an eyeball; an image pickup unit configured to capture an image of a subject, the image pickup unit being provided in the lens unit; and an image pickup control unit configured to control the image pickup unit.

    Who here believes Sony actually built that in/before 2016?