
SoylentNews is people

posted by martyb on Thursday March 07 2019, @08:13PM   Printer-friendly
from the can-you-picture-this? dept.

Xiaomi Teams Up with Light for Multi-Module Smartphone Cameras

Xiaomi and Light, a computational imaging firm, have announced at Mobile World Congress that they will work together to develop new multi-module cameras for smartphones. The two companies promised that the jointly developed cameras will feature DSLR-level capabilities, but did not disclose when the first product from the partnership is expected to come to fruition.

Light specializes in computational imaging solutions using multiple camera arrays. The company has gone so far as to develop its own chip that can work with 6-, 12-, or 18-camera arrays. And while Xiaomi and Light aren't specifying just how big a camera array they're looking to develop, we're likely looking at something at the lower bound of those numbers, if only due to the limited size of smartphones. For reference, a 6-module camera would be very similar to what Nokia has done with the Nokia 9 PureView.
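One core idea behind these multi-module computational cameras is that fusing several simultaneous captures of the same scene beats any single capture. As a rough illustration only (the scene values, noise model, and frame count below are invented, not Light's actual pipeline), averaging N noisy frames cuts random noise by roughly a factor of sqrt(N):

```python
import random
import statistics

def capture(scene, noise=10.0):
    """Simulate one camera module's noisy capture of a scene."""
    return [p + random.gauss(0, noise) for p in scene]

def fuse(captures):
    """Fuse multiple captures by per-pixel averaging."""
    n = len(captures)
    return [sum(px) / n for px in zip(*captures)]

random.seed(42)
scene = [100.0] * 1000  # a flat gray patch, as ground truth

single = capture(scene)                              # one module
fused = fuse([capture(scene) for _ in range(16)])    # 16 modules fused

err_single = statistics.pstdev(p - 100.0 for p in single)
err_fused = statistics.pstdev(p - 100.0 for p in fused)
print(err_single, err_fused)  # fused error is roughly 4x smaller
```

Real systems do far more than averaging (alignment, per-module exposure, super-resolution), but this is the basic statistical win that makes a 6- or 16-module array attractive.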

Cover the entire back of a smartphone with cameras, then gingerly hold it using the corners.

Related: Meta-Lens Works in the Visible Spectrum, Sees Smaller Than a Wavelength of Light
A Pocket Camera with Many Eyes - Inside the Development of Light
Caltech Replaces Lenses With Ultra-Thin Optical Phased Array
Nokia (HMD Global) Partners with Zeiss for Optics Capabilities
Google Reportedly Acquires Lytro, Which Made Refocusable Light Field Cameras
LG's V40 Smartphone Could Include Five Cameras
Leaked Image Shows Nokia-Branded Smartphone With Five Rear Cameras


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1, Interesting) by Anonymous Coward on Thursday March 07 2019, @10:37PM (1 child)

    by Anonymous Coward on Thursday March 07 2019, @10:37PM (#811367)

    I'd like to see software create arrays from cameras on multiple cellphones, rather than just on a single phone.
    Imagine concert footage that combines the videos from dozens of attendees' cameras into a single high-resolution scene.
    Bonus points for also recording depth and recreating the venue in VR.

  • (Score: 2) by takyon on Thursday March 07 2019, @11:50PM

    by takyon (881) <{takyon} {at} {soylentnews.org}> on Thursday March 07 2019, @11:50PM (#811403) Journal

    That was done years ago:

    http://www.kurzweilai.net/how-to-create-a-seamless-panorama-video-from-multiple-cameras [kurzweilai.net]

    https://petapixel.com/2017/09/23/canons-new-virtual-camera-system-something-straight-sci-fi/ [petapixel.com]

    https://www.nbcnews.com/mach/innovation/virtual-reality-major-sports-are-bringing-stadium-you-n716506 [nbcnews.com]

    There are probably other examples, and more work is needed to make it suitable for VR and practical applications, but it's in the works and clearly feasible. Given the commercial potential, pre-positioned high-resolution cameras, rather than attendees' cameras, will likely be used in certain cases such as sports games.

    You have some smartphones with depth sensors, but is that information ending up in video containers? The easiest way to prove this concept would be to use pre-existing videos rather than doing it live. It would be great to save dozens of online videos of some event from varying angles/timestamps, then combine them all years later using a high-end system and machine learning algorithms.
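A first step in that "combine saved videos later" idea would just be figuring out which clips overlap in time. A minimal sketch, with invented clip data and hypothetical names (`clips_at` is not from any real tool):

```python
# Hypothetical first step in combining crowd-sourced videos of one event:
# find which clips cover a given moment, so frames from the same instant
# can later be aligned and fused. Start/end times are in seconds from
# some common reference (e.g. upload metadata or audio sync).
clips = [
    {"id": "phoneA", "start": 0.0,  "end": 95.0},
    {"id": "phoneB", "start": 30.0, "end": 180.0},
    {"id": "phoneC", "start": 60.0, "end": 120.0},
]

def clips_at(t, clips):
    """Return ids of clips whose recordings cover time t."""
    return [c["id"] for c in clips if c["start"] <= t <= c["end"]]

print(clips_at(70.0, clips))  # all three phones were recording
print(clips_at(10.0, clips))  # only phoneA
```

The hard parts, of course, are the ones this skips: precise sub-second synchronization (often done via audio fingerprinting) and geometric registration of the different viewpoints.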

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]