posted by Fnord666 on Sunday November 24 2019, @10:58PM   Printer-friendly
from the If-only-you-could-see-what-I’ve-seen-with-your-eyes dept.

Arthur T Knackerbracket has found the following story:

It's never good when a giant of the technology business describes your product as "a fool's errand".

But that's how Tesla's chief executive Elon Musk branded the laser scanning system Lidar, which is being touted as the best way for autonomous cars to sense their environment.

In April he said Lidar was "expensive" and "unnecessary". He believes that cameras combined with artificial intelligence will be enough to allow cars to roam the streets without a human driver.

Lidar emits laser beams and measures how long they take to bounce back from objects; these measurements provide so-called point clouds used to draw 3D maps of the surroundings.

These can be analysed by computers to recognise objects as small as a football or as big as a football field and can measure distances very accurately.
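The rangefinding step the article describes is simple time-of-flight arithmetic. As a minimal illustrative sketch (function name and numbers are invented for this example, not taken from any lidar vendor's API):

```python
# Hedged sketch: converting a laser pulse's round-trip time into a range.
# The pulse travels out and back, so one-way distance is half the round trip.

SPEED_OF_LIGHT_M_S = 299_792_458  # metres per second

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object from the pulse's round-trip time."""
    return SPEED_OF_LIGHT_M_S * round_trip_seconds / 2

# A pulse returning after 200 nanoseconds implies an object roughly 30 m away.
print(round(range_from_round_trip(200e-9), 2))  # prints 29.98
```

Repeating this for millions of pulses per second, each tagged with the beam's direction, is what yields the dense point clouds used for mapping.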

Despite Mr Musk, some argue these $10,000 (£7,750) pieces of kit are going to be essential. "For a car to reach anything close to full autonomy it will need Lidar," says Spardha Taneja of Ptolemus Consulting Group, a mobility consultancy.

But why are experts so divided, and how should investors judge this potential gold mine?


Original Submission

 
  • (Score: 2) by Immerman on Monday November 25 2019, @06:50AM (1 child)

    by Immerman (3985) on Monday November 25 2019, @06:50AM (#924429)

    > The only thing it provides is data, and not much differently than any other system.
    The only thing *any* sensor provides is data - but the *kind* of data is very different.
    Lidar, like sonar, directly performs 3-dimensional rangefinding to produce a point cloud. That makes it very hard to "overlook" a nearby obstacle unless the sensor itself has failed.
    In contrast, multi-camera systems collect data in the form of several 2-dimensional colour fields, which are then fed through an image-recognition AI to build a 3D model of the surrounding environment. When it works properly the result is similar, but image recognition is a complex and poorly understood field of AI research, with numerous poorly characterized weaknesses and exploits that can cause things to be misinterpreted, potentially generating a point cloud very different from the actual environment.
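The contrast drawn here can be made concrete. A lidar return is already a 3D point (a range plus the beam's known angles), while a stereo camera pair must first *match* the same feature in two images and then infer depth from the disparity. A minimal sketch, with all names and numbers hypothetical:

```python
# Illustrative contrast: direct 3D rangefinding vs. inferred depth.
import math

def lidar_point(range_m, azimuth_rad, elevation_rad):
    """One lidar return -> one 3D point; no recognition step involved."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return (x, y, z)

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Depth inferred from how far a matched feature shifts between the two
    images; a wrong match silently yields a wrong depth."""
    return focal_px * baseline_m / disparity_px

print(lidar_point(10.0, 0.0, 0.0))     # a point 10 m straight ahead
print(stereo_depth(700.0, 0.5, 35.0))  # 10.0 m, *if* the match was correct
```

The failure modes differ accordingly: the lidar path degrades mainly when the sensor itself fails, while the camera path also depends on the matching/recognition stage being right.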

  • (Score: 0) by Anonymous Coward on Monday November 25 2019, @10:00PM

    by Anonymous Coward on Monday November 25 2019, @10:00PM (#924661)

    > Image recognition is a complex and poorly understood field of AI research

    Wut.

    First off, LIDAR isn't "recognition" or AI, and neither is generating a point cloud from multiple concurrent (or sequential!) 2D images. There are AI-ey (neural net) methods to do so, but they aren't necessary.

    Second off, calculating a 3D point field from 2D images is late-90s math. Look at SIGGRAPH '99 for lots of examples, since that's when enough computational power was finally around to do it 'live' at 320x240x2 with the toy implementations. Versions using MMX could do it even earlier. Look at the patents on fixing cameras to a rigid bar with calibrated distances and angles; those are late 90s too.
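The "late-90s math" being referred to is closed-form triangulation for a rectified stereo pair on a rigid bar with a known baseline. A minimal sketch assuming an ideal pinhole model (all parameter values below are made up for illustration; no neural nets involved):

```python
# Hedged sketch: recovering a 3D point from matched pixel coordinates in a
# rectified stereo pair (the same feature appears on the same row v in both
# images, shifted horizontally by the disparity).

def triangulate(u_left, u_right, v, focal_px, baseline_m, cx, cy):
    """Closed-form triangulation for two calibrated pinhole cameras."""
    disparity = u_left - u_right           # horizontal pixel shift
    z = focal_px * baseline_m / disparity  # depth from similar triangles
    x = (u_left - cx) * z / focal_px       # back-project through left camera
    y = (v - cy) * z / focal_px
    return (x, y, z)

# Feature at column 400 (left image) and 365 (right image), row 240, with a
# 700 px focal length, 0.5 m baseline, and principal point at (320, 240):
print(triangulate(400, 365, 240, 700.0, 0.5, 320, 240))
```

No learning is required anywhere in this pipeline; the hard (and historically AI-adjacent) part is only the feature matching that produces the pixel correspondences fed into it.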