
SoylentNews is people

posted by Fnord666 on Sunday November 24 2019, @10:58PM   Printer-friendly
from the If-only-you-could-see-what-I’ve-seen-with-your-eyes dept.

Arthur T Knackerbracket has found the following story:

It's never good when a giant of the technology business describes your product as "a fool's errand".

But that's how Tesla's chief executive Elon Musk branded the laser scanning system Lidar, which is being touted as the best way for autonomous cars to sense their environment.

In April he said Lidar was "expensive" and "unnecessary". He believes that cameras combined with artificial intelligence will be enough to allow cars to roam the streets without a human driver.

Lidar emits laser beams and measures how long they take to bounce back from objects. This produces so-called point clouds, which are used to draw 3D maps of the surroundings.

These can be analysed by computers to recognise objects as small as a football or as big as a football field and can measure distances very accurately.
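The ranging math behind this is simple time-of-flight: distance is half the round-trip time multiplied by the speed of light. A minimal sketch (the function name is illustrative, not from any real lidar API):

```python
# Time-of-flight ranging as used by lidar:
# distance = (speed of light * round-trip time) / 2.
C = 299_792_458.0  # speed of light in m/s

def lidar_range(round_trip_s: float) -> float:
    """Distance in metres to the object that reflected the pulse."""
    return C * round_trip_s / 2.0

# A return after ~667 nanoseconds corresponds to an object ~100 m away.
print(round(lidar_range(667e-9), 1))  # 100.0
```

Repeating this for millions of pulses per second, each at a known beam angle, is what builds up the point cloud.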

Despite Mr Musk, some argue these $10,000 (£7,750) pieces of kit are going to be essential. "For a car to reach anything close to full autonomy it will need Lidar," says Spardha Taneja of Ptolemus Consulting Group, a mobility consultancy.

But why are experts so divided, and how should investors judge this potential gold mine?


Original Submission

  • (Score: 3, Disagree) by edIII on Monday November 25 2019, @12:18AM (14 children)

    by edIII (791) on Monday November 25 2019, @12:18AM (#924340)

    LIDAR will remain with us for a long time. What it provides is data, not much differently from any other sensor system. From the AI's perspective, it's simply another dataset it can train on and extract useful information from: objects, collision courses, etc. It's effective, not unreasonably expensive, and can get cheaper.

    Musk is a moron. He's arguing that LIDAR is kind of like ears, and that ears are inferior to vision, or some such bullshit. For safety, autonomous anything should have a highly redundant and reliable sensor package. Like a human being: two eyes, two ears, ten fingers, two hands, an entire body of skin. For sensing our environment we have a ton of data, and the more data the better. When we lose our eyes, our ears and hands/fingers can take over providing information about our environment. You lose a lot of function, but still retain some.

    So LIDAR can be "blinded" under some conditions, and is objectively inferior to other sensor technologies in others. Cameras are still objectively inferior to LIDAR in extremely poor lighting conditions, and both systems are subject to failure if their wavelengths are saturated or sabotaged. What they do well is complement one another.

    The combination of both systems, along with as many others as reasonably possible, is what will create the sensor system we need. Anybody hawking a single sensor type as the solution for anything is most likely just trying to sell you a product.

    Autonomous systems are much better off when they have a half-dozen reliable ways to sense an object in front of them.
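The redundancy argument above can be illustrated with a toy sketch: fuse whatever range estimates the working sensors report, and degrade gracefully as sensors drop out. All names and readings here are hypothetical.

```python
# Toy illustration of sensor redundancy: average the distance estimates
# from whichever sensors are currently working (None = sensor blinded).
from typing import Optional

def fuse_ranges(readings: dict) -> Optional[float]:
    """Average the valid range readings; None if every sensor failed."""
    valid = [r for r in readings.values() if r is not None]
    if not valid:
        return None  # total sensor failure: no estimate possible
    return sum(valid) / len(valid)

# Camera blinded, but lidar and radar still agree on roughly 42 m.
print(fuse_ranges({"camera": None, "lidar": 42.1, "radar": 41.9}))  # ~42.0
```

A real fusion stack would weight sensors by confidence (e.g. a Kalman filter) rather than averaging, but the failure-tolerance property is the same.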

    --
    Technically, lunchtime is at any moment. It's just a wave function.
  • (Score: 2) by RamiK on Monday November 25 2019, @12:53AM

    by RamiK (1813) on Monday November 25 2019, @12:53AM (#924347)

    The real problem with LIDAR research that I suspect Musk has in mind is that it's likely not monetizable for consumer products: once it goes solid state and becomes usable, every other arms-manufacturing country will reverse-engineer it, or invalidate the patents, for its missiles and drones.

    And seeing how you'd still need cameras to read street signs and lights and whatnot, it's probably better to focus on cameras and AI, where a service model can work out for years to come.

    --
    compiling...
  • (Score: 2) by FatPhil on Monday November 25 2019, @01:13AM

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Monday November 25 2019, @01:13AM (#924350) Homepage
    Yeah, I concur, Musk is just pushing the thing that he's financially invested in, not much scandal there.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 1) by fustakrakich on Monday November 25 2019, @01:22AM (5 children)

    by fustakrakich (6150) on Monday November 25 2019, @01:22AM (#924354) Journal

    No, I think he was arguing that the thing is a big ugly gun turret with heavy but fragile machinery making a lot of monkey motion inside sitting on your roof, and it is. We have to go solid state with this stuff. The only moving parts should be the wheels.

    --
    La politica e i criminali sono la stessa cosa..
    • (Score: 2) by JoeMerchant on Monday November 25 2019, @03:44AM (4 children)

      by JoeMerchant (3937) on Monday November 25 2019, @03:44AM (#924388)
      • (Score: 1) by fustakrakich on Monday November 25 2019, @04:04AM (3 children)

        by fustakrakich (6150) on Monday November 25 2019, @04:04AM (#924392) Journal

        MEMS (micro-electro-mechanical system) mirrors. And with the lasers, there's still too much activity. But for now, what the hell: if the thing runs for a long time without any problems, go with it. I still prefer more passive, inherently more robust sensors. A "camera" can be set up for any frequency, not just visible light. Put a bunch of them on the vehicle, and the computer can triangulate pretty good.

        --
        La politica e i criminali sono la stessa cosa..
        • (Score: 2) by edIII on Monday November 25 2019, @09:20PM (2 children)

          by edIII (791) on Monday November 25 2019, @09:20PM (#924641)

          Pretty good? Would you fly in an autonomous vehicle that had "pretty good" spatial recognition?

          The computer has to triangulate through some pretty complex math, but much worse than that, it relies on the assumption that the pixels represent an object. I know AI is getting pretty dang good, but there is NO substitute for the accuracy of LIDAR at the moment. The cameras give a very good guess; LIDAR gives a definitive understanding of location. You can FOOL cameras, but to my knowledge, you can't fool LIDAR.

          My point is to never fanboy over one sensor type, but to combine all of the data. That way it can operate with limited function when only LIDAR data is available, and likewise when only camera data is available. Those aren't the only two sensor types either.

          --
          Technically, lunchtime is at any moment. It's just a wave function.
          • (Score: 1) by fustakrakich on Monday November 25 2019, @09:44PM (1 child)

            by fustakrakich (6150) on Monday November 25 2019, @09:44PM (#924652) Journal

            Not a "fanboy" of anything. And a centimeter is 'pretty good'. The only real requirement is that it be solid state. It's perfectly fine to use tiny mirrors if they can hold up, for now. I still prefer that something with no moving parts be developed for the long run.

            You can FOOL cameras, but to my knowledge, you can't fool LIDAR.

            I think spray paint or mud on the glass works for both. That's why low-frequency sensors should also be used alongside. Like I said above, just use multiple sensors of various types. With lots of them you can be more fault-tolerant, and plenty accurate.

            --
            La politica e i criminali sono la stessa cosa..
            • (Score: 2) by kazzie on Tuesday November 26 2019, @05:14AM

              by kazzie (5309) Subscriber Badge on Tuesday November 26 2019, @05:14AM (#924791)

              So, something like compound eyes? That'll look alright on a Beetle.

  • (Score: 3, Interesting) by JoeMerchant on Monday November 25 2019, @03:36AM (2 children)

    by JoeMerchant (3937) on Monday November 25 2019, @03:36AM (#924386)

    Cameras are still objectively inferior to LIDAR in extremely poor lighting conditions

    Cool thing they came up with for automobiles back in the 1900s: headlights.

    I've often wondered if computer vision systems for cars could isolate an IR band for the cameras and establish some kind of TDM with oncoming cars so both could use super bright high beams, just each of them only being on for something like a 25% duty cycle, with phase controlled by direction of travel.
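The time-division idea could be sketched as repeating time slots, with each car's slot chosen by its direction of travel so any one emitter runs at a 25% duty cycle. This is a hypothetical illustration of the scheme as described, not any real standard; all constants are assumed.

```python
# Hypothetical time-division scheme for IR high beams: four repeating
# slots, slot assignment by compass heading, 25% duty cycle per car.
SLOT_COUNT = 4
SLOT_MS = 10  # assumed slot length in milliseconds

def slot_for_heading(heading_deg: float) -> int:
    """Map a compass heading to one of four slots (N/E/S/W quadrants)."""
    return int(((heading_deg + 45) % 360) // 90)

def beams_on(heading_deg: float, time_ms: int) -> bool:
    """True only during this car's slot, giving a 25% duty cycle."""
    current_slot = (time_ms // SLOT_MS) % SLOT_COUNT
    return current_slot == slot_for_heading(heading_deg)
```

Oncoming cars differ by ~180 degrees in heading, so they land in different quadrant slots and never emit at the same time; in practice the cars would also need a shared clock (e.g. GPS time) to stay in phase.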

    --
    🌻🌻 [google.com]
    • (Score: 2) by edIII on Monday November 25 2019, @09:27PM (1 child)

      by edIII (791) on Monday November 25 2019, @09:27PM (#924643)

      So what? I'm talking about recovering from, or being more reliable in, adverse conditions. Cameras could be set up for multiple spectrums, including infrared. Heck, you could hook it up with night vision and IR headlights and drift silently through the night :)

      However, cameras are not foolproof, and they're still fundamentally making assumptions. LIDARS don't make assumptions. The laser is informing us that there is absolutely no doubt that something reflected the laser light back to us, and it was exactly X distance from the collector at Y time. That's superior data all day long.

      Pick any sensor type and we can find limitations with it. BTW, headlights can actually be the problem too, and not the solution. If the camera's spectrum is completely saturated (snow blind), then no assumptions are being made anymore about the pixels, and spatial information is no longer being derived by the AI. This is where a different sensor type that explicitly operates within the "blind spots" of other sensors can help.

      In the end you guys are still arguing for just eyes, ears, or noses. Why not all of the above?

      --
      Technically, lunchtime is at any moment. It's just a wave function.
      • (Score: 2) by JoeMerchant on Tuesday November 26 2019, @02:30AM

        by JoeMerchant (3937) on Tuesday November 26 2019, @02:30AM (#924757)

        LIDARS don't make assumptions

        No, people make assumptions. Assumptions like: the laser has free space to travel through and will not be obscured by dust, bugs, rain, sleet or snow; backscatter from the other 200 self driving cars on the nearby road will not confound the time of flight calculations of MY LIDAR, and hoodlums with spray-glitter won't hit my car with it.

        Optical camera users make lots of similar assumptions, and sonar - whoa Nellie, assumption city.

        That's superior data all day long.

        No, that's active illumination, and it works until it doesn't.

        If the camera's spectrum is completely saturated (snow blind)

        Then you need to spend an extra $0.50 on the lens setup and put in some kind of iris, or polarizing light damper, or whatever other cheap and reliable light amplitude reduction trick has been invented since the first pinhole camera.

        you guys are still arguing for just eyes, ears, or noses. Why not all of the above?

        Not so much arguing, just pointing out that multiple sensors are better than single points of failure, and if your sensor budget is $10K, do you blow it all on a single lidar or 100 fixed optical cameras scattered all over the vehicle? Of course, if you've got $20K then it makes even more sense not to just blow it on 2 LIDAR.

        Why not drive by ears, or noses? Nope, not even starting a rebuttal for that.

        --
        🌻🌻 [google.com]
  • (Score: 2) by Immerman on Monday November 25 2019, @06:50AM (1 child)

    by Immerman (3985) on Monday November 25 2019, @06:50AM (#924429)

    > The only thing it provides is data, and not much differently than any other system.
    The only thing *any* sensor provides is data - but the *kind* of data is very different.
    Lidar, like sonar, directly performs 3-dimensional rangefinding to produce a point cloud. That makes it very hard to "overlook" a nearby obstacle unless the sensor itself has failed.
    In contrast, multi-camera systems collect data in the form of several 2-dimensional color fields, which are then fed through an image-recognition AI to build a 3D model of the surrounding environment. When it works properly the result is similar, but image recognition is a complex and poorly understood field of AI research, and there are numerous poorly characterized weaknesses and exploits that can cause things to be misinterpreted, potentially generating a point cloud very different from the actual environment.

    • (Score: 0) by Anonymous Coward on Monday November 25 2019, @10:00PM

      by Anonymous Coward on Monday November 25 2019, @10:00PM (#924661)

      > Image recognition is a complex and poorly understood field of AI research

      Wut.

      First off, LIDAR isn't "recognition" or AI, and neither is generating a point cloud from multiple concurrent (or sequential!) 2D images. There are AI-ey (neural net) methods to do so, but they aren't necessary.

      Second off, calculating a 3D point field from 2D images is late-90s math. Look at SIGGRAPH '99 for lots of examples, since that's when enough computational power was finally around to do it 'live' at 320x240x2 with the toy implementations. Versions using MMX could do it even earlier. Look at the patents on fixing cameras to a bar with calibrated distances and angles; those are late 90s.
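The core of that late-90s math is plain geometry. For an idealized rectified stereo pair, depth follows directly from the pixel disparity between the two images; no neural net involved. A minimal sketch (all parameter values are illustrative):

```python
# Stereo triangulation for an idealized rectified camera pair:
# depth = focal_length * baseline / disparity.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in metres to the point matched in both camera images."""
    if disparity_px <= 0:
        raise ValueError("zero disparity: point at infinity or bad match")
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 0.5 m camera separation, 10 px disparity -> 35 m.
print(stereo_depth(700, 0.5, 10))  # 35.0
```

The hard (and AI-adjacent) part is not this formula but reliably finding which pixel in the left image corresponds to which pixel in the right one.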

  • (Score: 2) by Runaway1956 on Monday November 25 2019, @10:47AM

    by Runaway1956 (2926) Subscriber Badge on Monday November 25 2019, @10:47AM (#924468) Journal

    Agreed. Almost all animals have obvious multiple senses. Some senses are better developed in some animals, of course, and animals such as snakes use their senses in a manner that seems alien to humans. But I'm not aware of any animals without multiple senses. Quite possibly animal life on the microscopic scale uses multiple senses too.

    It may be possible to overwhelm a system with too much data, but cameras and lidar aren't going to overwhelm any but the cheapest of computers.

    Musk is being stupid here.