Yes, the phrase used in the headline is a direct quote. Tesla CEO Elon Musk is teasing new details about the company's future, set to be announced later this week. The news may be a reaction to slipping stock prices and trouble with regulators following a recent crash.
While offering no other details, the master plan is likely a follow-up to a 2006 blog post titled "The Secret Tesla Motors Master Plan (just between you and me)," in which Musk laid out his vision for Tesla, including eventual plans for the Tesla Roadster, the Model S sedan and the upcoming (and more affordable) Model 3 sedan.
It may not be a bad idea for Musk to roll out some optimistic news. In recent weeks, the electric car company has become the subject of a federal safety investigation following at least two crashes — one fatal — possibly related to its highly touted autopilot feature; Tesla has announced a drop in Model S shipments; and Musk himself has come under fire after proposing that Tesla purchase SolarCity, of which he is also chairman, much to the chagrin of shareholders.
[...] Tesla shares are down almost 10% year-to-date, and down more than 16% in the past 12 months.
You may also be interested in this NYT editorial about "Lessons From the Tesla Crash".
(Score: 1, Insightful) by Anonymous Coward on Tuesday July 12 2016, @09:53AM
My concern is that the instrumentation is substantially inferior to a human. For instance, the forward-looking camera in the Tesla is HD resolution (1280x720) with a wide-angle lens that works out to 5.6 arc-min of resolution per pixel, compared to the human eye, which resolves about 1 arc-min.
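The 5.6 arc-min/pixel figure can be sanity-checked with simple arithmetic. The horizontal field of view is not stated in the comment; assuming a wide-angle lens covering roughly 120 degrees spread across 1280 pixels (an assumption, not a Tesla spec) recovers the quoted number:

```python
# Back-of-the-envelope check of the quoted 5.6 arc-min/pixel figure.
# ASSUMPTION: ~120 degree horizontal field of view (not stated in the comment).
fov_deg = 120          # assumed horizontal field of view of the wide-angle lens
h_pixels = 1280        # stated horizontal resolution (HD, 1280x720)

# 1 degree = 60 arc-minutes, spread evenly across the sensor width
arcmin_per_pixel = fov_deg * 60 / h_pixels
print(f"{arcmin_per_pixel:.3f} arc-min per pixel")  # -> 5.625
```

By the same formula, matching the eye's ~1 arc-min over that 120-degree field would need a sensor about 7200 pixels wide.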
Human vision is sharp only in a very narrow angle of view. Scene awareness is (arguably badly) established through more-or-less constant scanning of directions of interest, which are chosen by the mind (experience). Instead of a wide-angle lens, they should use scanning cameras on gimbals.
(Score: 2) by Scruffy Beard 2 on Tuesday July 12 2016, @02:50PM
It may still be cheaper and more reliable to just push up the resolution and processing. It probably comes down to power budget.
(Score: 1) by tftp on Wednesday July 13 2016, @12:33AM
Scene awareness is (arguably badly) established through more-or-less constant scanning of directions of interest, which are chosen by the mind (experience). Instead of a wide-angle lens, they should use scanning cameras on gimbals.
It's much easier to process a multi-megapixel scene at 60 fps than to gain scene awareness. The latter requires intelligence. I read that one of the tough problems for self-driving cars is to distinguish between an empty plastic bag that gently drifts across the road and a heavy concrete block that just lies in wait for you. Just like that metal piece that skewered one of the Teslas a couple of years ago (no autopilot was involved then).
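To put a number on "a multi-megapixel scene at 60 fps": assuming a 2-megapixel frame (1920x1080, a figure chosen for illustration, not from the comment) at 3 bytes per pixel for RGB, the raw data rate is:

```python
# Rough data-rate estimate for processing "a multi-megapixel scene at 60 fps".
# ASSUMED figures: 1920x1080 frame, 3 bytes/pixel (uncompressed RGB).
width, height = 1920, 1080
bytes_per_pixel = 3
fps = 60

bytes_per_sec = width * height * bytes_per_pixel * fps
print(f"{bytes_per_sec / 1e6:.0f} MB/s")  # -> 373 MB/s
```

That throughput is well within reach of commodity hardware, which is the point: the hard part is not moving the pixels but deciding what they mean.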
Scene awareness is also not something that humans are born with. It takes a lot of training to scan correctly, even though humans can easily classify objects and sort them out by the threat they represent. Can a computer tell the difference between a squirrel and a child, for example? That is an important difference! Can the camera realize that the shape on the road ahead is just a shadow? Or that it is NOT a shadow?