
posted by janrinok on Tuesday February 24 2015, @06:17AM
from the coming-soon-'robot-races' dept.

The racetrack is the ultimate test of driving skill: managing power, traction, and braking to produce the fastest times. Now the BBC reports that engineers at Stanford University have raced their souped-up Audi TTS, dubbed ‘Shelley’, around a racetrack at speeds above 120 mph. When they time-tested it against David Vodden, the racetrack CEO and amateur touring class champion, the driverless race car was faster by 0.4 of a second. "We’ve been trying to develop cars that perform like the very best human drivers,” says Professor Chris Gerdes, who tested Shelley at Thunderhill Raceway Park in Northern California. “We’ve gotten to the point of being fairly comparable to an expert driver in terms of our ability to drive around the track.”

To get the cars up to speed, the Stanford team studied drivers, even attaching electrodes to their heads to monitor brain activity in the hope of learning which neural circuits are working during difficult manoeuvres. Scientists were intrigued to find that during the most complex tasks, the experts used less brain power. They appeared to be acting on instinct and muscle memory rather than using judgement as a computer program would. Although there was previously very little difference between the path a professional driver takes around the course and the route charted by Shelley's algorithms, until now the very best human drivers were still faster around the track, if only by a few seconds. Now the researchers predict that within the next 15 years, cars will drive with the skill of Michael Schumacher. What remains to be seen is how Shelley will do when running fender to fender with real human race drivers.

  • (Score: 4, Interesting) by kaszz on Tuesday February 24 2015, @06:24AM

    by kaszz (4211) on Tuesday February 24 2015, @06:24AM (#148962) Journal

    The problem isn't speed, but reliability. Enjoying a 90 km/h ride and having the vehicle computer suddenly decide to go abruptly to the side because of some corner case isn't nice. That's where humans come into play. They have judgment; computers don't.
    Every attempt to turn computer programming into a linear art will fail, because its nature isn't that way!

    • (Score: 2) by tibman on Tuesday February 24 2015, @06:32AM

      by tibman (134) Subscriber Badge on Tuesday February 24 2015, @06:32AM (#148968)

      The fine summary says that using less judgement in a complex (corner?) case is what humans do. Why can't a computer do the same? Yes, you may abruptly fly into a wall. But if that's what the human driver was going to do too, then I think the system is reliable enough, lol. Obviously the computer gets to learn from the incident, where the normal driver likely would not.

      --
      SN won't survive on lurkers alone. Write comments.
      • (Score: 4, Insightful) by kaszz on Tuesday February 24 2015, @06:40AM

        by kaszz (4211) on Tuesday February 24 2015, @06:40AM (#148971) Journal

        Measuring the thought process of humans is a very imprecise art. The human brain has tons of layers of control algorithms that science has yet to unravel.

        • (Score: 2) by frojack on Tuesday February 24 2015, @07:37AM

          by frojack (1554) Subscriber Badge on Tuesday February 24 2015, @07:37AM (#148984) Journal

          To this, you have to add that little deke that some drivers will use to prevent other drivers from passing, or, if that isn't enough, that little bump in the back stretch.

          There is a lot more going on than just getting around the track.

          --
          No, you are mistaken. I've always had this sig.
    • (Score: 3, Insightful) by Anonymous Coward on Tuesday February 24 2015, @07:46AM

      by Anonymous Coward on Tuesday February 24 2015, @07:46AM (#148989)

      Situations in which a driverless car can't cope have declined and will continue to decline dramatically before their mass market use. In the meantime, humans will continue to fall asleep behind the wheel, will become cognitively "bored" during long trips, drive drunk or high, have wildly varying levels of driving ability, will continue to drive while losing vision and cognitive ability to aging, experience cardiac arrest or seizures, will be distracted by phones and passengers, and have the same comparatively poor reaction times (hundreds of milliseconds).

      These systems have been driven hundreds of thousands of miles, exposed to varying real world conditions, and are being developed by many competing companies and universities. Even if eventual commercial models fail an edge case, they will save many more lives than they will destroy. Driverless cars have the reliability edge, not humans.

      • (Score: 2) by c0lo on Tuesday February 24 2015, @02:17PM

        by c0lo (156) Subscriber Badge on Tuesday February 24 2015, @02:17PM (#149084) Journal

        Situations in which a driverless car can't cope have declined and will continue to decline dramatically before their mass market use.

        So, your estimate, please: how many deaths away are we from the point at which driverless cars get approved on public roads?

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0
        • (Score: 2) by quacking duck on Tuesday February 24 2015, @03:01PM

          by quacking duck (1395) on Tuesday February 24 2015, @03:01PM (#149115)

          So, your estimate, please: how many deaths away are we from the point at which driverless cars get approved on public roads?

          Millions more.

          I'm assuming you're talking about deaths due to human drivers, of course.

          And the approval process will be a highly emotional, illogical political fight. Humans as a group seem incapable of the higher reasoning that would let them objectively evaluate evidence, odds, and risk/reward. Witness the anti-vaxxers, blind supporters of any political party, and people who keep gambling and buying lottery tickets.

        • (Score: 2) by gnuman on Tuesday February 24 2015, @05:12PM

          by gnuman (5013) on Tuesday February 24 2015, @05:12PM (#149189)

          So, your estimate, please: how many deaths away are we from the point at which driverless cars get approved on public roads?

          You may want to note that approximately 1,240,000 people died on the roads last year.

          http://en.wikipedia.org/wiki/List_of_countries_by_traffic-related_death_rate [wikipedia.org]

          36,000 in the USA alone. It's like a 9/11 every month on the roads in the USA, but I guess that's "normal" so no one cares. Instead, everyone just engages in stupid comments about how we should be driving faster, presumably because we don't kill each other fast enough? The acceptability of this as "normal" reminds me of a quote from the Joker in The Dark Knight.

          http://www.imdb.com/character/ch0000180/quotes [imdb.com]

          Joker: I just did what I do best. I took your little plan and I turned it on itself. Look what I did to this city with a few drums of gas and a couple of bullets. Hmmm? You know... You know what I've noticed? Nobody panics when things go "according to plan." Even if the plan is horrifying! If, tomorrow, I tell the press that, like, a gang banger will get shot, or a truckload of soldiers will be blown up, nobody panics, because it's all "part of the plan". But when I say that one little old mayor will die, well then everyone loses their minds!

          So, people killing themselves on the roads by the millions is OK. That's "normal" somehow. But if software were to glitch and 100 people a year died on the roads because of software problems (before these edge cases could be fixed), then indeed that would no longer be acceptable?

          In the US, you have a 1% chance of getting killed on the roads in your lifetime. That's more than any other non-medical cause. You should not be worried about getting killed by guns in the US, or by terrorists in Pakistan. You should be worried about getting killed on the roads in either country.
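          As a rough sanity check on the two figures above (the 1% lifetime risk and the "9/11 every month" comparison), here is a back-of-envelope calculation; the population and lifespan numbers are my assumptions, not from the post:

```python
# Back-of-envelope check of the figures quoted above.
annual_deaths = 36_000        # US road deaths per year, as quoted
population = 320_000_000      # assumed US population, mid-2010s
lifespan_years = 79           # assumed average US lifespan

# Lifetime odds of dying on the road, assuming a constant annual toll.
lifetime_risk = annual_deaths * lifespan_years / population
# Monthly toll, for the "9/11 every month" comparison (~2,977 killed on 9/11).
monthly_deaths = annual_deaths / 12

print(f"{lifetime_risk:.1%}")  # 0.9% -- consistent with "a 1% chance"
print(monthly_deaths)          # 3000.0
```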

    • (Score: 2) by c0lo on Tuesday February 24 2015, @02:14PM

      by c0lo (156) Subscriber Badge on Tuesday February 24 2015, @02:14PM (#149080) Journal

      Enjoying a 90 km/h ride and having the vehicle computer suddenly decide to go abruptly to the side because of some corner case isn't nice. That's where humans come into play.

      You mean humans have got a taste for playing in driverless cars that brake/veer suddenly?
      If so, their play won't last long enough to make for an enjoyable experience.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0
    • (Score: 2) by darkfeline on Tuesday February 24 2015, @06:58PM

      by darkfeline (1030) on Tuesday February 24 2015, @06:58PM (#149223) Homepage

      No, computers have judgement just like people; it's just a different kind of judgement.

      Let me begin by demonstrating a point with facial recognition. Computers have now gotten pretty good at facial recognition, but they still make mistakes, labeling some faces as non-faces and some non-faces as faces. Humans, hey, they must be a lot better at facial recognition, right? Except people see faces in non-faces all the time (face in toast, face in mirror, face on the side of a mountain, faces in clouds, etc.), and I'd imagine there are times when people fail to recognize faces as well (ignoring special cases like prosopagnosia).
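      The two kinds of mistake described above are just the two error cells of a confusion matrix; a toy sketch with invented counts, purely to make the framing concrete:

```python
# Toy confusion-matrix view of face detection errors. Counts are invented.
def error_rates(tp, fp, fn, tn):
    fp_rate = fp / (fp + tn)  # non-faces called faces (toast, clouds, mountains)
    fn_rate = fn / (fn + tp)  # real faces missed
    return fp_rate, fn_rate

# Both humans and classifiers sit somewhere on this trade-off;
# they just occupy different points.
print(error_rates(tp=95, fp=8, fn=5, tn=92))  # (0.08, 0.05)
```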

      Humans are biased; they think only human judgement counts as judgement, or that human judgement is somehow less fallible than machine judgement, or animal judgement, or scientific judgement, or statistical judgement, etc. But that's not true: in some cases human judgement may be superior, but in most cases it's not. We humans are royally bad at judging; it doesn't help that most, if not all, of our decisions are made spontaneously and only justified logically after the fact.

      A computer may make an error in driving judgement: it detects a person where there is none, swerving and putting its rider at risk. A human also makes errors in driving judgement: rubbernecking at accidents, new construction, new road signs, or a hot woman; misjudging safe driving speed or current driving conditions; misjudging distances; misjudging other drivers' behavior.

      Now let us ask, which makes more errors resulting in more net cost (damage)?
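      That question is an expected-cost comparison; a toy sketch with invented rates and costs, purely to illustrate the framing:

```python
# Toy expected-cost comparison; every number here is an invented placeholder.
def expected_cost(errors_per_mile, cost_per_error, miles):
    """Expected damage = error rate * average cost per error * exposure."""
    return errors_per_mile * cost_per_error * miles

miles = 12_000  # assumed annual mileage
# Rare but catastrophic errors vs. frequent but mild ones:
human = expected_cost(errors_per_mile=1e-4, cost_per_error=500.0, miles=miles)
computer = expected_cost(errors_per_mile=2e-4, cost_per_error=50.0, miles=miles)
print(human > computer)  # with these made-up numbers, the human costs more
```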

      --
      Join the SDF Public Access UNIX System today!
  • (Score: 2) by TheLink on Tuesday February 24 2015, @07:44AM

    by TheLink (332) on Tuesday February 24 2015, @07:44AM (#148987) Journal
    I figure that experts use less brain power than novices because they have a more accurate model of the context/world/universe in question AND they have trained/prepared a fairly complete set of preprogrammed/memorized actions relevant to that context (I think it's more like model/simulation + lookup table than just a lookup table).

    As long as the relevant part of the world outside is similar enough to their model "inside" that they can select from preprogrammed actions rather than think of many new ones, they don't need to use much brain power.

    But if the world behaves in a completely unexpected way, the preprogrammed actions are no longer valid and they need to think of new actions. The same applies if the universe behaves in an expected way, but they have not prepared anything for that scenario.

    The model of the "universe" could include the driver's own actions for the desired future, so it's a matter of updating/correcting the internal "universe" model with the inputs and making the modelled "driver" outputs reality so that the world outside matches the desired world/future inside. As long as nothing weird happens, the driver is in the "zone" and doesn't actually have to think much.
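    The model-plus-lookup-table idea above can be sketched roughly like this; the situations and actions are invented labels, not a real controller:

```python
# A trained expert: cheap table lookups for familiar situations,
# expensive deliberate planning only for novel ones.
preprogrammed = {
    "entering_corner": "brake_then_turn_in",
    "apex": "hold_line",
    "straight": "full_throttle",
}

def deliberate_planning(situation):
    # Stand-in for slow, effortful reasoning about a novel situation.
    return f"improvise_for_{situation}"

def act(situation):
    # Cheap path first: recall a trained response. Expensive path: think.
    return preprogrammed.get(situation) or deliberate_planning(situation)

print(act("straight"))        # full_throttle  (low "brain power")
print(act("deer_on_track"))   # improvise_for_deer_on_track
```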

    Maybe consciousness is what happens when you recursively simulate yourself (but would a special computer be required for the true effect? e.g. a quantum parallel computer).
  • (Score: 0) by Anonymous Coward on Tuesday February 24 2015, @04:58PM

    by Anonymous Coward on Tuesday February 24 2015, @04:58PM (#149179)

    I am not impressed ... wake me up when a robot can have the sublime skills of Ayrton Senna!!

    • (Score: 1) by m2o2r2g2 on Wednesday February 25 2015, @02:52AM

      by m2o2r2g2 (3673) on Wednesday February 25 2015, @02:52AM (#149402)

      The same driver that crashed into a wall and killed himself?

      I have great respect for the late legend, but your choice of driver does not support your point (or I missed the sarcasm).

  • (Score: 3, Interesting) by modest on Tuesday February 24 2015, @05:30PM

    by modest (3494) on Tuesday February 24 2015, @05:30PM (#149199)

    The win by 0.4 seconds can probably be attributed to the missing weight of the driver.