from the listening-and-learning-and-yearning-to-run dept.
Tech Review is running a piece on a recent approach to self-driving: https://www.technologyreview.com/2022/05/27/1052826/ai-reinforcement-learning-self-driving-cars-autonomous-vehicles-wayve-waabi-cruise/
Four years ago, Alex Kendall sat in a car on a small road in the British countryside and took his hands off the wheel. The car, equipped with a few cheap cameras and a massive neural network, veered to the side. When it did, Kendall grabbed the wheel for a few seconds to correct it. The car veered again; Kendall corrected it. It took less than 20 minutes for the car to learn to stay on the road by itself, he says.
This was the first time that reinforcement learning—an AI technique that trains a neural network to perform a task via trial and error—had been used to teach a car to drive from scratch on a real road. It was a small step in a new direction—one that a new generation of startups believes just might be the breakthrough that makes driverless cars an everyday reality.
Reinforcement learning has had enormous success producing computer programs that can play video games and Go with superhuman skill; it has even been used to control a nuclear fusion reactor. But driving was thought to be too complicated. "We were laughed at," says Kendall, founder and CEO of the UK-based driverless-car firm Wayve.
Wayve now trains its cars in rush-hour London. Last year, it showed that it could take a car trained on London streets and have it drive in five different cities—Cambridge (UK), Coventry, Leeds, Liverpool, and Manchester—without additional training. That's something that industry leaders like Cruise and Waymo have struggled to do. This month Wayve announced it is teaming up with Microsoft to train its neural network on Azure, the tech giant's cloud-based supercomputer.
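For anyone who hasn't seen reinforcement learning up close, here's a minimal sketch of the trial-and-error loop on a toy lane-keeping task. This is tabular Q-learning with a made-up environment and made-up numbers--nothing like Wayve's actual camera-driven deep network, whose internals aren't public--but the basic loop is the same: act, get rewarded for staying on the road, update.

```python
# Toy illustration of reinforcement learning (NOT Wayve's system): tabular
# Q-learning on a 1-D lane-keeping task. State = discretized lateral offset,
# actions = steer left / straight / right. An episode ends when the car
# drifts off the lane, standing in for the safety driver grabbing the wheel.
# Reward is +1 per step survived, so maximizing reward means staying on the
# road as long as possible.
import random

N_BINS, ACTIONS = 11, (-1, 0, 1)        # discretized offsets; steer left/none/right
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1  # learning rate, discount, exploration rate
Q = [[0.0] * len(ACTIONS) for _ in range(N_BINS)]

def step(state, action):
    """Environment: the action nudges the offset; random drift models the road."""
    next_state = state + ACTIONS[action] + random.choice((-1, 0, 1))
    if next_state < 0 or next_state >= N_BINS:      # off the road: episode over
        return None, 0.0
    return next_state, 1.0                          # survived one more step

for episode in range(5000):
    state = N_BINS // 2                             # start centered in the lane
    while state is not None:
        if random.random() < EPSILON:               # explore
            action = random.randrange(len(ACTIONS))
        else:                                       # exploit the current estimate
            action = max(range(len(ACTIONS)), key=lambda a: Q[state][a])
        next_state, reward = step(state, action)
        future = GAMMA * max(Q[next_state]) if next_state is not None else 0.0
        Q[state][action] += ALPHA * (reward + future - Q[state][action])
        state = next_state

# The learned greedy policy steers back toward the lane center from either side.
print([max(range(len(ACTIONS)), key=lambda a: Q[s][a]) for s in range(N_BINS)])
```

The "human grabbing the wheel" appears here only as the end of an episode, which loosely mirrors the safety-driver corrections in the story above.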
Some of the other players in this field are training their neural networks (NN) in driving simulators (still with humans as the "instructor") instead of on the road as described above.
My question is: can the neural net ever get better than the person(s) who trained it? If the human trainer narrowly avoids an accident, is that what the NN will learn to do too? Worse, if there is an actual accident, I hope they have a way of rewinding the training to some point before it happened--you wouldn't want that in the training set!
I don't see that this "2.0" approach has any possibility of realizing the early hype of "zero accidents" that robot-driving advocates are always going on about, but I'm happy to hear otherwise. At best it seems like it might become nearly as good as the humans doing the training--but this would take a lot of time on the road.
(Score: 0) by Anonymous Coward on Monday May 30 2022, @06:11PM (8 children)
Lucas Self-Driving Cars.
(Score: 2, Interesting) by Anonymous Coward on Monday May 30 2022, @06:26PM
While I get the reference to the "prince of darkness", my experience with Lucas electrics was OK. I owned and rebuilt/maintained a couple of Norton motorcycles from the early-to-mid 1970s (specifically a 750 and an 850). This was in the early 1980s, and while I had plenty of things that needed fixing, the Lucas parts were all fine.
I wonder if it would even be possible to use a self-driving NN trained in the UK anywhere else? They drive on the left (along with Japan and a few other countries). Simply mirroring everything left-to-right isn't going to work; there are enough things that don't switch sides, like the words on signs that read L-->R (in most, but not all, countries).
(Score: 1) by liar on Monday May 30 2022, @08:48PM (4 children)
or maybe Johnny cabs...
https://www.youtube.com/watch?v=eWgrvNHjKkY [youtube.com]
Noli nothis permittere te terere.
(Score: 2) by captain normal on Tuesday May 31 2022, @05:49AM (3 children)
Speaking of Johnny Cabs, anyone else ever notice the similarity between the Johnny Cab driver and Elon Musk?
https://www.forbes.com/profile/elon-musk/?sh=1a6508e67999 [forbes.com]
"It is easier to fool someone than it is to convince them that they have been fooled" Mark Twain
(Score: 1) by liar on Tuesday May 31 2022, @02:58PM (2 children)
No I hadn't!
I occasionally flash on Johnny Cab when I get in a DiDi though.
Non sequitur: It can be uncomfortable to show your passport to TSA and say "Multipass!"...
Noli nothis permittere te terere.
(Score: 2) by captain normal on Thursday June 02 2022, @09:52PM (1 child)
If you look like Milla Jovovich did in "The Fifth Element", you should have no problem saying "Multipass!".
"It is easier to fool someone than it is to convince them that they have been fooled" Mark Twain
(Score: 1) by liar on Friday June 03 2022, @03:53PM
Yeah... no, that I do not. 67/m/cali.
Noli nothis permittere te terere.
(Score: 2) by captain normal on Tuesday May 31 2022, @05:37AM (1 child)
Ah...the good old "Prince of Darkness". I'm not sure I want to drive a Lucas-powered electric car.
I really don't care for a self-driving car. All I really want is a vehicle that will cruise at 75~80 MPH, go at least 200 miles on a charge, and recharge in 10 minutes. Anything beyond that is a waste of time and money. Speaking of money, I want a car I can afford on my income (which isn't in the six-figure range).
"It is easier to fool someone than it is to convince them that they have been fooled" Mark Twain
(Score: 0) by Anonymous Coward on Tuesday May 31 2022, @11:26AM
No problem, there are a wide variety of liquid charged vehicles available that meet your requirements right now, including reasonable pricing (both new and used). Amazingly the required liquid for recharging has been developed over the last century+ and is widely available. As a bonus the charging time is often closer to five minutes, and, extra bonus it can be paid for anonymously with cash!
(Score: 5, Insightful) by SomeGuy on Monday May 30 2022, @06:15PM (2 children)
I'd laugh right now, if this shit weren't so morbidly stupid and overhyped.
First of all, you still have the problem that you don't know what the fuck "AI" learned.
Second of all, once cars are "self driving", as per the general public's definition, there will be NO ONE AT THE WHEEL to correct the AI.
At best, car companies will feed accident data postmortem to the AI where the AI will "learn" either to drive like grandmas or decide they like the accident data and go all KILL ALL HUMANS.
That depends on your definition of "better". Faster? Probably. Non-stop? Yes. More accurate? Depends on the simplicity of the data set. More advertising? Absolutely.
(Score: 0) by Anonymous Coward on Tuesday May 31 2022, @02:55PM (1 child)
You say "driving like grandmas" like it's a bad thing. You may as well have made a sign, "I like driving because I can powertrip."
Statistically speaking, you do not get old by being stupid, so grandma probably has a better point than you.
(Score: 2) by SomeGuy on Tuesday May 31 2022, @06:11PM
You would think, but you know damn well there will be demand for cars to move faster. For some people time = money, so shaving even a few minutes off of a trip can be a big deal. Fully expect to see "modified" vehicles, legal or not, that defeat the "AI" and any safety measures.
(Score: 5, Insightful) by bradley13 on Monday May 30 2022, @07:10PM (6 children)
Once upon a dark age, I did ML research. Yes, an ML system can be better than any (single) trainer - it can combine information from multiple trainers. However, I remain skeptical of neural nets.
First problem: training for critical situations that occur rarely (and each is likely to be different). Example: another vehicle is about to hit you. Likely, you need to do something that is otherwise not allowed--going against the mass of training from normal situations.
Second, you never know what the network has *actually* learned. Maybe, by coincidence, most stop signs in your training area are near trees. Do trees become part of the recognition for stop signs? There's no way to know, until the day the network fails.
Everyone is somebody else's weirdo.
(Score: 3, Interesting) by MIRV888 on Monday May 30 2022, @07:26PM (2 children)
The 'way outside the norms of safe driving' decision is essentially the trolley problem. A deer runs out into your lane as you approach a blind turn. Do you swerve into the oncoming lane to avoid it and risk a catastrophic head-on? We (humans) will have to designate that ethical line in AI decision-making.
I don't envy that choice.
(Score: 1, Interesting) by Anonymous Coward on Monday May 30 2022, @07:36PM
> ... blind turn
That particular problem has been solved here, the highway department took down all the trees that made the corners blind in the first place.
Of course it was done in the name of "safety", but as far as I can tell the loss of small-road rural charm/beauty created by the old trees wasn't included in the calculation at all. I also heard that the highway crews took the logs home for firewood.
(Score: 3, Insightful) by takyon on Monday May 30 2022, @11:39PM
The self-driving car needs to play to its strengths, such as having a faster reaction time than a human driver and not being distracted. LIDAR can also see around corners in some cases. The sooner you can respond, the more options you have.
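To put rough numbers on that: total stopping distance is reaction distance plus braking distance, and only the first term depends on who (or what) is paying attention. A back-of-the-envelope sketch, with illustrative reaction times and dry-road friction assumed:

```python
# Stopping distance = distance covered during the reaction delay plus braking
# distance v^2 / (2*mu*g). The 1.5 s human and 0.1 s computer reaction times
# and the friction value are rough illustrative figures, not measurements.
MU, G = 0.7, 9.81                       # dry tire-road friction, gravity (m/s^2)

def stopping_distance(speed_ms, reaction_s):
    return speed_ms * reaction_s + speed_ms**2 / (2 * MU * G)

v = 100 / 3.6                           # 100 km/h in m/s (~27.8 m/s)
print(f"human (1.5 s reaction):    {stopping_distance(v, 1.5):.0f} m")  # ~98 m
print(f"computer (0.1 s reaction): {stopping_distance(v, 0.1):.0f} m")  # ~59 m
```

Cutting roughly 40 m off the stopping distance at highway speed is the kind of margin that turns a collision into a non-event.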
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2, Insightful) by Anonymous Coward on Monday May 30 2022, @09:25PM
> Second, you never know what the network has *actually* learned. Maybe, by coincidence, most stop signs in your training area are near trees. Do trees become part of the recognition for stop signs? There's no way to know, until the day the network fails.
In school (the early 90's) we heard a cautionary tale in one of our lectures about a DoD project where the AI was supposed to differentiate photos containing tanks from those without. All the pictures in the training set containing tanks were taken later in the day than the pictures without tanks. The researchers initially thought they had a stupendous success, with the AI correctly identifying tank vs. no tank at a very high rate. Then additional tests were done, and the AI failed miserably. Eventually the people figured it out: the training and initial test data with tanks had long shadows due to the time of day, while the data without tanks, taken earlier in the day, had shorter shadows. The only thing the AI had figured out was how to detect the length of shadows in the pictures.
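That failure mode is easy to reproduce in a few lines. A toy reconstruction (synthetic data, not the actual DoD photos): each "photo" is just two numbers, a weak real signal and a "shadow length" that happens to correlate perfectly with the label in the training set.

```python
# Toy version of the tank/shadow failure: a logistic-regression classifier
# trained where the spurious feature (shadow) perfectly tracks the label,
# then tested on data where that correlation is broken.
import numpy as np

rng = np.random.default_rng(0)

def make_data(n, shadow_matches_label):
    y = rng.integers(0, 2, n)                       # 1 = tank, 0 = no tank
    signal = y + rng.normal(0, 2.0, n)              # weak, noisy "real" cue
    shadow = y if shadow_matches_label else rng.integers(0, 2, n)
    return np.column_stack([signal, shadow]).astype(float), y

X_train, y_train = make_data(2000, shadow_matches_label=True)
X_test, y_test = make_data(2000, shadow_matches_label=False)

# Plain gradient-descent logistic regression.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X_train @ w + b)))
    w -= 0.5 * X_train.T @ (p - y_train) / len(y_train)
    b -= 0.5 * np.mean(p - y_train)

def accuracy(X, y):
    return np.mean(((X @ w + b) > 0) == y)

print(f"train accuracy: {accuracy(X_train, y_train):.2f}")  # looks stupendous
print(f"test accuracy:  {accuracy(X_test, y_test):.2f}")    # collapses toward chance
```

Training accuracy looks stupendous; the moment the shadow correlation is broken, performance collapses--exactly the tank story.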
These simple statistical pattern-matching machines are going to fail miserably on novel input. And, with several tons of machinery under their control, when they fail, people and animals will be hurt or die as a result. There should be unlimited liability for those putting these things on the road. Irresponsible clowns like Uber shouldn't have had a second chance, and Tesla shouldn't be permitted to sell any driving-assist tech not called "poor-quality cruise control that might kill you or those around you" in all their marketing.
(Score: 1, Interesting) by Anonymous Coward on Monday May 30 2022, @10:10PM
Bradley13 wrote:
> Once upon a dark age, I did ML research.
Did you ever come across any research into chaotic or catastrophic behaviour in Machine Learning / Neural Network / AI systems? I mean in the formal mathematical sense of Chaos Theory and Catastrophe Theory?
I've never worked on any of these, but from the point of view of a software developer reading descriptions of how they are meant to work, my expectation is that neural nets will be subject to chaotic and catastrophic behaviour: e.g., the smallest possible change in one cell could change all cats into dogs. I've never seen any hint of anyone doing theoretical research into how stable these systems are likely to be.
If there is chaotic or catastrophic behaviour, just making the neural net bigger, deeper, more convoluted, or whatever won't make it go away. It'll just bury it deeper, so it will be more of a surprise and even less explicable when it does occur.
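The closest well-documented phenomenon I know of is "adversarial examples", and the sensitivity behind them is trivial to demonstrate. The sketch below (illustrative numbers only) builds a small random tanh network and compares a tiny random nudge of the input against an equally tiny nudge aligned with the gradient; the latter moves the output far more, and for inputs near the decision boundary that is exactly how a near-invisible perturbation flips cats into dogs.

```python
# Sensitivity probe on a small random two-layer tanh network: compare the
# effect of a random input perturbation vs. a gradient-aligned one of the
# same size. The gradient direction maximizes the first-order change in the
# output, which is the mechanism behind adversarial examples.
import numpy as np

rng = np.random.default_rng(1)
W1, b1 = rng.normal(0, 1, (32, 64)), np.zeros(32)   # layer 1: 64 inputs -> 32 units
W2, b2 = rng.normal(0, 1, 32), 0.0                  # layer 2: 32 units -> 1 logit

def logit(x):
    return W2 @ np.tanh(W1 @ x + b1) + b2           # sign = predicted class

x = rng.normal(0, 1, 64)
h = np.tanh(W1 @ x + b1)
grad = W1.T @ ((1 - h**2) * W2)                     # d(logit)/dx by the chain rule

eps = 0.01                                          # perturbation size (L2 norm)
random_dir = rng.normal(0, 1, 64)
random_dir *= eps / np.linalg.norm(random_dir)
grad_dir = grad * (eps / np.linalg.norm(grad))

print(f"logit(x):             {logit(x):+.4f}")
print(f"after random nudge:   {logit(x + random_dir):+.4f}")  # barely moves
print(f"after gradient nudge: {logit(x + grad_dir):+.4f}")    # moves far more
```

Whether that counts as "chaotic" in the formal sense, I don't know--but it's certainly not the graceful degradation you'd want from something steering two tons of metal.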
(Score: 0) by Anonymous Coward on Tuesday May 31 2022, @11:42AM
A possible "third" to add to your first two problems -- what tests are needed to validate a self driving vehicle.
To that end ASAM e.V. (Association for Standardization of Automation and Measuring Systems) has just released a document that:
https://www.asam.net/news-media/news/detail/news/validating-autonomous-vehicles-and-driving-functions-safely-and-reliably/ [asam.net]
The sparse matrix of tests has nine columns of test environments, starting with "Model in the Loop" and ending with "Open Road Testing and Field Monitoring". There are five rows of test methods, such as "Requirements based test" and "Fault Injection", for a total of 26 different types of tests in the matrix.
My personal take? This German-developed document will be ignored by the VC-funded self-driving companies...and will then be used to skewer them when they come to trial. On the other hand, the traditional car companies (likely led by Mercedes) will get to work and put in the required effort--or else they will drop out of the game.
(Score: 4, Interesting) by MIRV888 on Monday May 30 2022, @07:19PM (4 children)
If you can program a car with the aggregate data of all other vehicles driven with the same software (including wrecks), it could also learn the catastrophic consequences and what led to them. At some point you should reach a critical mass of 'learning' that would make a vehicle road safe. The statistics would have to show it wrecks less often than humans. You will never achieve zero wrecks.
On the down side, this is also how you get HAL 9000.
(Score: 1, Insightful) by Anonymous Coward on Monday May 30 2022, @07:45PM (1 child)
inquiring minds want to know:
What series of mental steps did you use to get from "...program a car with the aggregate data..." to "...this is also how you get HAL 9000"?
I don't see any connection at all, except that both fit into the broad catch-all of AI. The first discrepancy is that the former might actually be tried in the future (Musk is sort of doing that now by aggregating recordings of Tesla drivers), whereas the latter was a fiction created by Arthur C. Clarke and the movie's scriptwriters.
(Score: 2) by pkrasimirov on Monday May 30 2022, @08:26PM
The point is that by self-reinforced learning a machine can find its way to any target. And when people set the target priorities wrongly, like forgetting the "don't kill" rule, it could become HAL. It is ridiculously easy to leave out the obvious values--unavoidable, I'd say. Remember that self-learning MS AI chatbot that turned so racist they had to switch it off and kill the project in shame? There wasn't a "don't be racist" rule. It can "learn" any kind of traits like that: homophobic, pedo, pro-terrorist, sadistic, etc. If this were a person, it would be a psychopath by definition: "persistent antisocial behavior, impaired empathy and remorse, and bold, disinhibited, and egotistical traits." AI has no innate social behavior, empathy, or remorse unless someone explicitly orders it to have them.
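The driving version of this is easy to make concrete. Below is a toy two-state MDP ("driving"/"crashed") solved by value iteration, once with the crash penalty left out of the reward and once with it included. All the numbers are invented; the point is that the learned policy flips:

```python
# Reward misspecification in miniature: value iteration on a two-state MDP.
# "fast" makes more progress per step but carries a small crash risk; the
# crashed state is absorbing and worth nothing. Leave the crash penalty out
# of the reward and the optimal policy is "fast"; put it in and it's "slow".
GAMMA = 0.95            # discount factor
P_CRASH = 0.01          # chance per step that "fast" ends in a crash

def best_policy(crash_penalty):
    v = 0.0                                         # value of the "driving" state
    for _ in range(1000):                           # value iteration to convergence
        q_slow = 1.0 + GAMMA * v                    # steady progress, no risk
        q_fast = (2.0 + P_CRASH * crash_penalty     # more progress, small crash risk
                  + GAMMA * (1 - P_CRASH) * v)      # absorbing crash state: value 0
        v = max(q_slow, q_fast)
    return "fast" if q_fast > q_slow else "slow"

print("reward without crash penalty ->", best_policy(0.0))     # picks "fast"
print("reward with crash penalty    ->", best_policy(-100.0))  # picks "slow"
```

Nothing in the learning machinery knows that crashing is bad; it's only bad if somebody remembered to put it in the reward.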
(Score: 3, Interesting) by HiThere on Monday May 30 2022, @08:52PM (1 child)
That really depends on what sensors are available, and on several other unspecified variables. Apparently current automated cars aren't even that good at lane keeping, so I wouldn't be too optimistic.
Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
(Score: 3, Insightful) by requerdanos on Monday May 30 2022, @11:04PM
That right there is good advice for the whole topic of self-driving cars.
I hereby officially add laughter to the total. Driving is a hard problem. You haven't solved it, even if you've made a system that is in some ways less bad than other systems that also haven't solved it.
(Score: 1) by Zappy on Tuesday May 31 2022, @07:38AM
Didn't Comma AI do something similar, running it on a cheap phone?