from the you-sure-it's-not-for-crypto-mining dept.
Over the last few years, Tesla has had a clear focus on computing power both inside and outside its vehicles.
Inside, it needs computers powerful enough to run its self-driving software, and outside, it needs supercomputers to train its self-driving software powered by neural nets that are fed an insane amount of data coming from the fleet.
CEO Elon Musk has been teasing Tesla’s Dojo project, which apparently consists of a supercomputer capable of an exaFLOP, one quintillion (10^18) floating-point operations per second, or 1,000 petaFLOPS – making it one of the most powerful computers in the world.
Tesla has been working on Dojo for the last few years, and Musk has been hinting that it should be ready by the end of this year.
[...] [Andrej] Karpathy, [Tesla’s head of AI] commented on the effort:
“We have a neural net architecture, and we have a data set, a 1.5-petabyte data set, that requires a huge amount of computing. So I wanted to give a plug to this insane supercomputer that we are building and using now. For us, computer vision is the bread and butter of what we do and what enables Autopilot. And for that to work really well, we need to master the data from the fleet, and train massive neural nets and experiment a lot. So we invested a lot into the compute. In this case, we have a cluster that we built with 720 nodes of 8x A100 of the 80GB version. So this is a massive supercomputer. I actually think that in terms of flops, it’s roughly the number 5 supercomputer in the world.”
Karpathy’s presentation at the 2021 Conference on Computer Vision and Pattern Recognition (CVPR).
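Karpathy's figures invite a quick sanity check. A back-of-envelope sketch, assuming NVIDIA's quoted ~312 TFLOPS of dense BF16 per A100 (the figure actually attainable depends heavily on precision and workload, so this is an upper bound, not a measured number):

```python
# Rough peak-throughput estimate for the cluster Karpathy describes:
# 720 nodes x 8 A100 GPUs. The per-GPU figure below is NVIDIA's quoted
# dense BF16 tensor peak; real sustained throughput will be lower.
nodes = 720
gpus_per_node = 8
tflops_per_gpu = 312  # dense BF16 peak per A100, per NVIDIA's spec sheet

total_gpus = nodes * gpus_per_node
peak_pflops = total_gpus * tflops_per_gpu / 1_000  # TFLOPS -> PFLOPS

print(f"{total_gpus} GPUs, ~{peak_pflops:,.0f} PFLOPS peak")
```

At that peak the cluster would sit in the high hundreds of petaFLOPS to low exaFLOPS range, which is consistent with Karpathy's "roughly the number 5 supercomputer" remark (Top500 rankings use a different, sustained FP64 benchmark, so the comparison is loose).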
Related Stories
On Wednesday, the Future of Life Institute published an open letter on its website calling on AI labs to "immediately pause for at least 6 months the training of AI systems more powerful than GPT-4." Signed by Elon Musk and several prominent AI researchers, the letter quickly began to draw attention in the press—and some criticism on social media.
Earlier this month, OpenAI released GPT-4, an AI model that can perform compositional tasks and allegedly pass standardized tests at a human level, although those claims are still being evaluated by researchers. Regardless, the leap in capabilities from GPT-4 and Bing Chat over previous AI models spooked some experts, who believe we are heading toward super-intelligent AI systems faster than previously expected.
See Also: FTC Should Stop OpenAI From Launching New GPT Models, Says AI Policy Group
Related:
OpenAI Is Now Everything It Promised Not to Be: Corporate, Closed-Source, and For-Profit (March 2023)
OpenAI's New ChatGPT Bot: 10 "Dangerous" Things it's Capable of (Dec. 2022)
Elon Musk Says There Needs to be Universal Basic Income (Aug. 2021)
Tesla Unveils Chip to Train A.I. Models Inside its Data Centers (Aug. 2021)
Elon Musk Reveals Plans to Unleash a Humanoid Tesla Bot (Aug. 2021)
Tesla Unveils its New Supercomputer (5th Most Powerful in the World) to Train Self-Driving AI (June 2021)
OpenAI Has Released the Largest Version Yet of its Fake-News-Spewing AI (Sept. 2019)
There's Still Time To Prevent Biased AI From Taking Over The World (May 2019)
The New Prometheus: Google CEO Says AI is More Profound than Electricity or Fire (Feb. 2018)
OpenAI Bot Bursts Into the Ring, Humiliates Top Dota 2 Pro Gamer in 'Scary' One-on-One Bout (Aug. 2017)
Elon Musk: Mark Zuckerberg's Understanding of AI is "Limited" (July 2017)
AI Software Learns to Make AI Software (Jan. 2017)
Elon Musk, Stephen Hawking Win Luddite Award as AI "Alarmists" (Jan. 2016)
Elon Musk and Friends Launch OpenAI (Dec. 2015)
Musk, Wozniak and Hawking Warn Over AI Warfare and Autonomous Weapons (July 2015)
More Warnings of an AI Doomsday — This Time From Stephen Hawking (Dec. 2014)
(Score: 4, Interesting) by Runaway1956 on Wednesday June 23 2021, @08:16AM (10 children)
All that computing power, and Elon hobbles it by insisting he doesn't need a variety of sensors. Give the computer at least one of every kind of sensor that might be useful, so that it can see in spectra that humans can't. Maybe Elon is mostly right - he doesn't need 25 lidar sensors peering in all directions. But give it at least one forward-looking lidar, which might work when visible-spectrum sensors don't. Infrared, radar and sonar, ditto.
If you want cars to drive themselves better than humans, give those cars super-human powers!
Abortion is the number one killer of children in the United States.
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @08:52AM (2 children)
I think we mostly agree, but "lidar" uses visible spectrum, so it is itself a visible spectrum sensor.
(Score: 2) by c0lo on Wednesday June 23 2021, @09:16AM (1 child)
False, it's infrared in the near-IR range.
https://www.youtube.com/watch?v=aoFiw2jMy-0
(Score: 1, Informative) by Anonymous Coward on Thursday June 24 2021, @01:41AM
Technically true, but NIR behaves more or less like "visible" light under these conditions. There are some subtleties regarding water vapor, but a NIR sensor isn't going to add significant improvement over a visible (i.e., silicon) sensor. Despite having "infrared" in its name, it is not thermal infrared and thus can't "see" what most people think of when they think "infrared". One of the big benefits of using NIR for lidar is that it can't be seen by the eye, so it is easy to keep the laser radiation "eye safe".
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @09:15AM (2 children)
And multiply the number of points of failure. TANSTAAFL.
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @09:58AM (1 child)
Here is an interesting link for anyone involved in AI-style learning: https://www.damninteresting.com/on-the-origin-of-circuits/ [damninteresting.com]
The long and short is that, yes, AI can solve problems but in ways that are completely specific to one case. Add another camera or lose a camera or change anything and chances are the AI you created with your New Supercomputer is useless.
(Score: 4, Insightful) by c0lo on Wednesday June 23 2021, @11:53AM
Adversarial attacks show that changing a few (well chosen) pixels will do enough.
https://www.youtube.com/watch?v=aoFiw2jMy-0
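The few-pixels point can be sketched with a toy linear "classifier", where the worst-case perturbation direction is simply the sign of the weights (real attacks such as FGSM use the gradient of the model's loss; all numbers here are invented, and the perturbation size is exaggerated for a low-dimensional toy):

```python
import numpy as np

# Toy adversarial perturbation on a linear score w @ x.
# For a linear model, the gradient of the score w.r.t. the input is
# just w, so the worst-case direction per "pixel" is -sign(w).
rng = np.random.default_rng(0)
w = rng.normal(size=100)        # classifier weights (made up)
x = rng.normal(size=100)        # random "clean" input
x = x + 0.8 * np.sign(w)        # shift it so the clean score is clearly positive

def score(v):
    return float(w @ v)

# Perturb every component by a fixed step in the worst-case direction.
eps = 1.6                       # exaggerated; real image attacks use tiny steps
x_adv = x - eps * np.sign(w)

print(score(x), score(x_adv))   # same "image" to a human, flipped decision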
(Score: 1) by khallow on Wednesday June 23 2021, @11:36AM (1 child)
The more sensors you have, the more broken sensors you will have. And that will mean degradation of car performance and/or lawsuits.
Sounds like Musk is already doing that.
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @05:00PM
i am sure that with enough training and a supercomputer a simple single yes/no -or- light/no-light sensor will drive you around the world ...
more seriously tho, could a "sensor array" be accepted by a court as "a single thing"?
also not an expert, but surely having more sensors (oops i mean more sensor elements in the single array) is a good thing, up to the point where adding more gives diminishing returns (but redundancy helps against, say, mud splatter or flying rat droppings or dirt-nest wasps)?
(Score: 2) by Tork on Wednesday June 23 2021, @06:45PM (1 child)
Slashdolt Logic: "25 year old jokes about sharks and lasers are +5, Funny." 💩
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @07:34PM
Too true. And, we still don't understand everything about how it all works together.
(Score: 5, Insightful) by bradley13 on Wednesday June 23 2021, @08:47AM (14 children)
Given the massive increase in computing power, neural nets can deliver some amazing results. Throw data at them, and see what they produce, amiright? I mean, what could go wrong?
Doing symbolic AI is a lot harder: actually deriving, understanding and writing down the rules that a system must follow. This doesn't benefit nearly as much from faster processors, so the results are lagging way behind neural nets.
The thing is: You actually have no idea what a neural net is doing, or why. Chances are good that a neural net has learned some strange things. You may think it has learned "stop at red octagonal signs", but maybe the training data coincidentally always had a tree on the corner, and the AI has actually learned "stop when there is a sign and a tree". Which works great, right up until it doesn't.
I wouldn't trust a neural net with any sort of important decision, but it sounds like that's exactly what Tesla is doing.
Everyone is somebody else's weirdo.
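The shortcut-learning failure described above (stop sign plus coincidental tree) can be sketched numerically: when two training features are perfectly correlated, even a simple least-squares fit splits the credit between them, and the model loses confidence as soon as the correlation breaks (toy data standing in for a real network):

```python
import numpy as np

# Toy "shortcut feature" demo: in the training data, a tree is always
# present whenever a stop sign is, so the learner has no way to tell
# which feature actually matters.
rng = np.random.default_rng(1)
n = 200
sign = rng.integers(0, 2, n)            # 1 = octagonal sign present
tree = sign.copy()                       # spurious: tree co-occurs perfectly
X = np.stack([sign, tree], axis=1).astype(float)
y = sign.astype(float)                   # true rule: stop iff sign present

# Minimum-norm least-squares fit (a stand-in for training a network).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# A situation the training set never contained: sign, but no tree.
pred = np.array([1.0, 0.0]) @ w
print(f"weights={w}, prediction for sign-without-tree: {pred:.2f}")
```

The fit splits the weight evenly between sign and tree, so the sign-without-tree case scores 0.5 instead of a confident 1.0. A real network's failure mode is messier, but the underlying problem, that the data cannot distinguish the true rule from the shortcut, is the same.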
(Score: 2) by c0lo on Wednesday June 23 2021, @09:18AM (12 children)
https://www.youtube.com/watch?v=aoFiw2jMy-0
(Score: 2) by bzipitidoo on Wednesday June 23 2021, @09:31AM (4 children)
If it looks like a dog and barks like a dog, then it is a dog. Until it isn't.
Roadkill, anyone? I hear it's good with lots of BBQ sauce.
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @10:00AM
Roadkill? The AI tells me it's a stop sign. STOP!
(Score: 1, Interesting) by Anonymous Coward on Wednesday June 23 2021, @11:27AM (1 child)
A few pixels subtly modified by an adversarial attack and the AI will mistake a real dog for Trump. Do you think it will be intelligent enough to floor the accelerator?
(Score: 3, Funny) by DannyB on Wednesday June 23 2021, @04:38PM
If it looks like a dog, and meows like a dog, then it is a blue screen.
While Republicans can get over Trump's sexual assaults, affairs, and vulgarity; they cannot get over Obama being black.
(Score: 2) by DannyB on Wednesday June 23 2021, @04:37PM
Marketing says we must stop calling it Roadkill. Call it Road Pizza.
While Republicans can get over Trump's sexual assaults, affairs, and vulgarity; they cannot get over Obama being black.
(Score: 2) by crafoo on Wednesday June 23 2021, @10:57AM (6 children)
yeah, and everyone is doing it because human-designed and coded AI is an utter and complete failure. It's like trying to predict tidal flows using Newtonian Mechanics instead of collecting data and building a practical, numerical model. Some types of (rigid and absolutist) thinkers prefer the former, and they always fail.
(Score: 1, Touché) by Anonymous Coward on Wednesday June 23 2021, @02:12PM
> always fail
until they don't, then everyone knew it all along.
(Score: 2, Disagree) by PiMuNu on Wednesday June 23 2021, @04:18PM (2 children)
Doesn't mean that these multivariate analysis routines have any better chance of succeeding.
A much better concept is to have the stop sign broadcast "I am a stop sign" and build out the extra infrastructure to support that. It takes a decade to build up the infrastructure, but it has a chance of succeeding. One still has to identify passive objects (pedestrians, fallen trees, etc.) but the failure rate would be a few orders of magnitude smaller.
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @05:43PM (1 child)
> "I am a stop sign"
Not worth the power (or battery/solar cell) to install. These will be hacked so frequently as to be completely undependable, and therefore useless. If you thought yahoos shooting holes in stop signs was bad, this would be a lot worse. All the layabouts will be sitting back watching the autonomous cars crashing.
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @08:53PM
Man, your hackers are remarkably unselfish.
It would seem much more advantageous to just broadcast "stop signs" to any car potentially crossing the hacker's car's path. Then the hacker could drive unimpeded.
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @05:04PM
they should really start adding A.I. to them D&D dice ...
(Score: 2) by bradley13 on Wednesday June 23 2021, @07:37PM
You can automatically derive symbolic rules. They can also take various forms: logic, fuzzy logic, bayesian, whatever. The point is the explicit, understandable representation.
Everyone is somebody else's weirdo.
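Automatic derivation of symbolic rules can be sketched in the spirit of the classic OneR algorithm: pick the single attribute whose value-to-label rules make the fewest training errors, and emit those rules in readable form (the data below is invented for illustration):

```python
from collections import Counter

# Minimal OneR-style rule induction: for each attribute, map each of its
# values to the majority label, then keep the attribute whose rule set
# misclassifies the fewest training rows.
data = [  # (conditions, decision) -- toy driving data
    ({"weather": "clear", "temp": "warm"}, "go"),
    ({"weather": "clear", "temp": "cold"}, "go"),
    ({"weather": "fog",   "temp": "warm"}, "stop"),
    ({"weather": "fog",   "temp": "cold"}, "stop"),
    ({"weather": "rain",  "temp": "cold"}, "stop"),
]

def one_r(rows):
    best = None
    for attr in rows[0][0]:
        by_val = {}
        for feats, label in rows:
            by_val.setdefault(feats[attr], Counter())[label] += 1
        rule = {v: c.most_common(1)[0][0] for v, c in by_val.items()}
        errors = sum(rule[f[attr]] != y for f, y in rows)
        if best is None or errors < best[2]:
            best = (attr, rule, errors)
    return best

attr, rule, errors = one_r(data)
print(attr, rule, errors)  # an inspectable rule, unlike a net's weights
```

On this data it learns "weather: clear -> go, fog -> stop, rain -> stop" with zero training errors. The point is exactly the one made above: whatever its accuracy, the representation is explicit and auditable.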
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @10:05AM
> Throw data at them, and see what they produce, amiright?
This is what management does. Take the most dull and obvious thing, and elevate it to cult status requiring millions of dollars of bonuses. Then crow about it in a loud voice to people trapped there by their need to make a living.
(Score: 2) by isostatic on Wednesday June 23 2021, @09:50AM (3 children)
The first breakthrough in true computer vision came from a university. The newest video game consoles came out, and these consoles had extremely powerful CPUs able to process 10 trillion operations per second. By adding 100 gigabytes of RAM to the console and then networking 1,000 of these video game consoles together, a university research team created a machine able to process 10 quadrillion operations per second on 100 trillion bytes of RAM. They had created a $500,000 machine with processing power approaching that of a human brain. With that much processing power and memory on tap, the researchers were finally able to start creating real vision processing algorithms.
Within a year they had two demonstration projects that got a lot of media attention. The first was an autonomous humanoid robot that, given an apartment number, could walk through a city, find the building, ride the elevator or walk up the steps and knock on that apartment door. The second was a car that could drive itself door-to-door in rush hour traffic without any human intervention. By combining the walking robot and the self-driving car, the researchers demonstrated a completely robotic delivery system for a pizza restaurant. In a widely reported publicity stunt, the research team ordered a pizza and had it delivered by robot to their lab 25 minutes later.
A choice of two dystopian futures
https://marshallbrain.com/manna1 [marshallbrain.com]
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @10:07AM
And then...? After grabbing the headlines, they cashed in and took up golf / motivational speaking.
(Score: 2) by DannyB on Wednesday June 23 2021, @04:26PM
Manna may be soon.
But pizza will probably be ordered more frequently because it has more choices of toppings than manna. [wikipedia.org]
Manna did not need to be ordered. It was delivered nightly*.
*except Friday night. On Thursday nights a double portion was delivered.
While Republicans can get over Trump's sexual assaults, affairs, and vulgarity; they cannot get over Obama being black.
(Score: 2) by dwilson on Wednesday June 23 2021, @08:21PM
Manna won't be, ever.
It was an eerily scary read, to start out with. By the end, it was a joke. It was a free short story, and I still paid too much.
What I mean by that is, new technology for AI-vision, related advances in AI, and evil corporate overlords pulling it all together and using it to replace minimum-wage workers, or over-manage a fast-food restaurant, are entirely plausible outcomes. It's believable. You read that, you look at the current state of computer technology, you remember human beings are human beings, and you think, sure, this could potentially happen.
Then it shifts to robot-patrolled "concentration-camp hotel-jails" for society's out-of-work underclass, and that was a hard swerve off the road-of-the-possible and into la-la land.
And then, some sort of post-scarcity bullshit in Australia, where women and riches flow like water and life is just hunky-dory and you don't even have to exercise to keep in shape because the computer-in-your-head will do it for you? While the rest of the world that hasn't gone that perfect-society route somehow ignores it and lets it thrive without moving in to pillage it? Please. The entire premise for the ending was so far off the deep-end past la-la land that I can't even coherently describe how absurd it was.
- D
(Score: 0) by Anonymous Coward on Wednesday June 23 2021, @10:14AM (1 child)
Then he'll use the infinite riches that generates to buy an even better computer for watching porn.
(Score: 2) by DannyB on Wednesday June 23 2021, @04:29PM
You'reYour doing it wrong. With that much money wouldn't he simply hire workers that make house calls to perform services? Or buy mail order brides, who are always immigrants, as some presidents have been known to do?While Republicans can get over Trump's sexual assaults, affairs, and vulgarity; they cannot get over Obama being black.