

Faster deep learning?

Accepted submission by hendrikboom <hendrik@topoi.pooq.com> at 2021-04-09 20:04:40 from the algorithms dept.
Hardware

Someone claims they can do deep learning faster and cheaper on a commodity CPU than with a GPU.

The press release [rice.edu] actually contains links to the original papers!

“The whole industry is fixated on one kind of improvement — faster matrix multiplications,” Shrivastava said. “Everyone is looking at specialized hardware and architectures to push matrix multiplication. People are now even talking about having specialized hardware-software stacks for specific kinds of deep learning. Instead of taking an expensive algorithm and throwing the whole world of system optimization at it, I’m saying, ‘Let’s revisit the algorithm.’”

Shrivastava’s lab did that in 2019, recasting DNN training as a search problem that could be solved with hash tables. Their “sub-linear deep learning engine” (SLIDE) is specifically designed to run on commodity CPUs, and Shrivastava and collaborators from Intel showed it could outperform GPU-based training when they unveiled it at MLSys 2020.
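The core trick can be illustrated in miniature. This is only a hedged sketch of the idea behind SLIDE, not the actual engine: it uses a single SimHash (random-hyperplane) table to pick out a sparse set of candidate neurons for an input, so only those dot products are computed instead of a full matrix multiply. The real system uses multiple hash tables, adaptive rehashing, and heavy multi-core parallelism; all names and parameters below are illustrative assumptions.

```python
# Toy sketch of hash-based sparse neuron selection (the idea behind SLIDE).
# Assumptions: one SimHash table, 8-bit signatures, NumPy only.
import numpy as np

rng = np.random.default_rng(0)

n_neurons, dim, n_bits = 1000, 64, 8
weights = rng.standard_normal((n_neurons, dim))  # one weight row per neuron
planes = rng.standard_normal((n_bits, dim))      # random hyperplanes for SimHash

def simhash(v):
    """Pack the sign pattern of v against the hyperplanes into an int."""
    bits = (planes @ v) > 0
    return int(np.packbits(bits)[0])

# Index every neuron by the hash of its weight vector (done once, updated
# occasionally during training in the real engine).
buckets = {}
for i, w in enumerate(weights):
    buckets.setdefault(simhash(w), []).append(i)

def sparse_forward(x):
    """Compute activations only for neurons whose hash matches the input's.

    Similar vectors tend to collide under SimHash, so the retrieved bucket
    is biased toward neurons with large dot products against x.
    """
    active = buckets.get(simhash(x), [])
    return {i: float(weights[i] @ x) for i in active}

x = rng.standard_normal(dim)
out = sparse_forward(x)  # touches only a small fraction of the 1000 neurons
```

The cost per input drops from O(n_neurons × dim) to roughly the bucket size times dim, which is the sub-linear behavior the name refers to.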

Can we hope that this will release the GPU inventories for actual gamers?


Original Submission