posted by Fnord666 on Tuesday May 28 2019, @11:08PM
from the chips-will-soon-design-themselves dept.

From IEEE Spectrum:

Engineers at Georgia Tech say they've come up with a programmable prototype chip that efficiently solves a huge class of optimization problems, including those needed for neural network training, 5G network routing, and MRI image reconstruction. The chip's architecture embodies a particular algorithm that breaks up one huge problem into many small problems, works on the subproblems, and shares the results. It does this over and over until it comes up with the best answer. Compared to a GPU running the algorithm, the prototype chip—called OPTIMO—is 4.77 times as power efficient and 4.18 times as fast.
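
The "break up, solve, share, repeat" loop described above is the consensus form of ADMM (the alternating direction method of multipliers, named in the next paragraph). In its textbook form, with subproblem objectives f_i, a shared consensus variable z, scaled dual variables u_i, and penalty parameter rho, one iteration looks like this (the generic method; the summary doesn't spell out OPTIMO's exact variant):

\[
\begin{aligned}
x_i^{k+1} &= \operatorname*{arg\,min}_{x}\; f_i(x) + \tfrac{\rho}{2}\bigl\lVert x - z^k + u_i^k \bigr\rVert_2^2, \\
z^{k+1} &= \frac{1}{N}\sum_{i=1}^{N}\bigl(x_i^{k+1} + u_i^k\bigr), \\
u_i^{k+1} &= u_i^k + x_i^{k+1} - z^{k+1}.
\end{aligned}
\]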

[...] The test chip was made up of a grid of 49 "optimization processing units," cores designed to perform ADMM and containing their own high-bandwidth memory. The units were connected to each other in a way that speeds ADMM. Portions of data are distributed to each unit, and they set about solving their individual subproblems. Their results are then gathered, and the data is adjusted and resent to the optimization units to perform the next iteration. The network that connects the 49 units is specifically designed to speed this gather and scatter process.
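
A rough software analogue of that gather/scatter loop, sketched in Python with 49 in-software "units" standing in for the chip's optimization processing units. Everything here is made up for illustration (the toy least-squares problem, the shard sizes, the penalty RHO, and the helper local_admm_step); the chip's actual subproblem solvers, data layout, and interconnect are not described in the summary.

import numpy as np

N_UNITS, ROWS_PER_UNIT, DIM, RHO, ITERS = 49, 8, 16, 1.0, 200
rng = np.random.default_rng(1)

# Each "unit" holds its own shard (A_i, b_i) of one big least-squares problem.
shards = [(rng.standard_normal((ROWS_PER_UNIT, DIM)),
           rng.standard_normal(ROWS_PER_UNIT)) for _ in range(N_UNITS)]

def local_admm_step(A, b, z, u):
    # One unit's subproblem: argmin_x (1/2)||A x - b||^2 + (RHO/2)||x - z + u||^2
    return np.linalg.solve(A.T @ A + RHO * np.eye(DIM),
                           A.T @ b + RHO * (z - u))

z = np.zeros(DIM)                              # shared consensus iterate
u = [np.zeros(DIM) for _ in range(N_UNITS)]    # per-unit scaled dual variables

for _ in range(ITERS):
    # "Scatter": each unit receives the current consensus value and solves its
    # own small subproblem locally.
    x = [local_admm_step(A, b, z, u_i) for (A, b), u_i in zip(shards, u)]

    # "Gather": the local results are collected, combined into a new consensus
    # value, and the duals are adjusted before the data goes back out for the
    # next iteration.
    z = np.mean([x_i + u_i for x_i, u_i in zip(x, u)], axis=0)
    u = [u_i + x_i - z for u_i, x_i in zip(u, x)]

print("consensus estimate (first 4 coords):", np.round(z[:4], 3))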


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by takyon on Tuesday May 28 2019, @11:15PM (4 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Tuesday May 28 2019, @11:15PM (#848694) Journal

    Compared to a GPU running the algorithm, the prototype chip—called OPTIMO—is 4.77 times as power efficient and 4.18 times as fast.

    Is a GPU better than a "TPU" or ASIC at these problems?

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 2) by RS3 on Wednesday May 29 2019, @12:11AM (2 children)

    by RS3 (6367) on Wednesday May 29 2019, @12:11AM (#848705)

    That's a really good question. An ASIC will be fastest, if the task is suited to a bunch of logic. And many companies are doing a combination of ASICs (or FPGAs), CPUs, TPUs, GPUs, whatever else gives the most computing power. But you need a really good overall system architect, someone who understands both software and hardware, to plan it out. Where's Seymour Cray?

    • (Score: 0) by Anonymous Coward on Wednesday May 29 2019, @12:33AM (1 child)

      by Anonymous Coward on Wednesday May 29 2019, @12:33AM (#848711)

      If Musk is to be believed, he hired the modern equivalent of Cray to design the Tesla self-driving hardware. See the recent video press conference on autonomy, where Musk claims they have a several-year head start on their competition.

      • (Score: 0) by Anonymous Coward on Wednesday May 29 2019, @12:35AM

        by Anonymous Coward on Wednesday May 29 2019, @12:35AM (#848712)

        He is NOT to be believed.

  • (Score: 0) by Anonymous Coward on Wednesday May 29 2019, @04:40PM

    by Anonymous Coward on Wednesday May 29 2019, @04:40PM (#848974)

    Is a GPU better than a "TPU" or ASIC at these problems?

    Is a TPU not an ASIC, and can you not implement ADMM on a TPU? Go not to ACs for counsel, for they will say both yes [stanford.edu] and no [arxiv.org].