
posted by martyb on Tuesday August 20 2019, @09:11PM   Printer-friendly
from the not-going-to-fit-in-a-cell-phone dept.

The five technical challenges Cerebras overcame in building the first trillion transistor chip

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today — and it is a doozy. The "Wafer Scale Engine" packs 1.2 trillion transistors (the most ever) into 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative).
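
Those figures are easier to grasp with a little division; a quick back-of-the-envelope from the numbers quoted above (assuming binary gigabytes, which the announcement doesn't specify):

    # Back-of-the-envelope numbers derived from the figures quoted above.
    transistors = 1.2e12          # 1.2 trillion transistors
    area_mm2 = 46_225             # 46,225 square millimeters
    sram_bytes = 18 * 1024**3     # 18 GB of on-chip memory (binary GB assumed)
    cores = 400_000               # 400,000 processing cores

    print(f"{transistors / area_mm2 / 1e6:.0f} M transistors per mm^2")  # ~26 M/mm^2
    print(f"{sram_bytes / cores / 1024:.0f} KB of memory per core")      # ~47 KB/core
    print(f"{transistors / cores / 1e6:.0f} M transistors per core")     # 3 M/core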

It's made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry's big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees. You can read more about the chip from Tiernan Ray at Fortune and read the white paper from Cerebras itself.

Also at BBC, VentureBeat, and PCWorld.


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by takyon (881) on Tuesday August 20 2019, @09:39PM (#882803) Journal

    Still better than the equivalent amount of split-up chips.

    They could also use algorithms that place related data physically closer together on the chip.
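
    A minimal sketch of what that kind of placement could look like (a hypothetical greedy heuristic for illustration, not anything Cerebras has described): items that exchange a lot of traffic get assigned to neighboring tiles of a core grid.

        # Hypothetical greedy placement: put items that communicate heavily
        # on nearby tiles of a 2-D core grid. Illustrative only; this is not
        # Cerebras's actual scheduler or API.
        import itertools

        def place(items, affinity, grid_w, grid_h):
            """affinity[(a, b)] = communication weight between items a and b."""
            tiles = list(itertools.product(range(grid_w), range(grid_h)))
            placement = {}
            for item in items:
                best_tile, best_cost = None, float("inf")
                for tile in tiles:
                    if tile in placement.values():
                        continue  # one item per tile
                    # Cost: weighted Manhattan distance to already-placed partners.
                    cost = 0
                    for (a, b), w in affinity.items():
                        other = b if a == item else a if b == item else None
                        if other in placement:
                            px, py = placement[other]
                            cost += w * (abs(tile[0] - px) + abs(tile[1] - py))
                    if cost < best_cost:
                        best_tile, best_cost = tile, cost
                placement[item] = best_tile
            return placement

        # Tiny demo: A and B talk a lot, C only loosely, so A and B land adjacent.
        print(place(["A", "B", "C"], {("A", "B"): 5, ("B", "C"): 1}, 2, 2))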

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]