
posted by martyb on Monday November 25 2019, @05:21AM
from the what-will-they-think-of-next dept.

Cerebras Unveils First Installation of Its AI Supercomputer at Argonne National Labs

At Supercomputing 2019 in Denver, Colo., Cerebras Systems unveiled the computer powered by the world's biggest chip. Cerebras says the computer, the CS-1, has the equivalent machine learning capabilities of hundreds of racks' worth of GPU-based computers consuming hundreds of kilowatts, but it takes up only one-third of a standard rack and consumes about 17 kW. Argonne National Labs, future home of what's expected to be the United States' first exascale supercomputer, says it has already deployed a CS-1. Argonne is one of two announced U.S. national laboratory customers for Cerebras, the other being Lawrence Livermore National Laboratory.

The system "is the fastest AI computer," says CEO and cofounder Andrew Feldman. He compared it with Google's TPU clusters (the second of three generations of that company's AI computers), noting that one of those "takes 10 racks and over 100 kilowatts to deliver a third of the performance of a single [CS-1] box."
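Taking those figures at face value (they are vendor claims, not independent benchmark results), the implied efficiency gap can be worked out with back-of-the-envelope arithmetic; the sketch below just restates the quoted numbers:

    # Back-of-envelope comparison using only the figures quoted above.
    # All inputs are vendor claims, not measured results.
    cs1_power_kw = 17.0           # quoted CS-1 power draw
    tpu_pod_power_kw = 100.0      # "over 100 kilowatts" for the TPU cluster
    tpu_pod_relative_perf = 1/3   # pod delivers a third of one CS-1's work

    cs1_perf_per_kw = 1.0 / cs1_power_kw      # normalize one CS-1 to 1.0
    tpu_perf_per_kw = tpu_pod_relative_perf / tpu_pod_power_kw

    print(f"implied perf/kW advantage: {cs1_perf_per_kw / tpu_perf_per_kw:.1f}x")
    # -> about 17.6x, before counting rack space (1/3 rack vs. 10 racks)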

The CS-1 is designed to speed the training of novel and large neural networks, a process that can take weeks or longer. Powered by a 400,000-core, 1.2-trillion-transistor wafer-scale processor chip, the CS-1 should collapse that task to minutes or even seconds. However, Cerebras did not provide data showing this performance in terms of standard AI benchmarks such as the new MLPerf standards. Instead, it has been wooing potential customers by having them train their own neural network models on machines at Cerebras.

[...] The CS-1's first application is in predicting cancer drug response as part of a U.S. Department of Energy and National Cancer Institute collaboration. It is also being used to help understand the behavior of colliding black holes and the gravitational waves they produce. A previous instance of that problem required 1024 out of 4392 nodes of the Theta supercomputer.

Also at TechCrunch, VentureBeat, and Wccftech.

Previously: Cerebras "Wafer Scale Engine" Has 1.2 Trillion Transistors, 400,000 Cores


Original Submission

 
  • (Score: 3, Interesting) by takyon on Monday November 25 2019, @06:20AM (2 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday November 25 2019, @06:20AM (#924425) Journal

    The software can perform that optimization problem across multiple computers, allowing a cluster of computers to act as one big machine. Cerebras has linked as many as 32 CS-1s together to get a roughly 32-fold performance increase. This is in contrast with the behavior of GPU-based clusters, says Feldman. “Today, when you cluster GPUs, you don't get the behavior of one big machine. You get the behavior of lots of little machines.”

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
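    The "one big machine" claim describes synchronous data parallelism done well: each worker computes gradients on its own shard of the batch, a collective operation averages them, and the whole cluster takes a single coherent optimization step. The following is a minimal toy sketch of that generic pattern; it illustrates the technique being contrasted, not Cerebras' actual software, and every name in it is made up:

        # Toy synchronous data-parallel training loop. Sketches the
        # generic pattern only; NOT Cerebras' (or Google's) software.
        import random

        def local_gradient(w, shard):
            # Gradient of mean squared error for the toy model y = w * x.
            return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

        def allreduce_mean(values):
            # Stand-in for the collective that averages gradients across
            # workers. On real hardware this is where communication cost
            # lives: if it is slow relative to compute, the cluster acts
            # like "lots of little machines" instead of one big one.
            return sum(values) / len(values)

        # Fake dataset for the target y = 3x, sharded across 32 workers,
        # mirroring the 32 linked CS-1s mentioned above.
        random.seed(0)
        data = [(x, 3.0 * x) for x in (random.uniform(-1, 1) for _ in range(4096))]
        n_workers = 32
        shards = [data[i::n_workers] for i in range(n_workers)]

        w, lr = 0.0, 0.1
        for step in range(100):
            grads = [local_gradient(w, s) for s in shards]  # parallel in reality
            w -= lr * allreduce_mean(grads)                 # one synchronized step

        print(f"learned w = {w:.3f}")  # converges toward 3.0

    Near-linear scaling, like the roughly 32-fold figure quoted above, means the averaging step is cheap enough that adding workers mostly adds compute rather than time spent waiting on communication.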
  • (Score: 0) by Anonymous Coward on Monday November 25 2019, @07:01AM (1 child)

    by Anonymous Coward on Monday November 25 2019, @07:01AM (#924434)

    Imagination fail.