
posted by janrinok on Wednesday November 16 2022, @12:06PM   Printer-friendly
from the it's-only-wafer-thin dept.

Hungry for AI? New supercomputer contains 16 dinner-plate-size chips

On Monday, Cerebras Systems unveiled its 13.5-million-core Andromeda AI supercomputer for deep learning, reports Reuters. According to Cerebras, Andromeda delivers more than 1 exaflop (1 quintillion operations per second) of AI computational power at 16-bit half precision.

Andromeda is itself a cluster of 16 Cerebras CS-2 computers linked together. Each CS-2 contains one Wafer Scale Engine chip (often called "WSE-2"), currently the largest silicon chip ever made at about 8.5 inches square, packed with 2.6 trillion transistors organized into 850,000 cores.
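As a quick sanity check, the advertised core count follows almost exactly from the per-chip figure. A back-of-the-envelope calculation in Python (the guess about spare cores is ours, not Cerebras's):

    # Andromeda's core count from its building blocks.
    chips = 16                 # CS-2 systems in the cluster
    cores_per_wse2 = 850_000   # cores per second-gen Wafer Scale Engine

    print(f"{chips * cores_per_wse2:,}")  # 13,600,000
    # Cerebras quotes 13.5 million, so under 1% of cores go uncounted;
    # spares held back for yield redundancy would explain the gap (speculation).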

Cerebras built Andromeda at a data center in Santa Clara, California, for $35 million. It's tuned for applications like large language models and is already in use for academic and commercial work. "Andromeda delivers near-perfect scaling via simple data parallelism across GPT-class large language models, including GPT-3, GPT-J and GPT-NeoX," writes Cerebras in a press release.
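The "simple data parallelism" Cerebras cites is the standard scheme in which every machine holds a full copy of the model, works on its own shard of each batch, and the resulting gradients are averaged before each weight update; because the workers only synchronize at that averaging step, adding machines mostly adds throughput. A minimal sketch of the idea in plain NumPy (the toy linear model and 16-worker split are illustrative assumptions, not Cerebras code):

    import numpy as np

    # Toy data-parallel step: 16 "workers" (stand-ins for CS-2 systems)
    # each compute a gradient on their shard of the batch; the averaged
    # gradient updates one shared copy of the weights.
    N_WORKERS = 16
    rng = np.random.default_rng(0)

    w = rng.normal(size=8)            # shared model weights (toy linear model)
    X = rng.normal(size=(1024, 8))    # one global batch of inputs
    y = X @ np.arange(8.0)            # targets from a known linear map

    def local_gradient(w, X_shard, y_shard):
        """Least-squares gradient on one worker's shard of the batch."""
        err = X_shard @ w - y_shard
        return 2.0 * X_shard.T @ err / len(y_shard)

    # Split the batch across workers -- the "data" in data parallelism.
    shards = zip(np.array_split(X, N_WORKERS), np.array_split(y, N_WORKERS))
    grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]

    w -= 0.01 * np.mean(grads, axis=0)   # all-reduce (here a mean), then update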

Previously: Cerebras "Wafer Scale Engine" Has 1.2 Trillion Transistors, 400,000 Cores
Cerebras Systems' Wafer Scale Engine Deployed at Argonne National Labs
Cerebras More than Doubles Core and Transistor Count with 2nd-Generation Wafer Scale Engine
The Trillion-Transistor Chip That Just Left a Supercomputer in the Dust


Original Submission

Related Stories

Cerebras "Wafer Scale Engine" Has 1.2 Trillion Transistors, 400,000 Cores 19 comments

The five technical challenges Cerebras overcame in building the first trillion transistor chip

Superlatives abound at Cerebras, the until-today stealthy next-generation silicon chip company looking to make training a deep learning model as quick as buying toothpaste from Amazon. Launching after almost three years of quiet development, Cerebras introduced its new chip today — and it is a doozy. The "Wafer Scale Engine" is 1.2 trillion transistors (the most ever), 46,225 square millimeters (the largest ever), and includes 18 gigabytes of on-chip memory (the most of any chip on the market today) and 400,000 processing cores (guess the superlative).

It's made a big splash here at Stanford University at the Hot Chips conference, one of the silicon industry's big confabs for product introductions and roadmaps, with various levels of oohs and aahs among attendees. You can read more about the chip from Tiernan Ray at Fortune and read the white paper from Cerebras itself.

Also at BBC, VentureBeat, and PCWorld.


Original Submission

Cerebras Systems' Wafer Scale Engine Deployed at Argonne National Labs 8 comments

Cerebras Unveils First Installation of Its AI Supercomputer at Argonne National Labs

At Supercomputing 2019 in Denver, Colo., Cerebras Systems unveiled the computer powered by the world's biggest chip. Cerebras says the computer, the CS-1, has the equivalent machine learning capabilities of hundreds of racks' worth of GPU-based computers consuming hundreds of kilowatts, but it takes up only one-third of a standard rack and consumes about 17 kW. Argonne National Labs, future home of what's expected to be the United States' first exascale supercomputer, says it has already deployed a CS-1. Argonne is one of two announced U.S. national laboratory customers for Cerebras, the other being Lawrence Livermore National Laboratory.

The system "is the fastest AI computer," says CEO and cofounder Andrew Feldman. He compared it with Google's TPU clusters (the 2nd of three generations of that company's AI computers), noting that one of those "takes 10 racks and over 100 kilowatts to deliver a third of the performance of a single [CS-1] box."

The CS-1 is designed to speed the training of novel and large neural networks, a process that can take weeks or longer. Powered by a 400,000-core, 1-trillion-transistor wafer-scale processor chip, the CS-1 should collapse that task to minutes or even seconds. However, Cerebras did not provide data showing this performance in terms of standard AI benchmarks such as the new MLPerf standards. Instead it has been wooing potential customers by having them train their own neural network models on machines at Cerebras.

[...] The CS-1's first application is in predicting cancer drug response as part of a U.S. Department of Energy and National Cancer Institute collaboration. It is also being used to help understand the behavior of colliding black holes and the gravitational waves they produce. A previous instance of that problem required 1024 out of 4392 nodes of the Theta supercomputer.

Also at TechCrunch, VentureBeat, and Wccftech.

Previously: Cerebras "Wafer Scale Engine" Has 1.2 Trillion Transistors, 400,000 Cores


Original Submission

Cerebras More than Doubles Core and Transistor Count with 2nd-Generation Wafer Scale Engine 20 comments

342 Transistors for Every Person In the World: Cerebras 2nd Gen Wafer Scale Engine Teased

One of the highlights of Hot Chips from 2019 was the startup Cerebras showcasing its product – a large 'wafer-scale' AI chip that was literally the size of a wafer. The chip itself was rectangular, but it was cut from a single wafer, and contained 400,000 cores, 1.2 trillion transistors, 46,225 mm² of silicon, and was built on TSMC's 16 nm process.

[...] Obviously when doing wafer scale, you can't just add more die area, so the only way is to optimize die area per core and take advantage of smaller process nodes. That means for TSMC 7nm, there are now 850,000 cores and 2.6 trillion transistors. Cerebras has had to develop new technologies to deal with multi-reticle designs, but they succeeded with the first gen, and transferred the learnings to the new chip. We're expecting more details about this new product later this year.
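Both the headline figure and the generation-over-generation jump fall out of simple division (the roughly 7.6 billion world-population figure is our assumption, picked because it reproduces the headline's 342):

    # Arithmetic behind the headline and the gen-1 -> gen-2 scaling.
    wse1_transistors, wse1_cores = 1.2e12, 400_000   # gen 1, TSMC 16 nm
    wse2_transistors, wse2_cores = 2.6e12, 850_000   # gen 2, TSMC 7 nm
    world_pop = 7.6e9                                # assumed, ~2021 estimate

    print(wse2_transistors / world_pop)         # ~342 transistors per person
    print(wse2_transistors / wse1_transistors)  # ~2.17x transistors
    print(wse2_cores / wse1_cores)              # ~2.13x cores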

Previously: Cerebras "Wafer Scale Engine" Has 1.2 Trillion Transistors, 400,000 Cores
Cerebras Systems' Wafer Scale Engine Deployed at Argonne National Labs


Original Submission

The Trillion-Transistor Chip That Just Left a Supercomputer in the Dust 25 comments

The Trillion-Transistor Chip That Just Left a Supercomputer in the Dust:

So, in a recent trial, researchers pitted the chip—which is housed in an all-in-one system about the size of a dorm room mini-fridge called the CS-1—against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application useful for solving complex problems like weather forecasting and airplane wing design.

The trial was described in a preprint paper written by a team led by Cerebras's Michael James and NETL's Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than it took the Joule 2.0 supercomputer to do a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, "It can tell you what is going to happen in the future faster than the laws of physics produce the same result."

The researchers said the CS-1's performance couldn't be matched by any number of CPUs and GPUs. And CEO and cofounder Andrew Feldman told VentureBeat that would be true "no matter how large the supercomputer is." Past a certain point, scaling up a supercomputer like Joule no longer produces better results on this kind of problem. That's why Joule's simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores.
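That peak is classic strong-scaling behavior: the problem size is fixed, so each added core gets less compute to do while communication and synchronization costs keep growing, and total throughput eventually tops out and then falls. A toy model showing the shape of the effect (the constants are illustrative, not measurements of Joule):

    # Toy strong-scaling model: fixed work split across p cores, plus a
    # communication term that grows with p. Speedup peaks, then declines --
    # qualitatively why Joule topped out at 16,384 of its 86,400 cores.
    WORK = 1.0e6    # fixed problem size (arbitrary units)
    COMM = 0.05     # per-core communication/sync cost (illustrative)

    def step_time(p):
        return WORK / p + COMM * p   # compute shrinks, communication grows

    best = min(range(1, 86_401), key=step_time)
    print(best, step_time(1) / step_time(best))  # peak core count and speedup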

Previously:
Cerebras More than Doubles Core and Transistor Count with 2nd-Generation Wafer Scale Engine
Cerebras Systems' Wafer Scale Engine Deployed at Argonne National Labs
Cerebras "Wafer Scale Engine" Has 1.2 Trillion Transistors, 400,000 Cores


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by mcgrew on Wednesday November 16 2022, @03:45PM (3 children)

    by mcgrew (701) <publish@mcgrewbooks.com> on Wednesday November 16 2022, @03:45PM (#1280038) Homepage Journal

    Your phone will be more powerful.

    I've long said that I'll do the "smart home" thing when home electronics are powerful enough that I don't have to connect to Amazon's or Microsoft's or Google's supercomputers; after all, your phone is far more powerful than Cray's most powerful machine was in 1972.

    But alas, it won't happen in the money-worshiping USA, or likely anywhere else, as I lately found out. I bought a Ring doorbell button, and I can't store the video on my own equipment (I have more than 6 TB); it's stored on Amazon's servers, and after two weeks, if I don't subscribe, I lose it. So good luck getting Alexa or Siri to work without the Fascist Big Brother corporations watching (under Fascism, government is controlled by industry. Citizens United, anyone?).

    --
    mcgrewbooks.com mcgrew.info nooze.org
    • (Score: 0) by Anonymous Coward on Wednesday November 16 2022, @05:54PM (1 child)

      by Anonymous Coward on Wednesday November 16 2022, @05:54PM (#1280063)

      I had this vision a couple of months ago of society as a corporate farm. Humans are like pigs being farmed for output. It's not a single "bad guy" doing it; it's the consequence of living in a rule-based system. Decades of smart people, and now computers, design our society - it's bigger than anyone can understand. You can fight it, for a while, but it's an inevitable consequence of our subconscious. Shitty societies eventually become democracies that eventually become human farms corralled and chaperoned by corporations that act up to the limit of the law. Like pigs in cages given just enough sustenance to prevent them dying. Breeding sows must keep turning out litters. Young piglets must jump through turnstiles on command, or get the electric prod. Nice piggies.

      • (Score: 0) by Anonymous Coward on Thursday November 17 2022, @01:55AM

        by Anonymous Coward on Thursday November 17 2022, @01:55AM (#1280145)

        Already plenty of kids (piggies) out there to keep the world going.

        You can also choose to get off the train by not having kids, a choice I made for a variety of reasons...nearly 50 years ago.

        I have plenty of fun with the kids my friends and relatives have, it's very rare that I wish I'd made a different choice.

    • (Score: 0) by Anonymous Coward on Wednesday November 16 2022, @07:39PM

      by Anonymous Coward on Wednesday November 16 2022, @07:39PM (#1280075)

      The rules are simple: you want Amazon's convenience, you pay Amazon's price.

      You can smoke cheap cigarettes, or you can roll yer own.
