
SoylentNews is people

posted by Fnord666 on Thursday November 16 2017, @01:09AM
from the core-value dept.

Qualcomm Launches 48-core Centriq for $1995: Arm Servers for Cloud Native Applications

Following on from the SoC disclosure at Hot Chips, Qualcomm has this week announced the formal launch of its new Centriq 2400 family of Arm-based SoCs for cloud applications. The top processor is a 48-core, Arm v8-compliant design made on Samsung's 10LPE FinFET process, with 18 billion transistors in a 398 mm² die. The cores are 64-bit only and are grouped into duplexes: pairs of cores sharing 512 KB of L2 cache, with the top-end design also carrying 60 MB of L3 cache. The full design has six channels of DDR4 (supporting up to 768 GB), 32 PCIe Gen 3.0 lanes, and support for Arm TrustZone, all within a 120 W TDP and priced at $1995.

We covered the design of Centriq extensively in our Hot Chips overview, including the microarchitecture, security, and new power features. What we didn't know were the exact configurations, L3 cache sizes, and a few other minor details. One key metric that semiconductor professionals are interested in is the confirmation of Samsung's 10LPE process, which Qualcomm states gave them 18 billion transistors in a 398 mm² die (45.2 MTr/mm²). This was compared to Intel's Skylake XCC chip on 14nm (37.5 MTr/mm², from an Intel talk), but we should also add Huawei's Kirin 970 on TSMC 10nm (55 MTr/mm²).
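The quoted density figure follows directly from the transistor count and die area given above; a quick sanity check:

```python
# Verify the transistor density quoted for the Centriq 2400.
# Both input figures (18 billion transistors, 398 mm^2 die) come
# straight from Qualcomm's disclosure as reported above.

transistors = 18e9       # 18 billion transistors
die_area_mm2 = 398.0     # 398 mm^2 die

density_mtr_per_mm2 = transistors / die_area_mm2 / 1e6  # millions of transistors per mm^2
print(f"{density_mtr_per_mm2:.1f} MTr/mm^2")  # -> 45.2 MTr/mm^2, matching the article
```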

Previously: Qualcomm's Centriq 2400 Demoed: A 48-Core ARM SoC for Servers


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by JoeMerchant (3937) on Thursday November 16 2017, @12:43PM (#597653)

    As I mentioned, the Pi's I/O is lame, but the basic approach of a rack full of inexpensive ARM systems would still seem to address a lot of needs (even, and perhaps especially, cloud needs) without resorting to BIG IRON.

    IBM tried making BIG IRON systems for the cloud about 10 years ago (ahead of their time, right?). Their pitch was to create an (apparently) "overpowered" system, divvy it up with a hypervisor, and then lease you the capacity you needed out of it. The appeal was supposed to be that the hardware would "scale" with demand and you only paid for what you used.

    We did an analysis of those IBM systems vs. a rack of Mac Pros, and, in the end, if you were really using all the capacity of the system, the IBM came in at roughly 25% of the flops per dollar of dedicated Intel cores.
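    The flops-per-dollar comparison described above can be sketched in a few lines. Note that every number below is a hypothetical placeholder chosen only to reproduce the rough 25% ratio; they are not the figures from the original IBM vs. Mac Pro analysis.

    ```python
    # Illustrative flops-per-dollar comparison. All prices and GFLOPS
    # figures are hypothetical, not from the original study.

    def flops_per_dollar(gflops: float, cost_usd: float) -> float:
        """Peak GFLOPS delivered per dollar of hardware cost."""
        return gflops / cost_usd

    big_iron  = flops_per_dollar(gflops=2000.0, cost_usd=200_000.0)  # hypothetical leased big-iron box
    commodity = flops_per_dollar(gflops=100.0,  cost_usd=2_500.0)    # hypothetical commodity node

    print(f"big iron delivers {big_iron / commodity:.0%} of the commodity flops per dollar")
    # -> 25% in this made-up example
    ```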

    So much of the cloud is just a stone-simple app accessing a tiny sliver of data, scaled up to many users accessing many different slivers of data, workloads where a single Pi could probably service many, many active users simultaneously without a struggle. Instead of scaling with a massively powerful chip that you slice up into thousands of services, a modest unit that you replicate thousands of times gives you the ability to match capacity to need much more closely.
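    The capacity-matching argument above can be made concrete with a toy provisioning model. The unit sizes and demand figure here are hypothetical; the point is only that smaller purchase increments waste less headroom.

    ```python
    # Toy model: capacity you must buy when hardware comes in whole units.
    # Demand and unit sizes below are hypothetical illustrations.
    import math

    def provisioned_capacity(demand: int, unit_size: int) -> int:
        """Capacity bought when you can only add whole units of a given size."""
        return math.ceil(demand / unit_size) * unit_size

    demand = 130  # hypothetical: 130 "users' worth" of load

    small_units = provisioned_capacity(demand, unit_size=10)   # e.g. Pi-class nodes
    big_iron    = provisioned_capacity(demand, unit_size=500)  # e.g. one big machine

    print(f"small units: buy {small_units}, waste {small_units - demand}")  # buy 130, waste 0
    print(f"big iron:    buy {big_iron}, waste {big_iron - demand}")        # buy 500, waste 370
    ```

    With fine-grained units the provisioned capacity tracks demand almost exactly; with one big box you pay for the whole thing up front whether you use it or not.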

    Of course, the sales appeal is: I can't predict my need, it could be MASSIVE, to which the BIG IRON sellers reply "don't worry, we've got you covered." But a system of many little workers doing many little jobs can scale up too. Have you ever seen a sweatshop full of laborers? There are many good reasons businesses use that model, and CPUs don't need healthcare coverage.
