Journal by takyon

Milan - The Next Frontier? (22m28s)

Notes from SemiAccurate's conference call (CC) with Susquehanna this morning

Various sources said things like "Milan will have 80 cores" or "Milan will have 15 chiplets".

The speculation, based on sources and other reasoning, is that the 8-core chiplet will continue to be used going forward. The chiplets have great yields compared to bigger monolithic chips, and AMD can simply make each one physically smaller over time rather than boost its core count to 10-12 cores. Zen 2 EPYC uses eight 8-core chiplets for up to 64 total cores, and a future version could use ten chiplets to get to 80 cores.

AMD and Cray will make a 1.5 exaflops supercomputer (Frontier).

In fact while AMD has kept the details on the technology light, it sounds like this version of [Infinity Fabric] will be the most advanced version yet. AMD is specifically noting that it’s an “incredibly” coherent fabric, calling it the first fully optimized CPU + GPU design for supercomputing. AMD’s GPUs and CPUs will be arranged in a 4-to-1 ratio, with 4 GPUs for each EPYC CPU. It’s worth noting that AMD’s slide shows a mesh with every GPU connected to the CPU and two other GPUs, but I’m not reading too much into this quite yet, as AMD hasn’t disclosed any other details on the IF setup.

Design and Analysis of an APU for Exascale Computing

AMD may try something like a server/HPC APU consisting of ten 8-core CPU chiplets, four GPU chiplets(?), and the I/O chiplet, with DRAM/HBM stacked on top of the I/O die, which emits less heat than the compute dies.

If the GPU thing is a red herring but Milan does have 14 CPU chiplets + 1 I/O chiplet, that's a whopping 112 cores. Even if clock speeds regressed a bit, it could offer more multithreaded performance per dollar than predecessors.
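
For reference, here is the core-count arithmetic behind the configurations above as a quick Python sketch; the 10- and 14-chiplet layouts are speculation from the sources mentioned, not anything AMD has confirmed:

    # Core counts implied by the speculated chiplet layouts (8 cores per chiplet).
    CORES_PER_CHIPLET = 8

    configs = {
        "Zen 2 EPYC 'Rome' (shipping)": 8,   # eight CCDs + one I/O die -> 64 cores
        "Rumored 80-core Milan": 10,
        "Rumored 112-core Milan": 14,        # the "15 chiplets" rumor, minus the I/O die
    }

    for name, chiplets in configs.items():
        cores = chiplets * CORES_PER_CHIPLET
        print(f"{name}: {chiplets} chiplets x {CORES_PER_CHIPLET} = {cores} cores")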

The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Monday May 13 2019, @04:36AM (#842844)

    You are forgetting that they also want to offer the ability to run up to four threads simultaneously on every core. But where is the RAM going to come from to use this? Using 448 threads and 1 GB per thread is already nearly half a TB of RAM, and that would be a limitation you need to code around for most problems (rough numbers are sketched at the end of the thread).

    Really it is hard to generalize, but I'd say you probably want at least 4 GB per thread for the kinds of tasks I run. I've got a 2990WX with 128 GB and am always coming up against this.

  • (Score: 2) by takyon (881) on Monday May 13 2019, @11:20AM (#842953) Journal

    One source in the video says it, but I don't believe it just yet.

    However, if you want 2 TB of RAM, you can do it with 8 of these:

    Samsung Shows Off 256 GB Server Memory Modules Using 16 Gb Chips [soylentnews.org]

    So what can you do with 256 GB memory modules? Intel’s upcoming Xeon Scalable “Cascade Lake” processors appear to support up to 3.84 TB of memory across all 12 DIMM slots, so by installing 12 256 GB RDIMMs, a dual-socket server could get 6 TB of memory. AMD’s existing EPYC processors officially support up to 128 GB LRDIMM memory modules and up to 2 TB of memory in total, which is logical as AMD has not yet validated 256 GB RDIMMs. If AMD finds 256 GB RDIMMs viable for its platform, it can support them by adjusting microcode of its existing EPYC processors, or just validating them with its upcoming 7nm EPYC "Rome" CPUs. In any case, 256 GB modules can enable up to 4 TB of memory per socket and up to 8 TB of RAM per 2P box in case of the AMD server platform.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
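
To put rough numbers on the memory discussion in this thread, here is a quick sketch; the 112-core and 4-thread-per-core (SMT4) figures are the rumors discussed above, not confirmed specs, and the 16-DIMM count for EPYC is only implied by the quoted 4 TB-per-socket figure:

    # Demand side: threads = cores x SMT, RAM needed = threads x GB per thread.
    cores = 112                 # rumored 14-chiplet Milan
    threads_per_core = 4        # rumored SMT4
    threads = cores * threads_per_core           # 448 threads

    for gb_per_thread in (1, 4):                 # 1 GB floor vs the 4 GB rule of thumb above
        need_gb = threads * gb_per_thread
        print(f"{threads} threads x {gb_per_thread} GB = {need_gb} GB (~{need_gb / 1024:.2f} TB)")

    # Supply side: capacity per socket = DIMM slots x module size.
    module_gb = 256
    print(f"Cascade Lake socket: 12 x {module_gb} GB = {12 * module_gb} GB")   # 3 TB, 6 TB per 2P
    print(f"EPYC socket:         16 x {module_gb} GB = {16 * module_gb} GB")   # 4 TB, 8 TB per 2P

So at the commenter's 4 GB/thread rule of thumb, a fully threaded 112-core part would want roughly 1.75 TB, which a single socket only reaches comfortably if 256 GB modules pan out.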