posted by n1 on Thursday June 22 2017, @08:09AM   Printer-friendly
from the epyc-to-ynfynyty dept.

AMD has launched its Ryzen-based take on x86 server processors to compete with Intel's Xeon CPUs. All of the Epyc 7000-series CPUs support 128 PCIe 3.0 lanes and 8 channels (2 DIMMs per channel) of DDR4-2666 DRAM:

A few weeks ago AMD announced the naming of the new line of enterprise-class processors, called EPYC, and today marks the official launch with configurations up to 32 cores and 64 threads per processor. We also got an insight into several features of the design, including the AMD Infinity Fabric.

Today's announcement of the AMD EPYC product line sees the launch of the top four CPUs, aimed primarily at dual-socket systems. The full EPYC stack will contain twelve processors, three of them for single-socket environments; the rest of the stack will be made available at the end of July. It is worth taking a few minutes to see what these processors look like under the hood.

On the package are four silicon dies, each one containing the same 8-core silicon we saw in the AMD Ryzen processors. Each silicon die has two core complexes, each of four cores, and supports two memory channels, giving a total maximum of 32 cores and 8 memory channels on an EPYC processor. The dies are connected by AMD's newest interconnect, the Infinity Fabric, which plays a key role not only in die-to-die communication but also processor-to-processor communication and within AMD's new Vega graphics. AMD designed the Infinity Fabric to be modular and scalable in order to support large GPUs and CPUs in the roadmap going forward, and states that within a single package the fabric is overprovisioned to minimize any issues with non-NUMA aware software (more on this later).
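The die/CCX arithmetic above can be sketched as follows. This is purely illustrative Python, not an AMD API; the constant names are my own, and the SMT factor of two is taken from the 32-core/64-thread figures quoted in the article:

```python
# Illustrative sketch of the EPYC package topology described above:
# four dies, each with two core complexes (CCXes) of four cores and
# two memory channels. Constant names are invented for illustration.
DIES_PER_PACKAGE = 4
CCX_PER_DIE = 2
CORES_PER_CCX = 4
MEM_CHANNELS_PER_DIE = 2
THREADS_PER_CORE = 2  # SMT, per the 64-threads-per-32-cores figure

cores = DIES_PER_PACKAGE * CCX_PER_DIE * CORES_PER_CCX
threads = cores * THREADS_PER_CORE
mem_channels = DIES_PER_PACKAGE * MEM_CHANNELS_PER_DIE

print(f"{cores} cores, {threads} threads, {mem_channels} memory channels")
# → 32 cores, 64 threads, 8 memory channels
```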

With a total of 8 memory channels, and support for 2 DIMMs per channel, AMD quotes a maximum of 2TB of memory per socket, scaling up to 4TB in a dual-processor system. Each CPU will support 128 PCIe 3.0 lanes, suitable for six GPUs with full bandwidth support (plus IO) or up to 32 NVMe drives for storage. All the PCIe lanes can be used for IO devices, such as SATA drives or network ports, or as Infinity Fabric connections to other devices. There are also 4 IO hubs per processor for additional storage support.
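As a sanity check on the totals above, here is the arithmetic spelled out. The 128GB-per-DIMM figure and the lane split (x16 per GPU with 32 lanes reserved for IO, x4 per NVMe drive) are my assumptions; the article quotes only the final numbers:

```python
# Back-of-the-envelope check of the quoted figures. The per-DIMM
# capacity and per-device lane widths are assumptions for illustration.
channels, dimms_per_channel, gb_per_dimm = 8, 2, 128
tb_per_socket = channels * dimms_per_channel * gb_per_dimm / 1024
tb_dual_socket = 2 * tb_per_socket

pcie_lanes = 128
gpus = (pcie_lanes - 32) // 16   # six x16 GPUs, 32 lanes left for IO
nvme_drives = pcie_lanes // 4    # 32 NVMe drives at x4 each

print(tb_per_socket, tb_dual_socket, gpus, nvme_drives)
# → 2.0 4.0 6 32
```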

AMD's slides at Ars Technica.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by fyngyrz on Thursday June 22 2017, @06:03PM (4 children)

    by fyngyrz (6567) on Thursday June 22 2017, @06:03PM (#529605) Journal

    I write multi-threaded software. My SDR (Software Defined Radio) currently uses 33 threads during SSB (Single SideBand) reception, and can use more if certain signal-processing options are selected or when receiving wideband FM. My image-processing software, on the other hand, determines how many cores there are and then splits each image into bands (or, in a few cases, regions) for processing. So unless you've got a CPU with more cores than the image has scan lines, every core gets tasked, unless you specifically tell the software to limit how many cores it will use (which you can do).
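The band-splitting approach the comment describes can be sketched roughly like this. This is illustrative standard-library Python, not the poster's code; the function names and the trivial per-row work are invented, and the core count is capped at the number of scan lines as the comment suggests:

```python
# A minimal sketch of splitting an image into per-core bands of scan
# lines and processing the bands in parallel. Illustrative only.
import os
from concurrent.futures import ThreadPoolExecutor

def split_into_bands(height, workers):
    """Split `height` scan lines into at most `workers` contiguous bands."""
    workers = min(workers, height)  # never more bands than scan lines
    base, extra = divmod(height, workers)
    bands, start = [], 0
    for i in range(workers):
        rows = base + (1 if i < extra else 0)
        bands.append((start, start + rows))
        start += rows
    return bands

def process_image(image_rows, row_fn, max_workers=None):
    """Apply `row_fn` to every row, one band per worker thread."""
    workers = max_workers or os.cpu_count() or 1
    bands = split_into_bands(len(image_rows), workers)
    with ThreadPoolExecutor(max_workers=len(bands)) as pool:
        done = pool.map(lambda b: [row_fn(r) for r in image_rows[b[0]:b[1]]],
                        bands)
    return [row for band in done for row in band]

image = list(range(10))                      # 10 "scan lines"
result = process_image(image, lambda r: r * 2, max_workers=4)
print(result)
# → [0, 2, 4, 6, 8, 10, 12, 14, 16, 18]
```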

    The real problem for multi-core computing isn't how many cores: The problem is memory bandwidth. Eventually, you get no significant gains because "waiting on memory" dominates over "waiting on instruction cycles."

    Eventually, unless the memory bandwidth problem is solved, adding more cores is going to start looking like the "more megapixels" nonsense with the tiny phone sensors. Yeah, you'll have 'em... but they aren't doing you much, if any, good.
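The bandwidth-ceiling argument above can be illustrated with a toy roofline-style model. All the numbers here are assumptions chosen for illustration, not measurements of any real CPU; only the 8-channel DDR4-2666 bandwidth is loosely based on the article's memory configuration:

```python
# Toy roofline model: per-core compute scales with core count, but the
# achievable rate is capped by memory bandwidth. Numbers are illustrative.
PEAK_BW_BYTES = 8 * 21.3e9   # 8 channels of DDR4-2666, ~21.3 GB/s each
FLOPS_PER_CORE = 50e9        # assumed per-core compute rate
BYTES_PER_FLOP = 2           # assumed memory traffic per operation

def effective_gflops(cores):
    compute_bound = cores * FLOPS_PER_CORE
    bandwidth_bound = PEAK_BW_BYTES / BYTES_PER_FLOP
    return min(compute_bound, bandwidth_bound) / 1e9

for n in (1, 2, 4, 16, 32):
    print(n, "cores:", round(effective_gflops(n), 1), "GFLOP/s")
```

With these assumed numbers the throughput stops improving after about two cores: the kernel is "waiting on memory" rather than "waiting on instruction cycles", exactly the effect the comment describes.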

  • (Score: 2) by bob_super on Thursday June 22 2017, @07:39PM (2 children)

    by bob_super (1357) on Thursday June 22 2017, @07:39PM (#529629)

    I've said multiple times that my compiles will take at least 8 cores (come on guys, push those tools to 16 now!), and swallow all the DDR bandwidth I can throw at them.
    Reasonably-priced 8-channel DDR4? I need a mop before someone slips on the drool puddle.

    • (Score: 0) by Anonymous Coward on Thursday June 22 2017, @07:46PM (1 child)

      by Anonymous Coward on Thursday June 22 2017, @07:46PM (#529633)

      RTFM [gnu.org] :-)

      • (Score: 2) by bob_super on Thursday June 22 2017, @08:14PM

        by bob_super (1357) on Thursday June 22 2017, @08:14PM (#529643)

        Sadly, there's no open version for my tools. Compiling HW is very specific.

  • (Score: 2) by takyon on Thursday June 22 2017, @10:41PM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Thursday June 22 2017, @10:41PM (#529689) Journal

    AMD is rumored [wccftech.com] to be adding High Bandwidth Memory to certain CPU/APU models. Intel has added HBM to Xeon Phi, which has over 70 cores. However, stacking memory onto the CPU can cause thermal problems and is expensive. It remains to be seen how widespread HBM will be with CPUs.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]