
posted by janrinok on Thursday March 27 2014, @03:03AM
from the looking-forward-to-some-pretty-pictures dept.

pbnjoe writes:

Ars Technica is reporting on new developments in GPU interconnect tech.

From the article:

Nvidia and IBM have developed an interconnect that will be integrated into future graphics processing units, letting GPUs and CPUs share data five times faster than they can now, Nvidia announced today. The fatter pipe will let data flow between the CPU and GPU at rates higher than 80GB per second, compared to 16GB per second today. NVLink, the interconnect, will be part of the newly announced Pascal GPU architecture on track for release in 2016.

GPUs have become increasingly common in supercomputing, serving as accelerators or "co-processors" to help CPUs get work done faster. In the most recent list of the world's fastest 500 supercomputers, 53 systems used co-processors and 38 of these used Nvidia chips. The second and sixth most powerful supercomputers used Nvidia chips alongside CPUs. Intel still dominates, providing processors for 82.4 percent of Top 500 systems.

"Today's GPUs are connected to x86-based CPUs through the PCI Express (PCIe) interface, which limits the GPU's ability to access the CPU memory system and is four- to five-times slower than typical CPU memory systems," Nvidia said. "PCIe is an even greater bottleneck between the GPU and IBM Power CPUs, which have more bandwidth than x86 CPUs. As the NVLink interface will match the bandwidth of typical CPU memory systems, it will enable GPUs to access CPU memory at its full bandwidth... Although future Nvidia GPUs will continue to support PCIe, NVLink technology will be used for connecting GPUs to NVLink-enabled CPUs as well as providing high-bandwidth connections directly between multiple GPUs.

 
  • (Score: 2, Interesting) by Sebastopol (2909) on Thursday March 27 2014, @03:20AM (#21900)

    That seems awfully slow; Intel's next Xeon Phi installment is already claiming 500GB/s on-die (to EDRAM) and 384GB/s to DDR4.

    http://www.extremetech.com/extreme/171678-intel-unveils-72-core-x86-knights-landing-cpu-for-exascale-supercomputing [extremetech.com]

  • (Score: 3, Informative) by visaris (2041) on Thursday March 27 2014, @12:46PM (#22016) Journal
    "Only 80GB/s?"

    From the summary: "between the CPU and GPU".

    The connection between the EDRAM and the cores on the Xeon Phi is not CPU to GPU. The memory bandwidth available to a GPU core to/from its own GPU memory is already much higher than 80GB/s. A better comparison is PCIe 3.0 x16 bandwidth (roughly 16GB/s today) versus Nvidia's stated 80GB/s; see the quick calculation below.
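    A quick back-of-the-envelope check of that comparison (plain C++; the 8 GT/s lane rate and 128b/130b encoding are PCIe 3.0 specifics, not figures from the summary) shows how 16GB/s and 80GB/s relate:

        // pcie_math.cpp -- theoretical PCIe 3.0 x16 bandwidth vs. the NVLink claim
        #include <cstdio>

        int main() {
            // PCIe 3.0: 8 GT/s per lane, 128b/130b encoding, 8 bits per byte
            double per_lane_GBs = 8e9 * (128.0 / 130.0) / 8.0 / 1e9;  // ~0.985 GB/s
            double x16_GBs = per_lane_GBs * 16.0;                     // ~15.75 GB/s per direction
            printf("PCIe 3.0 x16: %.2f GB/s; NVLink claim: 80 GB/s (~%.1fx)\n",
                   x16_GBs, 80.0 / x16_GBs);
            return 0;
        }

    which is roughly where the "five times faster" figure in the summary comes from.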