Title: GPU Interconnect Created for Faster Supercomputing
Date: Thursday March 27 2014, @03:03AM
Author: janrinok
from the looking-forward-to-some-pretty-pictures dept.
pbnjoe writes:
Ars Technica is reporting on new developments in GPU interconnect tech.
From the article:
Nvidia and IBM have developed an interconnect that will be integrated into future graphics processing units, letting GPUs and CPUs share data five times faster than they can now, Nvidia announced today. The fatter pipe will let data flow between the CPU and GPU at rates higher than 80GB per second, compared to 16GB per second today. NVLink, the interconnect, will be part of the newly announced Pascal GPU architecture on track for release in 2016.
GPUs have become increasingly common in supercomputing, serving as accelerators or "co-processors" to help CPUs get work done faster. In the most recent list of the world's fastest 500 supercomputers, 53 systems used co-processors and 38 of these used Nvidia chips. The second and sixth most powerful supercomputers used Nvidia chips alongside CPUs. Intel still dominates, providing processors for 82.4 percent of Top 500 systems.
"Today's GPUs are connected to x86-based CPUs through the PCI Express (PCIe) interface, which limits the GPU's ability to access the CPU memory system and is four- to five-times slower than typical CPU memory systems," Nvidia said. "PCIe is an even greater bottleneck between the GPU and IBM Power CPUs, which have more bandwidth than x86 CPUs. As the NVLink interface will match the bandwidth of typical CPU memory systems, it will enable GPUs to access CPU memory at its full bandwidth... Although future Nvidia GPUs will continue to support PCIe, NVLink technology will be used for connecting GPUs to NVLink-enabled CPUs as well as providing high-bandwidth connections directly between multiple GPUs.