
posted by Fnord666 on Thursday August 16 2018, @03:29AM
from the turing-machines dept.

NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More

The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core; the underpinnings of these blocks aren't fully known at this time, but they serve as dedicated ray tracing processors. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the BVH being a very popular data structure for storing objects for ray tracing.
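For a concrete sense of what an RT core offloads, here is a minimal host-side sketch (plain CUDA C++, with made-up geometry) of the Möller-Trumbore ray-triangle intersection test that such hardware performs billions of times per frame; BVH traversal merely narrows down which triangles need this test at all.

    #include <cstdio>

    struct Vec3 { float x, y, z; };

    __host__ __device__ Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    __host__ __device__ Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
    __host__ __device__ float dot(Vec3 a, Vec3 b)  { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // Moller-Trumbore ray/triangle test: returns true and the hit distance t
    // if the ray orig + t*dir passes through triangle (v0, v1, v2).
    __host__ __device__ bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float* t)
    {
        const float EPS = 1e-7f;
        Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
        Vec3 p  = cross(dir, e2);
        float det = dot(e1, p);
        if (det > -EPS && det < EPS) return false;   // ray parallel to triangle plane
        float inv = 1.0f / det;
        Vec3 s = sub(orig, v0);
        float u = dot(s, p) * inv;
        if (u < 0.0f || u > 1.0f) return false;      // misses along first barycentric axis
        Vec3 q = cross(s, e1);
        float v = dot(dir, q) * inv;
        if (v < 0.0f || u + v > 1.0f) return false;  // misses along second barycentric axis
        *t = dot(e2, q) * inv;
        return *t > EPS;                             // hit must be in front of the ray origin
    }

    int main()
    {
        // A ray shot down the +Z axis at a triangle sitting in the z = 5 plane.
        Vec3 orig{0, 0, 0}, dir{0, 0, 1};
        Vec3 v0{-1, -1, 5}, v1{1, -1, 5}, v2{0, 1, 5};
        float t;
        if (rayTriangle(orig, dir, v0, v1, v2, &t))
            printf("hit at t = %f\n", t);            // prints t = 5
        return 0;
    }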

NVIDIA states that the fastest Turing parts can cast 10 billion (giga) rays per second, a 25x improvement in ray tracing performance over the unaccelerated Pascal architecture.

The Turing architecture also carries over the tensor cores from Volta, and indeed these have even been enhanced over Volta. The tensor cores are an important aspect of multiple NVIDIA initiatives. Along with speeding up ray tracing itself, NVIDIA's other tool in their bag of tricks is to reduce the amount of rays required in a scene by using AI denoising to clean up an image, which is something the tensor cores excel at. Of course that's not the only feature tensor cores are for – NVIDIA's entire AI/neural networking empire is all but built on them – so while not a primary focus for the SIGGRAPH crowd, this also confirms that NVIDIA's most powerful neural networking hardware will be coming to a wider range of GPUs.

New to Turing is support for a wider range of precisions, and with it the potential for significant speedups in workloads that don't require high precision. On top of Volta's FP16 mode, Turing's tensor cores also support INT8 and even INT4 precisions. These are 2x and 4x faster than FP16 respectively, and while NVIDIA's presentation doesn't dive too deep here, I would imagine they're doing something similar to the data packing they use for low-precision operations on the CUDA cores. And without going too deep ourselves, while reducing the precision of a neural network has diminishing returns – by INT4 we're down to a total of just 16(!) values – there are certain models that really can get away with this very low level of precision. As a result the lower precision modes, while not always useful, will undoubtedly make some users quite happy with the throughput, especially in inferencing tasks.
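As a rough illustration of why INT4 leaves only 16 representable values, and of the kind of operand packing involved, here is a small host-side sketch; the packing scheme is illustrative only, not NVIDIA's documented internal format.

    #include <cstdio>
    #include <cstdint>

    // A signed 4-bit integer covers 2^4 = 16 values: -8 .. +7.
    // Packing narrow operands (eight INT4 or four INT8 per 32-bit word) is
    // where the 4x / 2x throughput advantage over FP16 comes from.
    int main()
    {
        int8_t a = 5, b = -3;                         // both fit in 4 bits
        uint8_t packed = (uint8_t(a) & 0x0F) | (uint8_t(b) << 4);

        // Unpack, sign-extending each 4-bit field back to 8 bits
        // (right shift of a negative value is arithmetic on mainstream compilers).
        int8_t lo = int8_t(packed << 4) >> 4;
        int8_t hi = int8_t(packed) >> 4;

        printf("packed byte = 0x%02X, lo = %d, hi = %d\n", packed, lo, hi);
        printf("INT4 range: -8 .. 7 (16 values)\n");
        return 0;
    }

(On the CUDA cores themselves, the analogous packed INT8 path has been exposed since Pascal through the __dp4a dot-product intrinsic.)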

Also of note is the introduction of GDDR6 into some GPUs. The NVIDIA Quadro RTX 8000 will come with 24 GB of GDDR6 memory and a total memory bandwidth of 672 GB/s, which compares favorably to previous-generation GPUs featuring High Bandwidth Memory. Turing supports the recently announced VirtualLink. The video encoder block has been updated to include support for 8K H.265/HEVC encoding.
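For context, the quoted 672 GB/s is consistent with a 384-bit memory bus running GDDR6 at 14 Gbps per pin; the bus width and data rate here are an assumption rather than figures from the announcement, but the arithmetic is simple:

    #include <cstdio>

    int main()
    {
        // Assumed configuration (not stated in the announcement): 384-bit bus, 14 Gbps/pin GDDR6.
        const double bus_width_bits = 384.0;
        const double data_rate_gbps = 14.0;    // gigabits per second, per pin

        double bandwidth_GBps = bus_width_bits * data_rate_gbps / 8.0;   // bits -> bytes
        printf("peak bandwidth = %.0f GB/s\n", bandwidth_GBps);          // prints 672 GB/s
        return 0;
    }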

Ray tracing combined with various shortcuts (see the linked 4m27s and 4m16s videos) could be used for good-looking results in real time.

Also at Engadget, Notebookcheck, and The Verge.

See also: What is Ray Tracing and Why Do You Want it in Your GPU?


Original Submission

 
  • (Score: 0) by Anonymous Coward on Thursday August 16 2018, @03:33AM (6 children)

    In the old days, you'd buy a machine that had features X, Y, and Z, along with a manual for how to make it do X, or Y, or Z. Then, you'd program it to do interesting things with those features.

    Yet, today, I can't figure out how the FUCK I can tell a modern machine what to do. I mean, you can't even set a Macintosh's CPU frequency, or control its fans should you have the need.

    GPUs are a whole 'nother beast; the only way to use them is to work within the awful confines of some poorly documented, proprietary API or language. How does anybody make these bloody things work for them?

  • (Score: 1, Informative) by Anonymous Coward on Thursday August 16 2018, @04:23AM (2 children)

    Obviously you've never worked with GPU programming, because it isn't as mysterious or proprietary as you think. Programming GPUs of almost any flavor is easily accomplished through DirectX, OpenGL, and Vulkan, all of which support compute shaders. The CUDA framework is also available if you are NVIDIA-specific. Difficulty-wise, it's not that hard if you are used to programming already. I can see how this might be daunting to a novice programmer, however.
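    To the parent's point, a complete CUDA program that runs work on the GPU really is about this short. A minimal sketch (error handling omitted), the classic SAXPY kernel:

        #include <cstdio>

        // Each thread handles one element: y[i] = a * x[i] + y[i] (SAXPY).
        __global__ void saxpy(int n, float a, const float* x, float* y)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) y[i] = a * x[i] + y[i];
        }

        int main()
        {
            const int n = 1 << 20;
            float *x, *y;
            cudaMallocManaged(&x, n * sizeof(float));   // unified memory, visible to CPU and GPU
            cudaMallocManaged(&y, n * sizeof(float));
            for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

            saxpy<<<(n + 255) / 256, 256>>>(n, 3.0f, x, y);   // ~4096 blocks of 256 threads
            cudaDeviceSynchronize();                          // wait for the GPU to finish

            printf("y[0] = %f\n", y[0]);                      // expect 5.0
            cudaFree(x); cudaFree(y);
            return 0;
        }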

    • (Score: 0) by Anonymous Coward on Thursday August 16 2018, @04:33AM (1 child)

      Get it yet?

      • (Score: 2) by Aiwendil on Thursday August 16 2018, @07:03AM

        You mean like the x86 instruction set?

  • (Score: 2) by shortscreen on Thursday August 16 2018, @07:16AM (1 child)

    Most x86 CPUs of recent decades have a set of MSRs (model-specific registers) for changing the clock speed. You just need to find the docs that apply to your CPU.

    Controlling fans is more complicated (I believe it involves ACPI) but possible.

    On GPUs you have a point. You need the binary blob. And as for the APIs... I've only monkeyed around with OpenGL 1.x, because that's the only one I could make any sense of (well, Glide looked easy but it's a bit obscure).
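    Regarding the MSR route mentioned above: on Linux with the msr kernel module loaded, those registers are exposed as /dev/cpu/N/msr, and reading one is a single pread() at the register's address. A minimal sketch (the register shown, IA32_PERF_STATUS at 0x198, is Intel-specific and needs root):

        #include <cstdio>
        #include <cstdint>
        #include <fcntl.h>
        #include <unistd.h>

        int main()
        {
            // Requires root and "modprobe msr".
            int fd = open("/dev/cpu/0/msr", O_RDONLY);
            if (fd < 0) { perror("open /dev/cpu/0/msr"); return 1; }

            uint64_t value = 0;
            // IA32_PERF_STATUS (0x198) reports the current P-state on Intel CPUs;
            // the corresponding control register, IA32_PERF_CTL (0x199), is what
            // gets written to change the speed.
            if (pread(fd, &value, sizeof(value), 0x198) != (ssize_t)sizeof(value)) {
                perror("pread");
                close(fd);
                return 1;
            }
            printf("IA32_PERF_STATUS = 0x%016llx\n", (unsigned long long)value);
            close(fd);
            return 0;
        }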

    • (Score: 0) by Anonymous Coward on Thursday August 16 2018, @08:51PM

      Chances are that the OP can't just write a program to control those things without also telling Mac OS X to back off; good luck figuring out how to do that in a timely manner, or without insider knowledge.

      He's right. People don't own their machines any more.

  • (Score: 0) by Anonymous Coward on Thursday August 16 2018, @10:03AM

    How does anybody make these bloody things work for them?

    Buy a game and play it?