
posted by mrpg on Tuesday August 21 2018, @07:45AM   Printer-friendly
from the so-is-it-fast? dept.

NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October

NVIDIA's Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch its next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA's GeForce RTX 20 series, which succeeds the current Pascal-powered GeForce GTX 10 series. Based on NVIDIA's new Turing GPU architecture and built on TSMC's 12nm "FFN" process, the new cards carry lofty goals: NVIDIA is looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA's most important GPU architecture since 2006's Tesla architecture (the G80 GPU), and from a features standpoint it's clear that he's not overstating matters.

[...] So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA's RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.

The big change here is that NVIDIA is including dedicated ray tracing hardware in Turing in order to offer faster and more efficient ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core; we aren't fully informed on its underpinnings at this time, but these blocks serve as dedicated ray tracing processors. They accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the BVH being a very popular data structure for organizing scene geometry for ray tracing.
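To give a sense of what the RT cores are accelerating: the ray-triangle intersection test is, in software, a small piece of vector math. Below is a minimal Python sketch of the standard Möller–Trumbore algorithm, a common way to implement this test; NVIDIA hasn't disclosed what the hardware actually does, so this is purely illustrative.

```python
def ray_triangle_intersect(orig, d, v0, v1, v2, eps=1e-8):
    """Return distance t along ray (orig, d) to triangle (v0, v1, v2), or None on a miss."""
    sub = lambda a, b: [a[i] - b[i] for i in range(3)]
    dot = lambda a, b: sum(a[i] * b[i] for i in range(3))
    cross = lambda a, b: [a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0]]

    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, e2)
    det = dot(e1, h)
    if abs(det) < eps:           # ray is parallel to the triangle's plane
        return None
    f = 1.0 / det
    s = sub(orig, v0)
    u = f * dot(s, h)            # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = f * dot(d, q)            # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)           # distance along the ray to the hit point
    return t if t > eps else None

# Ray fired straight down +z at a unit triangle lying in the z=0 plane:
print(ray_triangle_intersect([0.25, 0.25, -1.0], [0.0, 0.0, 1.0],
                             [0, 0, 0], [1, 0, 0], [0, 1, 0]))  # hits at t = 1.0
```

A real ray tracer runs millions of these tests per frame, which is why a BVH (to cull most triangles per ray) plus hardware acceleration matters so much.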

NVIDIA states that the fastest GeForce RTX part can cast 10 billion (giga) rays per second, which compared to unaccelerated Pascal is a 25x improvement in ray tracing performance.
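To put that figure in perspective, a back-of-the-envelope budget (my arithmetic, not NVIDIA's) shows how many rays per pixel per frame 10 gigarays/second buys at common resolutions and frame rates:

```python
GIGARAYS_PER_SEC = 10e9  # NVIDIA's claimed peak figure

for name, (w, h) in {"1080p": (1920, 1080), "4K": (3840, 2160)}.items():
    for fps in (30, 60):
        rays_per_pixel = GIGARAYS_PER_SEC / (w * h * fps)
        print(f"{name} @ {fps} fps: ~{rays_per_pixel:.0f} rays per pixel per frame")
```

At 4K/60 that works out to roughly 20 rays per pixel per frame, far short of the hundreds or thousands of samples offline renderers use, which is why the hybrid approach and denoising (below) are essential.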

Nvidia has confirmed that the machine learning capabilities (tensor cores) of the GPU will be used to smooth out problems with ray tracing. Real-time AI denoising (4m17s) will be used to reduce the number of samples per pixel needed to achieve photorealism.
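The reason denoising pays off: a path-traced pixel is a Monte Carlo estimate, and its noise only shrinks with the square root of the sample count, so halving visible noise costs 4x the samples. This toy illustration (generic Monte Carlo, nothing to do with NVIDIA's actual denoiser) shows that scaling:

```python
import random
import statistics

random.seed(1)

def pixel_estimate(spp):
    # Average `spp` random "light samples"; the true pixel value here is 0.5.
    return sum(random.random() for _ in range(spp)) / spp

# Noise (standard deviation across repeated renders) shrinks as 1/sqrt(spp):
for spp in (1, 16, 256):
    renders = [pixel_estimate(spp) for _ in range(2000)]
    print(f"{spp:4d} spp: noise ~ {statistics.stdev(renders):.4f}")
```

A learned denoiser short-circuits that scaling by filtering a cheap, noisy few-samples-per-pixel image into something that looks like it used many more.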

Previously: Microsoft Announces DirectX 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations

Related: Real-time Ray-tracing at GDC 2014


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @02:04PM (1 child)

    by Anonymous Coward on Tuesday August 21 2018, @02:04PM (#724160)

    You can see the reflection of the door opening at 1m33s...it handles thin objects like leaves and sheets

    Personally, it seems to me they tagged certain models and light sources to be traced and others not, while using volumetric lighting to fill out the scene a little, and that not letting you go outside that door came down to them not being able to reliably render sunlight realistically under those constraints... But considering this is a recording of a tech demo, I'd withhold further judgment for now.

    As for the VRAM, consider it future proofing.

    But that's not a future arriving in the next 2-3 years for desktops, and it's already borderline not-enough for those >1400ppi VR displays. Like, you'd need the next node for VR and the previous node would have done for the next 3 years... and this series is stuck in between. No?

  • (Score: 2) by takyon on Tuesday August 21 2018, @02:20PM

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday August 21 2018, @02:20PM (#724174) Journal

    https://www.reddit.com/r/Vive/comments/4s88iy/how_necessary_is_vram_for_vr/ [reddit.com]

    8 GB is clearly enough for anyone at a 4K resolution. 11 GB can handle 8K resolution. Both of these should be sufficient for current VR headsets and upcoming ones.

    I don't think anybody is getting the 1443 PPI [uploadvr.com] LG/Google VR display anytime soon, as in within 3 years. That display was 120 Hz as well. VR is supposed to be good for dual GPU setups, right? 2 GPUs with 11 GB VRAM each should be enough.
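    For scale, raw framebuffer math alone (render targets only; in practice textures and geometry are what actually eat VRAM) shows why 8-11 GB goes a long way at these resolutions:

```python
def framebuffer_mb(width, height, bytes_per_pixel=4):
    # One RGBA8 render target at the given resolution, in mebibytes.
    return width * height * bytes_per_pixel / 2**20

for name, (w, h) in {"4K": (3840, 2160), "8K": (7680, 4320)}.items():
    print(f"{name}: {framebuffer_mb(w, h):.1f} MB per RGBA8 buffer")
```

    Even a dozen full-resolution 8K buffers is under 2 GB, so the headroom in 11 GB is mostly for assets, not the swap chain.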

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]