
posted by mrpg on Tuesday August 21 2018, @07:45AM   Printer-friendly
from the so-is-it-fast? dept.

NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October

NVIDIA's Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch their next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA's GeForce RTX 20 series, which succeeds the current Pascal-powered GeForce GTX 10 series. The new cards are based on NVIDIA's new Turing GPU architecture and built on TSMC's 12nm "FFN" process, and NVIDIA has lofty goals for them: the company is looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA's most important GPU architecture since 2006's Tesla GPU architecture (G80 GPU), and from a features standpoint it's clear that he's not overstating matters.

[...] So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA's RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.

The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core: dedicated ray tracing processor blocks whose underpinnings we aren't fully informed on at this time. These blocks accelerate both ray-triangle intersection checks and manipulation of bounding volume hierarchies (BVHs), the latter being a very popular data structure for organizing scene objects for ray tracing.
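For a sense of the per-ray arithmetic this hardware offloads, here is a minimal software sketch of the widely used Möller-Trumbore ray-triangle intersection test (illustrative Python only; NVIDIA has not disclosed how the RT cores actually implement the check):

```python
def ray_triangle_intersect(orig, d, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore: return distance t where ray orig + t*d hits
    triangle (v0, v1, v2), or None on a miss."""
    def sub(a, b):   return [a[i] - b[i] for i in range(3)]
    def dot(a, b):   return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]

    edge1, edge2 = sub(v1, v0), sub(v2, v0)
    h = cross(d, edge2)
    det = dot(edge1, h)
    if abs(det) < eps:          # ray parallel to triangle plane
        return None
    f = 1.0 / det
    s = sub(orig, v0)
    u = f * dot(s, h)           # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, edge1)
    v = f * dot(d, q)           # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(edge2, q)       # distance along the ray
    return t if t > eps else None
```

A scene has millions of triangles and a frame casts many rays, so each ray may face a huge number of such tests; that is why this check is paired with BVH-based pruning of candidate triangles.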

NVIDIA is stating that the fastest GeForce RTX part can cast 10 billion (giga) rays per second, which the company claims is a 25x improvement in ray tracing performance over the unaccelerated Pascal architecture.

Nvidia has confirmed that the GPU's machine learning capabilities (tensor cores) will be used to smooth out problems with ray tracing. Real-time AI denoising (4m17s) will reduce the number of samples per pixel needed to achieve photorealism.
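To see why denoising lets a renderer get away with fewer samples, consider a toy model: each pixel's one-sample estimate is the true value plus noise, and even a crude neighborhood average cuts the pixel-to-pixel variance. This is purely illustrative Python; NVIDIA's denoiser is a trained neural network running on the tensor cores, not a box filter:

```python
import random

def noisy_render(width, true_value=0.5, spp=1, rng=None):
    """Simulate a 1-D row of pixels, each estimated from `spp` noisy samples."""
    rng = rng or random.Random(0)
    pixels = []
    for _ in range(width):
        samples = [true_value + rng.uniform(-0.4, 0.4) for _ in range(spp)]
        pixels.append(sum(samples) / spp)
    return pixels

def box_denoise(pixels, radius=2):
    """Average each pixel with its neighbors -- a crude stand-in for a denoiser."""
    out = []
    for i in range(len(pixels)):
        window = pixels[max(0, i - radius):i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)
```

A real denoiser uses scene features (normals, albedo, motion vectors) to avoid blurring edges, but the sample-budget trade-off it exploits is the same.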

Previously: Microsoft Announces DirectX 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations

Related: Real-time Ray-tracing at GDC 2014


Original Submission

 
  • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @10:59AM (7 children)

    by Anonymous Coward on Tuesday August 21 2018, @10:59AM (#724112)

    The demo I've seen ( https://www.youtube.com/watch?v=KJRZTkttgLw [youtube.com] ) doesn't show reflections from moving objects. Like, the rays are going through everything that's not part of the fixed environment, so characters wouldn't be able to hide in the shadow of a movable object, for instance.

    Well, I guess they'll combine traditional techniques to solve those problems... Still, it looks like they're desperately trying to justify all that RAM and all those transistors with a half-finished product that isn't really useful for games. I guess it will be useful for workstations, so that's something.

  • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @11:56AM

    by Anonymous Coward on Tuesday August 21 2018, @11:56AM (#724133)

    Well, I guess they'll combine traditional techniques to solve those problems... Still, it looks like they're desperately trying to justify all that RAM and all those transistors with a half-finished product that isn't really useful for games. I guess it will be useful for workstations, so that's something.

    Current games are not targeting these cards; all we'd expect from them is higher frame rates. The workstation market is huge, and every major image and video app uses GPU processing. The RAM and additional transistors can make a massive difference in performance there.

  • (Score: 3, Informative) by takyon on Tuesday August 21 2018, @12:02PM (5 children)

    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Tuesday August 21 2018, @12:02PM (#724134) Journal

    You can see the reflection of the door opening at 1m33s. Other instances might be due to the brightness and where the light sources are, i.e. you aren't actually supposed to see a reflection.

    Their presentation [anandtech.com] also highlights shadows. I don't think there's a problem here.

    I think one thing to look out for is whether it handles thin objects like leaves and sheets correctly (based on this [youtube.com], covering another Nvidia paper).

    I'm not sure how you arrived at the conclusion that these GPUs won't be useful for games. Leaks [wccftech.com] suggest that the RTX 2070 will be as fast as or faster than the GTX 1080, which is pretty much how Nvidia has tried to segment things for a while.

    As for the VRAM, consider it future proofing. These tests from 2015 [archive.is] show that you can fill up 4-6 GB of VRAM at 4K resolution on some titles. 2-3 4K monitors or 8K resolution could use even more (they managed to get to 7.5-8.4 GB at 8K).

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by fyngyrz on Tuesday August 21 2018, @01:14PM (2 children)

      by fyngyrz (6567) on Tuesday August 21 2018, @01:14PM (#724148) Journal

      At the claimed 4 gRays rate, my question would be "how many objects, and of what types, is each ray processing?" -- because there's quite a difference between tracing a ray through one triangle and through an actual scene region with many of them (and perhaps other types of objects as well... spheres, planes, etc.).

      To date, most such claims have foundered upon actual scene complexity.

      I've written a couple of ray tracers from scratch. I know a few things about them.

      [note:] ...and of course I didn't read TFA. Haven't even had my coffee yet. :)

      • (Score: 2) by takyon on Tuesday August 21 2018, @02:24PM (1 child)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Tuesday August 21 2018, @02:24PM (#724177) Journal

        I see claimed 6-10 GRays/s rates for the three cards that are launching. Certainly less on the unannounced 2060 and 2050 cards, assuming those even have the "RT cores" (still not clear what those are and if they are actually dedicated).

        I'm pretty sure algorithms such as this one [wikipedia.org] are being used to help handle complexity in the scene.
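        Whatever the specific algorithm behind that link, the common thread with the BVHs mentioned in the article is cheap rejection: test the ray against a bounding box before touching the triangles inside it. Here is a minimal version of the standard "slab" ray-box test (illustrative Python, not the hardware's implementation):

```python
import math

def ray_hits_aabb(orig, direction, lo, hi):
    """Slab test: does a ray from `orig` along `direction` enter the
    axis-aligned box [lo, hi]?  A BVH walks a tree of such boxes,
    skipping every triangle inside any box the ray provably misses.
    (A zero direction component with the origin exactly on a box face
    needs extra care in production code; this sketch ignores that.)"""
    tmin, tmax = -math.inf, math.inf
    for i in range(3):
        inv = 1.0 / direction[i] if direction[i] != 0 else math.inf
        t1 = (lo[i] - orig[i]) * inv   # entry/exit distances for this slab
        t2 = (hi[i] - orig[i]) * inv
        tmin = max(tmin, min(t1, t2))  # latest entry across slabs
        tmax = min(tmax, max(t1, t2))  # earliest exit across slabs
    return tmax >= max(tmin, 0.0)
```

        Walking a tree of such boxes turns a linear scan over every triangle in the scene into something closer to logarithmic work per ray, which is presumably what lets the claimed giga-rays figures survive real scene complexity.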

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by fyngyrz on Wednesday August 22 2018, @12:08AM

          by fyngyrz (6567) on Wednesday August 22 2018, @12:08AM (#724480) Journal

          I don't know where I got four from. Brain fart. I'm old. :)

    • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @02:04PM (1 child)

      by Anonymous Coward on Tuesday August 21 2018, @02:04PM (#724160)

      You can see the reflection of the door opening at 1m33s...it handles thin objects like leaves and sheets

      Personally, it seems to me they tagged certain models and light sources to be traced and others not, while using volumetric lighting to fill out the scene a little; and not letting you go outside that door was down to them not being able to reliably render sunlight realistically with those constraints in place... But considering this is a recording of a tech demo, I'd withhold further judgment for now.

      As for the VRAM, consider it future proofing.

      But that future isn't inbound for the next 2-3 years on desktops, and it's already borderline not-enough for those >1400ppi VR displays. Like, you'd need the next node for VR, and the previous node would do for the next 3 years... and this series is stuck in between. No?