from the when-you-want-a-top-end-video-card-for-research dept.
Q2VKPT is an interesting graphics research project whose goal is to create the first entirely raytraced game with fully dynamic real-time lighting, built on the Q2PRO Quake II engine. Rasterization is used only for the 2D user interface (UI).
Q2VKPT is powered by the Vulkan API, and now, with the release of the GeForce RTX graphics cards capable of accelerating ray tracing in hardware, it can get close to 60 frames per second at 1440p (2560×1440) on an RTX 2080 Ti GPU, according to project creator Christoph Schied.
The project consists of about 12K lines of code that completely replace the graphics code of Quake II. It's open source and can be freely downloaded from GitHub.
This is how path tracing + denoising (4m16s video) works.
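To make "path tracing" concrete: a path tracer estimates each pixel by averaging many randomly sampled light paths, which is why a one-sample-per-pixel image is so noisy and why a denoiser is needed at all. Here is a minimal, hypothetical CPU sketch of the idea (Q2VKPT's actual renderer is a Vulkan GPU implementation, not this code):

```cpp
// A minimal "one pixel" Monte Carlo estimator (hypothetical sketch; Q2VKPT's
// real renderer is a Vulkan GPU path tracer). We estimate the light arriving
// at a diffuse surface under a toy sky by averaging random hemisphere
// samples: more samples per pixel (spp) means less noise, and the denoiser
// is what makes the 1 spp case usable in real time.
#include <cstdio>
#include <random>

const double kPi = 3.14159265358979323846;

// Toy sky radiance: brightest straight up, dark at the horizon.
double sky(double cos_theta) { return cos_theta; }

int main() {
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> u(0.0, 1.0);
    for (int spp : {1, 16, 256, 4096}) {
        double sum = 0.0;
        for (int s = 0; s < spp; ++s) {
            // Uniform hemisphere sampling: cos(theta) is uniform in [0,1].
            double cos_theta = u(rng);
            // Monte Carlo estimator f/pdf, with pdf = 1/(2*pi).
            sum += sky(cos_theta) * cos_theta * 2.0 * kPi;
        }
        // The analytic answer is 2*pi/3 ~= 2.094; watch the noise shrink.
        std::printf("%5d spp -> %.4f\n", spp, sum / spp);
    }
}
```

The error shrinks only with the square root of the sample count, which is why Q2VKPT leans on a denoising filter rather than brute-force samples.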
Also at Phoronix.
Related: Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Related Stories
NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More
The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren't fully informed on at this time, but which serve as dedicated ray tracing processors. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.
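For reference, this is the kind of test an RT core runs in fixed function. Below is a CPU sketch of the standard Möller–Trumbore ray-triangle intersection (illustrative only; the hardware's exact method isn't public):

```cpp
// CPU sketch of a ray-triangle intersection test like the one RT cores run
// in fixed-function hardware (standard Möller-Trumbore; NVIDIA's exact
// implementation is not public). In a full tracer, a BVH traversal ensures
// only a handful of triangles ever reach this test per ray.
#include <cmath>
#include <cstdio>
#include <optional>

struct Vec3 { double x, y, z; };
Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x};
}
double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the distance t along the ray if it hits triangle (v0, v1, v2).
std::optional<double> intersect(Vec3 orig, Vec3 dir,
                                Vec3 v0, Vec3 v1, Vec3 v2) {
    const double kEps = 1e-9;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < kEps) return std::nullopt;  // ray parallel to triangle
    double inv = 1.0 / det;
    Vec3 t0 = sub(orig, v0);
    double u = dot(t0, p) * inv;                     // first barycentric coord
    if (u < 0.0 || u > 1.0) return std::nullopt;
    Vec3 q = cross(t0, e1);
    double v = dot(dir, q) * inv;                    // second barycentric coord
    if (v < 0.0 || u + v > 1.0) return std::nullopt;
    double t = dot(e2, q) * inv;                     // hit distance along ray
    return t > kEps ? std::optional<double>(t) : std::nullopt;
}

int main() {
    auto hit = intersect({0,0,0}, {0,0,1}, {-1,-1,5}, {1,-1,5}, {0,1,5});
    std::printf("hit at t = %.2f\n", hit.value_or(-1.0));  // expect 5.00
}
```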
NVIDIA is stating that the fastest Turing parts can cast 10 billion (giga) rays per second which, compared to unaccelerated Pascal, is a 25x improvement in ray tracing performance (implying Pascal manages roughly 0.4 billion rays per second).
The Turing architecture also carries over the tensor cores from Volta, and indeed these have even been enhanced over Volta. The tensor cores are an important aspect of multiple NVIDIA initiatives. Along with speeding up ray tracing itself, NVIDIA's other trick is to reduce the number of rays required in a scene by using AI denoising to clean up the image, which is something the tensor cores excel at. Of course, that's not the only thing tensor cores are for – NVIDIA's entire AI/neural networking empire is all but built on them – so while not a primary focus for the SIGGRAPH crowd, this also confirms that NVIDIA's most powerful neural networking hardware will be coming to a wider range of GPUs.
New to Turing is support for a wider range of precisions, and as such the potential for significant speedups in workloads that don't require high precisions. On top of Volta's FP16 precision mode, Turing's tensor cores also support INT8 and even INT4 precisions. These are 2x and 4x faster than FP16 respectively, and while NVIDIA's presentation doesn't dive too deep here, I would imagine they're doing something similar to the data packing they use for low-precision operations on the CUDA cores. And without going too deep ourselves here, while reducing the precision of a neural network has diminishing returns – by INT4 we're down to a total of just 16(!) values – there are certain models that really can get away with this very low level of precision. And as a result the lower precision modes, while not always useful, will undoubtedly make some users quite happy at the throughput, especially in inferencing tasks.
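To make the "16 values" point concrete, here is a hypothetical sketch of symmetric INT4 quantization – not NVIDIA's actual scheme – mapping each weight to one of 16 integers and packing two 4-bit values per byte, which is where the 4x throughput over FP16 comes from:

```cpp
// Hypothetical sketch of symmetric INT4 quantization (not NVIDIA's actual
// scheme). Each FP32 weight is mapped to one of just 16 integers in [-8, 7],
// and two 4-bit values are packed per byte.
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

int main() {
    std::vector<float> weights = {0.90f, -0.31f, 0.07f, -0.74f};

    // Scale so the largest |weight| maps to the edge of the INT4 range.
    float max_abs = 0.0f;
    for (float w : weights) max_abs = std::max(max_abs, std::fabs(w));
    float scale = max_abs / 7.0f;

    // Quantize to [-8, 7] and pack two 4-bit values per byte.
    std::vector<uint8_t> packed((weights.size() + 1) / 2, 0);
    for (size_t i = 0; i < weights.size(); ++i) {
        int q = std::clamp((int)std::lround(weights[i] / scale), -8, 7);
        uint8_t nibble = (uint8_t)(q & 0xF);
        packed[i / 2] |= (i % 2 == 0) ? nibble : (uint8_t)(nibble << 4);
    }

    // Unpack and dequantize to see the rounding error this costs.
    for (size_t i = 0; i < weights.size(); ++i) {
        int q = (packed[i / 2] >> ((i % 2) * 4)) & 0xF;
        if (q > 7) q -= 16;  // sign-extend the 4-bit value
        std::printf("%+.2f -> %+d -> %+.3f\n", weights[i], q, q * scale);
    }
}
```

Whether a model tolerates that rounding error is exactly the "diminishing returns" question raised above; inferencing workloads are the usual beneficiaries.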
Also of note is the introduction of GDDR6 on some GPUs. The NVIDIA Quadro RTX 8000 will come with 24 GB of GDDR6 memory and a total memory bandwidth of 672 GB/s (a 384-bit bus at 14 Gbps per pin: 48 bytes × 14 GT/s = 672 GB/s), which compares favorably to previous-generation GPUs featuring High Bandwidth Memory. Turing supports the recently announced VirtualLink standard, and the video encoder block has been updated to support 8K H.265/HEVC encoding.
Ray-tracing combined with various shortcuts (4m27s video, 4m16s video) could be used for good-looking results in real time.
Also at Engadget, Notebookcheck, and The Verge.
See also: What is Ray Tracing and Why Do You Want it in Your GPU?
NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October
NVIDIA's Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch their next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA's GeForce RTX 20 series, which is succeeding the current Pascal-powered GeForce GTX 10 series. Based on NVIDIA's new Turing GPU architecture and built on TSMC's 12nm "FFN" process, NVIDIA has lofty goals, looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA's most important GPU architecture since 2006's Tesla GPU architecture (G80 GPU), and from a features standpoint it's clear that he's not overstating matters.
[...] So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA's RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.
NVIDIA is stating that the fastest GeForce RTX part can cast 10 billion (giga) rays per second which, compared to unaccelerated Pascal, is a 25x improvement in ray tracing performance.
Nvidia has confirmed that the machine learning capabilities (tensor cores) of the GPU will be used to smooth out problems with ray-tracing. Real-time AI denoising (4m17s video) will be used to reduce the number of samples per pixel needed to achieve photorealism.
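The production denoisers are learned filters; as a toy stand-in for the idea, this sketch shows why filtering a noisy low-sample image recovers the signal, at the cost of blur that the trained networks are much better at avoiding:

```cpp
// Toy stand-in for AI denoising (the real denoisers are learned, edge-aware
// filters running on the tensor cores). A 1-sample Monte Carlo image is
// unbiased but noisy; averaging over a neighborhood trades a little blur
// for a large drop in variance.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

int main() {
    const int kWidth = 64;
    std::mt19937 rng(1);
    std::normal_distribution<double> noise(0.0, 0.5);

    // "Ground truth" is a smooth ramp; the "1 spp render" adds heavy noise.
    std::vector<double> truth(kWidth), noisy(kWidth), filtered(kWidth);
    for (int x = 0; x < kWidth; ++x) {
        truth[x] = x / double(kWidth - 1);
        noisy[x] = truth[x] + noise(rng);
    }

    // 9-tap box filter as the crudest possible "denoiser".
    for (int x = 0; x < kWidth; ++x) {
        double sum = 0.0;
        int n = 0;
        for (int dx = -4; dx <= 4; ++dx) {
            int xx = x + dx;
            if (xx >= 0 && xx < kWidth) { sum += noisy[xx]; ++n; }
        }
        filtered[x] = sum / n;
    }

    // Report mean squared error before and after filtering.
    double mse_noisy = 0.0, mse_filt = 0.0;
    for (int x = 0; x < kWidth; ++x) {
        mse_noisy += std::pow(noisy[x] - truth[x], 2);
        mse_filt  += std::pow(filtered[x] - truth[x], 2);
    }
    std::printf("MSE noisy:    %.4f\n", mse_noisy / kWidth);
    std::printf("MSE filtered: %.4f\n", mse_filt / kWidth);
}
```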
Previously: Microsoft Announces DirectX 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Related: Real-time Ray-tracing at GDC 2014
Crytek Demos Real-Time Raytracing for AMD and Non-RTX Nvidia GPUs
Crytek has showcased a new real-time raytracing demo which is said to run on most mainstream, contemporary GPUs from NVIDIA and AMD. The minds behind Crysis, one of the most visually impressive FPS franchises, have released their new "Noir" demo, which was run on an AMD Radeon RX Vega graphics card, showing that raytracing is possible even without an NVIDIA RTX graphics card.
[...] Crytek states that the experimental ray tracing feature based on CRYENGINE's Total Illumination used to create the demo is both API and hardware agnostic, enabling ray tracing to run on most mainstream, contemporary AMD and NVIDIA GPUs. However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.
Related: Real-time Ray-tracing at GDC 2014
Microsoft Announces DirectX 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Q2VKPT: An Open Source Game Demo with Real-Time Path Tracing
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing
Nvidia Ditches the Ray-Tracing Cores with Lower-Priced GTX 1660 Ti
NVIDIA Releases DirectX Raytracing Driver for GTX Cards; Posts Trio of DXR Demos
Last month at GDC 2019, NVIDIA revealed that they would finally be enabling public support for DirectX Raytracing on non-RTX cards. Long baked into the DXR specification itself – which is designed to encourage ray tracing hardware development while also allowing it to be implemented via traditional compute shaders – the addition of DXR support to cards without hardware acceleration is a small but important step in the deployment of the API and its underlying technology. At the time of the announcement, NVIDIA said the driver would arrive in April, and this morning it has been released.
As we covered in last month's initial announcement, this has been something of a long time coming for NVIDIA. The initial development of DXR and the first DXR demos (including the Star Wars Reflections demo) were all handled on cards without hardware RT acceleration – in particular, NVIDIA's Volta-based video cards. Microsoft provided their own fallback layer for a time, but for the public release it was up to each GPU manufacturer to provide support, including any fallback path. So we have been expecting this driver in some form for quite some time.
Of course, the elephant in the room in enabling DXR on cards without RT hardware is what it will do for performance – or perhaps the lack thereof.
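Either way, an application sees the same capability query; with the new driver, a GTX card should simply start reporting ray tracing tier 1.0, with compute shaders doing the work underneath. A minimal Windows-only sketch of that query (standard D3D12, not NVIDIA-specific):

```cpp
// How a game asks whether the driver exposes DXR at all. With this driver,
// a GTX 1080 should report Tier 1.0 just like an RTX card; the query does
// not distinguish hardware RT cores from a compute-shader fallback.
// Windows-only sketch; link against d3d12.lib.
#include <cstdio>
#include <d3d12.h>
#include <wrl/client.h>

int main() {
    Microsoft::WRL::ComPtr<ID3D12Device> device;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_12_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("no D3D12 device");
        return 1;
    }
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (SUCCEEDED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                              &opts5, sizeof(opts5)))) {
        std::printf("DXR supported: %s\n",
                    opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0
                        ? "yes (tier >= 1.0)" : "no");
    }
    return 0;
}
```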
Also at Wccftech.
See also: NVIDIA shows how much ray-tracing sucks on older GPUs
For stuff that really adds realism, like advanced shadows, global illumination, and ambient occlusion, the RTX 2080 Ti outperforms the 1080 Ti by up to a factor of six.
To cite some specific examples: Port Royal will run on the RTX 2080 Ti at 53.3 fps at 2,560 x 1,440 with advanced reflections and shadows, along with DLSS anti-aliasing, turned on. The GTX 1080, by contrast, will run at just 9.2 fps with those features enabled and won't give you any DLSS at all, effectively making the feature useless on those cards for that game. With basic reflections in Battlefield V, you'll see 30 fps on the 1080 Ti compared to 68.3 on the 2080 Ti.
Previously:
Microsoft Announces DirectX 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Q2VKPT: An Open Source Game Demo with Real-Time Path Tracing
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing
Nvidia Ditches the Ray-Tracing Cores with Lower-Priced GTX 1660 Ti
Crytek Demos Real-Time Raytracing for AMD and Non-RTX Nvidia GPUs
(Score: 2) by ledow on Monday January 21 2019, @09:51AM (2 children)
And like all these things since the days of Tenebrae Quake, etc., I think I'd rather have the non-raytraced version that runs at a decent speed on sensible hardware.
I compare it to the way that we don't have movie cameras that you can just take into a building and "film" without studio lights, metering, reflectors, etc.
Sure, it's "less realistic". You're filming people wearing make-up, under really bright lights, in a controlled set. But it makes a better movie than someone with even the most expensive "human-eye-like" camera just filming in a room made to be lit exactly as the script says (e.g. by a single candle or flaming torch or whatever).
The reason we put up lights and things and falsify the lighting is that it makes both the film-maker's and the audience's lives easier and their experience better. The reason that GTA V consists of hundreds of hand-crafted shaders and multi-pass graphics (there's an article somewhere about it – the work that goes into packing information into spare pixel channels, and the shader-work outside the scope of simply drawing something on the screen, is amazing) is threefold. First, ray-tracing it would be far more demanding on the audience's hardware (and a single dev taking a year to optimise a shader is "better" for you than everyone getting a game that runs like a stunned sloth). Second, it would be much harder to actually make it look right, because you would have to replicate a real environment, and all those explosions and bright lights wouldn't look how you want them to in a game. Third, it would then just look like "a real thing", which is almost certainly not the effect you are after in a movie or video game, no matter what you might think. And it would take as much – if not more – effort to do so than just licensing an engine, or adjusting/creating your own.
Your artists would suddenly be just model-makers, and they'd have to include the right wattage of lighting all the way down the entire underground level or you're just in a black pit and can't see anything (remember Doom?). They'd have to colour everything accordingly and spend just as much time making it work and be realistic as they would have spent just saying "Look, can we tint the floor here so the player doesn't notice, but so they can read the vital plot 'scrawl of blood' on the floor?"
Ray-tracing has been around forever; the early ray-tracing demos now just run on modern PCs. Not only are there lots of problems with them, but not one company has ever seriously made a ray-traced game. The effort involved gains them nothing. That Quake 2 demo looks just like a slightly-pretty Quake 2 to me. Though I can see what I'm supposed to notice (shadows, etc.), what I don't see is anything that a shader couldn't do ten times faster, even if it's not perfectly accurate.
(Score: 2) by pkrasimirov on Monday January 21 2019, @10:04AM
What's "realistic" is defined when the NN is trained. Once in grasp of the "reality", it can be made to an insane amount of HDR or pastel colors or just gamma or whatever.
(Score: 2) by takyon on Monday January 21 2019, @05:21PM
This is another tool in the toolbox. Some will use it to great effect, others will use it sloppily (see HDR [resetera.com]).
Artists are beginning to use machine learning to cut down on their workload. It could be applicable to the menial tasks you mention. Aspects such as level design should be even more important than visual design and will probably require the human touch, at least for now.
What is decent performance? 60 FPS? 90? 120? 240? At 4K resolution? 16K?
https://soylentnews.org/article.pl?sid=18/12/02/232213 [soylentnews.org]
https://www.darpa.mil/attachments/3DSoCProposersDay20170915.pdf [darpa.mil]
With new transistor types and 3D architectures, we could see such a huge performance increase that we will be able to do real-time raytracing, at 240 FPS, 16K resolution [soylentnews.org], in a 1 Watt chip used in a VR headset. (Of course, foveated rendering could reduce the burden for a headset chip considerably.) But long before then, this path tracing approach will work.
AMD was right to wait and see on real-time raytracing. However, while Nvidia's current hardware is overhyped early adopter stuff, they will be able to massively increase the performance in subsequent generations. Just the shrink to TSMC "7nm" alone will give them a lot of extra die space that could be used by dedicated raytracing/tensor cores.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by pkrasimirov on Monday January 21 2019, @09:58AM
Hehe, he applied neural-network processing to the noisy picture output. Much like our brain "smoothens" the raw eye data for us. I expect new phenomena out of this, like the "blind spot", the "blue/gold dress", etc.
Also worth applying to real-world noisy videos such as night cam output.
(Score: 2) by The Mighty Buzzard on Monday January 21 2019, @04:23PM (5 children)
A Quake II port only managing 1440p@60Hz? And we want these beyond pathetic numbers, why?
My rights don't end where your fear begins.
(Score: 2) by takyon on Monday January 21 2019, @04:40PM (4 children)
Because real-time ray/path tracing is the future.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by The Mighty Buzzard on Monday January 21 2019, @05:44PM (3 children)
Okay, I just held some printer paper up to the screen and tried it on this picture [wikimedia.org]. I don't get the difficulty.
My rights don't end where your fear begins.
(Score: 2) by takyon on Monday January 21 2019, @05:49PM (2 children)
You have two working eyes. Ray and GPUs are blind.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 2) by The Mighty Buzzard on Monday January 21 2019, @07:19PM (1 child)
Well the solution's obvious then, just use this one [nocookie.net] instead. Geez, you'd think they would have thought of this before.
My rights don't end where your fear begins.
(Score: 1) by khallow on Tuesday January 22 2019, @02:41AM
(Score: 0) by Anonymous Coward on Tuesday January 22 2019, @01:00AM
Is that another group of genders from the millennial gender studies lab?