
posted by mrpg on Tuesday August 21 2018, @07:45AM   Printer-friendly
from the so-is-it-fast? dept.

NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October

NVIDIA's Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch their next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA's GeForce RTX 20 series, which is succeeding the current Pascal-powered GeForce GTX 10 series. With the new cards based on NVIDIA's Turing GPU architecture and built on TSMC's 12nm "FFN" process, NVIDIA has lofty goals, looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA's most important GPU architecture since 2006's Tesla GPU architecture (G80 GPU), and from a features standpoint it's clear that he's not overstating matters.

[...] So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA's RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.

The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core; we aren't fully informed on its underpinnings at this time, but these blocks serve as dedicated ray tracing processors, accelerating both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.
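For readers unfamiliar with the primitive being accelerated, a ray-triangle intersection check can be sketched in a few lines. The classic Möller–Trumbore formulation below is purely illustrative; NVIDIA has not disclosed how the RT cores actually implement the test:

```python
# Illustrative ray-triangle intersection (Moller-Trumbore). RT cores run
# billions of tests like this per second in fixed-function hardware.

def sub(a, b):   return (a[0]-b[0], a[1]-b[1], a[2]-b[2])
def cross(a, b): return (a[1]*b[2]-a[2]*b[1], a[2]*b[0]-a[0]*b[2], a[0]*b[1]-a[1]*b[0])
def dot(a, b):   return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Return distance t along the ray to the hit point, or None on a miss."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:              # ray parallel to triangle plane
        return None
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv
    if u < 0.0 or u > 1.0:          # outside barycentric bounds
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv
    return t if t > eps else None   # hit must be in front of the origin

# A ray down the +z axis into a triangle in the z=5 plane hits at t=5:
hit = ray_triangle((0, 0, 0), (0, 0, 1), (-1, -1, 5), (1, -1, 5), (0, 1, 5))
```

In a real tracer this test only runs on the handful of triangles that survive BVH traversal, which is why the BVH manipulation hardware matters just as much.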

NVIDIA states that the fastest GeForce RTX part can cast 10 billion (giga) rays per second, a 25x improvement in ray tracing performance over unaccelerated Pascal hardware.

Nvidia has confirmed that the machine learning capabilities (tensor cores) of the GPU will be used to smooth out problems with ray-tracing. Real-time AI denoising (4m17s) will be used to reduce the number of samples per pixel needed to achieve photorealism.
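The motivation is easy to demonstrate: Monte Carlo noise in a rendered pixel falls only with the square root of the sample count, so the handful of samples per pixel a real-time budget allows is far too noisy without a denoiser. A toy sketch with made-up numbers, not NVIDIA's actual pipeline:

```python
# Toy illustration of why low sample counts need denoising: a Monte Carlo
# pixel estimate's noise shrinks only as 1/sqrt(samples), so real-time
# budgets (a few samples per pixel) leave variance a denoiser must remove.
import random
import statistics

random.seed(42)

def pixel_estimate(spp):
    # each sample stands in for one stochastic light-path evaluation
    return sum(random.uniform(0, 1) for _ in range(spp)) / spp

def noise(spp, trials=2000):
    # standard deviation of the pixel estimate across many "renders"
    return statistics.pstdev(pixel_estimate(spp) for _ in range(trials))

low_spp, high_spp = noise(4), noise(256)
# 64x more samples only buys about sqrt(64) = 8x less noise, which is
# why a denoiser that cleans up a 4-spp image is so valuable.
```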

Previously: Microsoft Announces DirectX 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations

Related: Real-time Ray-tracing at GDC 2014


Original Submission

Related Stories

Real-time Ray-tracing at GDC 2014 5 comments

"I've seen the announcement from Imagination technologies earlier in the week, but I wanted to take a look at their Ray-Tracing demos and presentations to get a better idea. I finally got an opportunity to do so, and I really liked what I saw. The latest PowerVR GPU architecture has a new Ray Tracing Unit (or RTU) that can trace the path of light rays from one surface to another in order to accelerate a number of graphics techniques going from shadowing, reflections to visibility algorithm and more." (Hubert Nguyen, http://www.ubergizmo.com/2014/03/powervr-series-6-wizard-gpu-ray-tracing/)

"Now that the dust has settled over GDC 2014, we have started collecting and analyzing the coverage from the launch of our ray tracing PowerVR Wizard GPU..."
(Alexandru Voica, http://blog.imgtec.com/news/launching-ray-tracing-powervr-wizard-gpus-gdc-2014)

Microsoft Announces DirectX 12 Raytracing API 7 comments

https://arstechnica.com/gadgets/2018/03/microsoft-announces-the-next-step-in-gaming-graphics-directx-raytracing/

At GDC, Microsoft announced a new feature for DirectX 12: DirectX Raytracing (DXR). The new API offers hardware-accelerated raytracing to DirectX applications, ushering in a new era of games with more realistic lighting, shadows, and materials. One day, this technology could enable the kinds of photorealistic imagery that we've become accustomed to in Hollywood blockbusters.

[...] Because of the performance demands, Microsoft expects that DXR will be used, at least for the time being, to fill in some of the things that raytracing does very well and that rasterization doesn't: things like reflections and shadows. DXR should make these things look more realistic. We might also see simple, stylized games using raytracing exclusively.
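The division of labor can be sketched very simply: rasterization decides what surface each pixel shows, and a per-pixel shadow ray decides whether the light actually reaches it. A toy version with a single sphere occluder (all geometry and numbers invented for illustration):

```python
# Hybrid-rendering toy: the "raster pass" hands us a visible point on a
# ground plane; the "ray pass" casts one shadow ray per pixel toward a
# point light, blocked here by a single sphere.
import math

LIGHT = (0.0, 4.0, 0.0)
SPHERE_C, SPHERE_R = (0.0, 1.0, 0.0), 0.5

def ray_hits_sphere(o, d):
    """Does the ray o + t*d (t > 0) hit the sphere? Standard quadratic test."""
    oc = tuple(o[i] - SPHERE_C[i] for i in range(3))
    a = sum(d[i] * d[i] for i in range(3))
    b = 2.0 * sum(oc[i] * d[i] for i in range(3))
    c = sum(oc[i] * oc[i] for i in range(3)) - SPHERE_R ** 2
    disc = b * b - 4 * a * c
    if disc < 0:
        return False
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t > 1e-6  # simplified: ignores hits beyond the light itself

def shade_ground_point(p):
    """Raster pass gave us point p; the shadow ray darkens it if occluded."""
    d = tuple(LIGHT[i] - p[i] for i in range(3))  # shadow ray toward light
    return 0.1 if ray_hits_sphere(p, d) else 1.0  # ambient-only vs fully lit

under = shade_ground_point((0.0, 0.0, 0.0))  # directly under the sphere
aside = shade_ground_point((3.0, 0.0, 0.0))  # well clear of the shadow
```

Rasterization answers the "what is visible" question cheaply; the rays are spent only on the effects rasterization handles poorly, which is exactly the trade-off described above.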

The company says that it has been working on DXR for close to a year, and Nvidia in particular has plenty to say about the matter. Nvidia has its own raytracing engine designed for its Volta architecture (though currently, the only video card shipping with Volta is the Titan V, so the application of this is likely limited). When run on a Volta system, DXR applications will automatically use that engine.

https://www.anandtech.com/show/12546/nvidia-unveils-rtx-technology-real-time-ray-tracing-acceleration-for-volta-gpus-and-later

In conjunction with Microsoft’s new DirectX Raytracing (DXR) API announcement, today NVIDIA is unveiling their RTX technology, providing ray tracing acceleration for Volta and later GPUs. Intended to enable real-time ray tracing for games and other applications, RTX is essentially NVIDIA's DXR backend implementation. For this NVIDIA is utilizing a mix of software and hardware – including new microarchitectural features – though the company is not disclosing further details.


Original Submission

Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations 8 comments

NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More

The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core; we aren't fully informed on its underpinnings at this time, but these blocks serve as dedicated ray tracing processors, accelerating both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.

NVIDIA states that the fastest Turing parts can cast 10 billion (giga) rays per second, a 25x improvement in ray tracing performance over unaccelerated Pascal hardware.

The Turing architecture also carries over the tensor cores from Volta, and indeed these have even been enhanced over Volta. The tensor cores are an important aspect of multiple NVIDIA initiatives. Along with speeding up ray tracing itself, NVIDIA's other tool in their bag of tricks is to reduce the amount of rays required in a scene by using AI denoising to clean up an image, which is something the tensor cores excel at. Of course that's not the only feature tensor cores are for – NVIDIA's entire AI/neural networking empire is all but built on them – so while not a primary focus for the SIGGRAPH crowd, this also confirms that NVIDIA's most powerful neural networking hardware will be coming to a wider range of GPUs.

New to Turing is support for a wider range of precisions, and as such the potential for significant speedups in workloads that don't require high precisions. On top of Volta's FP16 precision mode, Turing's tensor cores also support INT8 and even INT4 precisions. These are 2x and 4x faster than FP16 respectively, and while NVIDIA's presentation doesn't dive too deep here, I would imagine they're doing something similar to the data packing they use for low-precision operations on the CUDA cores. And without going too deep ourselves here, while reducing the precision of a neural network has diminishing returns – by INT4 we're down to a total of just 16(!) values – there are certain models that really can get away with this very low level of precision. And as a result the lower precision modes, while not always useful, will undoubtedly make some users quite happy at the throughput, especially in inferencing tasks.
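The packing idea mentioned above can be illustrated in a few lines: four signed 8-bit values share one 32-bit word, and a dot product over the packed lanes accumulates into a 32-bit integer, which is roughly what CUDA's `__dp4a` instruction does for INT8 on the CUDA cores. This Python emulation is illustrative only and says nothing about how the tensor cores actually implement their INT8/INT4 modes:

```python
# Emulated int8x4 packing and dp4a-style dot product: four signed 8-bit
# lanes in one 32-bit word, accumulated at 32-bit precision.

def pack_int8x4(vals):
    """Pack four signed 8-bit ints into one 32-bit word."""
    assert len(vals) == 4 and all(-128 <= v <= 127 for v in vals)
    word = 0
    for i, v in enumerate(vals):
        word |= (v & 0xFF) << (8 * i)   # two's-complement byte per lane
    return word

def unpack_int8x4(word):
    out = []
    for i in range(4):
        b = (word >> (8 * i)) & 0xFF
        out.append(b - 256 if b >= 128 else b)  # sign-extend each byte
    return out

def dp4a(word_a, word_b, acc=0):
    """Dot product of two packed int8x4 words with a 32-bit accumulator."""
    a, b = unpack_int8x4(word_a), unpack_int8x4(word_b)
    return acc + sum(x * y for x, y in zip(a, b))

w1 = pack_int8x4([1, -2, 3, -4])
w2 = pack_int8x4([5, 6, -7, 8])
# dp4a(w1, w2) == 1*5 + (-2)*6 + 3*(-7) + (-4)*8 == -60
```

Because the four multiplies share one instruction and one register read, the throughput scales inversely with the precision, which is where the quoted 2x (INT8) and 4x (INT4) figures come from.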

Also of note is the introduction of GDDR6 into some GPUs. The NVIDIA Quadro RTX 8000 will come with 24 GB of GDDR6 memory and a total memory bandwidth of 672 GB/s, which compares favorably to previous-generation GPUs featuring High Bandwidth Memory. Turing supports the recently announced VirtualLink. The video encoder block has been updated to include support for 8K H.265/HEVC encoding.

Ray-tracing combined with various (4m27s video) shortcuts (4m16s video) could be used for good-looking results in real time.

Also at Engadget, Notebookcheck, and The Verge.

See also: What is Ray Tracing and Why Do You Want it in Your GPU?


Original Submission

10 Reasons Linux Gamers Might Want To Pass On The NVIDIA RTX 20 Series 32 comments

Submitted via IRC for takyon

Continuing on from the NVIDIA GeForce RTX 2080 expectations on Linux shared earlier this week, here's a list of ten reasons why Linux gamers might want to pass on these soon-to-launch graphics cards from NVIDIA.

The list gives various reasons you may want to think twice about these graphics cards -- or at least hold off on pre-ordering any of them right away. Not all of the reasons are specific to the Turing GPUs per se; some concern NVIDIA's Linux infrastructure or general Linux gaming challenges. And, yes, a companion list of reasons Linux users may want to consider the RTX 20 series is coming out soon -- though it will mostly apply to developers and content creators.

Here is the list:

  1. Lack of open-source driver support
  2. It will be a while before seeing RTX/ray-tracing Linux games
  3. Turing appears to be a fairly incremental upgrade outside of RTX
  4. The GeForce GTX 1080 series already runs very well
  5. Poor Wayland support
  6. The Linux driver support for Turing is unclear
  7. These graphics cards are incredibly expensive
  8. SLI is next to worthless on Linux
  9. VR Linux support is still in rough shape
  10. Pascal prices will almost surely drop

That's the quick list outside of my detailed pre-launch Linux analysis. A similar list of the pros for the RTX 20 series on Linux will be coming out shortly. It will certainly be interesting to see after 20 September how the NVIDIA GeForce RTX 20 series works on Linux.

Source: https://www.phoronix.com/scan.php?page=news_item&px=10-Reasons-Pass-RTX-20-Linux

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance


Original Submission

Nvidia's Turing GPU Pricing and Performance "Poorly Received" 20 comments

Nvidia's Turing pricing strategy has been 'poorly received,' says Instinet

Instinet analyst Romit Shah commented Friday on Nvidia Corp.'s new Turing GPU, now that reviews of the product are out. "The 2080 TI is indisputably the best consumer GPU technology available, but at a prohibitive cost for many gamers," he wrote. "Ray tracing and DLSS [deep learning super sampling], while apparently compelling features, are today just 'call options' for when game developers create content that this technology can support."

Nvidia shares fall after Morgan Stanley says the performance of its new gaming card is disappointing

"As review embargos broke for the new gaming products, performance improvements in older games is not the leap we had initially hoped for," Morgan Stanley analyst Joseph Moore said in a note to clients on Thursday. "Performance boost on older games that do not incorporate advanced features is somewhat below our initial expectations, and review recommendations are mixed given higher price points." Nvidia shares closed down 2.1 percent Thursday.

Moore noted that Nvidia's new RTX 2080 card performed only 3 percent better than the previous generation's 1080Ti card at 4K resolutions.

Nvidia Announces Titan RTX 14 comments

Nvidia has announced its $2,500 Turing-based Titan RTX GPU. It is said to have a single precision performance of 16.3 teraflops and "tensor performance" of 130 teraflops. Double precision performance has been neutered down to 0.51 teraflops, down from 6.9 teraflops for last year's Volta-based Titan V.

The card includes 24 gigabytes of GDDR6 VRAM clocked at 14 Gbps, for a total memory bandwidth of 672 GB/s.

Drilling a bit deeper, there are really three legs to Titan RTX that set it apart from NVIDIA's other cards, particularly the GeForce RTX 2080 Ti. Raw performance is certainly one of those; we're looking at about 15% better performance in shading, texturing, and compute, and around a 9% bump in memory bandwidth and pixel throughput.

However, arguably the linchpin for NVIDIA's true desired market of data scientists and other compute users is the tensor cores. Present on all of NVIDIA's Turing cards and the heart and soul of NVIDIA's success in the AI/neural networking field, the tensor cores on the GeForce cards carry a singular limitation that is nonetheless very important to the professional market. In their highest-precision FP16 mode, Turing is capable of accumulating at FP32 for greater precision; however, on the GeForce cards this operation is limited to half-speed throughput. This limitation has been removed for the Titan RTX, and as a result it's capable of full-speed FP32 accumulation throughput on its tensor cores.

Given that NVIDIA's tensor cores have nearly a dozen modes, this may seem like an odd distinction to make between the GeForce and the Titan. However for data scientists it's quite important; FP32 accumulate is frequently necessary for neural network training – FP16 accumulate doesn't have enough precision – especially in the big money fields that will shell out for cards like the Titan and the Tesla. So this small change is a big part of the value proposition to data scientists, as NVIDIA does not offer a cheaper card with the chart-topping 130 TFLOPS of tensor performance that Titan RTX can hit.
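Why accumulation precision matters is easy to show: once a running FP16 sum grows large enough, each small addend falls below half a unit in the last place and is rounded away entirely. The sketch below uses Python's half-precision `struct` format to emulate the numerics; it illustrates the rounding behavior only, not NVIDIA's hardware:

```python
# FP16 vs FP32 accumulation: summing many small gradient-like terms.
# Rounding the running sum to FP16 after every add eventually discards
# each new term, the classic failure mode in network training.
import struct

def to_fp16(x):
    """Round a float to IEEE half precision and back."""
    return struct.unpack('<e', struct.pack('<e', x))[0]

values = [1e-3] * 10000          # many small contributions

fp32_sum = sum(values)           # full-precision accumulate: ~10.0

fp16_sum = 0.0
for v in values:
    fp16_sum = to_fp16(fp16_sum + to_fp16(v))
# The FP16 running sum stalls at 4.0: above 4, the spacing between
# representable halves (~0.0039) is more than twice each addend, so
# every further add rounds back down to 4.0.
```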

Previously: More Extreme in Every Way: The New Titan Is Here – NVIDIA TITAN Xp
Nvidia Announces Titan V
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Nvidia's Turing GPU Pricing and Performance "Poorly Received"


Original Submission

Nvidia Announces RTX 2060 GPU 10 comments

The NVIDIA GeForce RTX 2060 6GB Founders Edition Review: Not Quite Mainstream

In the closing months of 2018, NVIDIA finally released the long-awaited successor to the Pascal-based GeForce GTX 10 series: the GeForce RTX 20 series of video cards. Built on their new Turing architecture, these GPUs were the biggest update to NVIDIA's GPU architecture in at least half a decade, leaving almost no part of NVIDIA's architecture untouched.

So far we've looked at the GeForce RTX 2080 Ti, RTX 2080, and RTX 2070 – and along with the highlights of Turing, we've seen that the GeForce RTX 20 series is designed on a hardware and software level to enable realtime raytracing and other new specialized features for games. While the RTX 2070 is traditionally the value-oriented enthusiast offering, NVIDIA's higher price tags this time around meant that even this part was $500 and not especially value-oriented. Instead, it would seem that the role of the enthusiast value offering is going to fall to the next member in line of the GeForce RTX 20 family. And that part is coming next week.

Launching next Tuesday, January 15th is the 4th member of the GeForce RTX family: the GeForce RTX 2060 (6GB). Based on a cut-down version of the same TU106 GPU that's in the RTX 2070, this new part shaves off some of RTX 2070's performance, but also a good deal of its price tag in the process.

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance


Original Submission

AMD Announces Radeon VII GPU, Teases Third-Generation Ryzen CPU 15 comments

At AMD's CES 2019 keynote, CEO Lisa Su revealed the Radeon VII, a $700 GPU built on TSMC's "7nm" process. The GPU should have around the same performance and price as Nvidia's already-released RTX 2080. While it does not have any dedicated ray-tracing capabilities, it includes 16 GB of High Bandwidth Memory.

Nvidia's CEO has trashed his competitor's new GPU, calling it "underwhelming" and "lousy". Meanwhile, Nvidia has announced that it will support Adaptive Sync, the standardized version of AMD's FreeSync dynamic refresh rate and anti-screen tearing technology. Lisa Su also says that AMD is working on supporting ray tracing in future GPUs, but that the ecosystem is not ready yet.

Su also showed off a third-generation Ryzen CPU at the CES keynote, but did not announce a release date or lineup details. Like the second generation of Epyc server CPUs, the new Ryzen CPUs will be primarily built on TSMC's "7nm" process, but will include a "14nm" GlobalFoundries I/O part that includes the memory controllers and PCIe lanes. The CPUs will support PCIe 4.0.

The Ryzen 3000-series ("Matisse") should provide a roughly 15% single-threaded performance increase while significantly lowering power consumption. However, it has been speculated that the chips could include up to 16 cores or 8 cores with a separate graphics chiplet. AMD has denied that there will be a variant with integrated graphics, but Lisa Su has left the door open for 12- or 16-core versions of Ryzen, saying that "There is some extra room on that package, and I think you might expect we'll have more than eight cores". Here's "that package".

Also at The Verge.

Previously: Watch AMD's CES 2019 Keynote Live: 9am PT/12pm ET/5pm UK


Original Submission

Q2VKPT: An Open Source Game Demo with Real-Time Path Tracing 11 comments

Q2VKPT Is the First Entirely Raytraced Game with Fully Dynamic Real-Time Lighting, Runs 1440P@60FPS with RTX 2080Ti via Vulkan API

Q2VKPT [is] an interesting graphics research project whose goal is to create the first entirely raytraced game with fully dynamic real-time lighting, based on the Quake II engine Q2PRO. Rasterization is used only for the 2D user interface (UI).

Q2VKPT is powered by the Vulkan API and now, with the release of the GeForce RTX graphics cards capable of accelerating ray tracing via hardware, it can get close to 60 frames per second at 1440p (2560×1440) resolution with the RTX 2080 Ti GPU according to project creator Christoph Schied.

The project consists of about 12K lines of code which completely replace the graphics code of Quake II. It's open source and can be freely downloaded via GitHub.

This is how path tracing + denoising (4m16s video) works.

Also at Phoronix.

Related: Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance


Original Submission

AMD and Nvidia's Latest GPUs Are Expensive and Unappealing 25 comments

AMD, Nvidia Have Launched the Least-Appealing GPU Upgrades in History

Yesterday, AMD launched the Radeon VII, the first 7nm GPU. The card is intended to compete with Nvidia's RTX family of Turing-class GPUs, and it does, broadly matching the RTX 2080. It also matches the RTX 2080 on price, at $700. Because this card began life as a professional GPU intended for scientific computing and AI/ML workloads, it's unlikely that we'll see lower-end variants. That section of AMD's product stack will be filled by 7nm Navi, which arrives later this year.

Navi will be AMD's first new 7nm GPU architecture and will offer a chance to hit 'reset' on what has been, to date, the least compelling suite of GPU launches AMD and Nvidia have ever collectively kicked out the door. Nvidia has relentlessly moved its stack pricing higher while holding performance per dollar mostly constant. With the RTX 2060 and GTX 1070 Ti fairly evenly matched across a wide suite of games, the question of whether the RTX 2060 is better priced largely hinges on whether you stick to formal launch pricing for both cards or check historical data for actual price shifts.

Such comparisons are increasingly incidental, given that Pascal GPU prices are rising and cards are getting harder to find, but they aren't meaningless for people who either bought a Pascal GPU already or are willing to consider a used card. If you're an Nvidia fan already sitting on top of a high-end Pascal card, Turing doesn't offer you a great deal of performance improvement.

AMD has not covered itself in glory, either. The Radeon VII is, at least, unreservedly faster than the Vega 64. There's no equivalent last-generation GPU in AMD's stack to match it. But it also duplicates the Vega 64's overall power and noise profile, limiting the overall appeal, and it matches the RTX 2080's bad price. A 1.75x increase in price for a 1.32x increase in 4K performance isn't a great ratio even by the standards of ultra-high-end GPUs, where performance typically comes with a price penalty.
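The perf-per-dollar arithmetic behind that judgment, using only the ratios quoted above:

```python
# Performance-per-dollar change implied by the quoted ratios:
# 1.75x the price buys 1.32x the 4K performance.
price_ratio = 1.75
perf_ratio = 1.32

perf_per_dollar_ratio = perf_ratio / price_ratio  # ~0.754

# i.e. the newer card delivers roughly 25% less performance per dollar
# than the last-generation card it is being compared against.
```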

Rumors and leaks have suggested that Nvidia will release a Turing-based GPU called the GTX 1660 Ti (which has also been referred to as "1160"), with a lower price but missing the dedicated ray-tracing cores of the RTX 2000-series. AMD is expected to release "7nm" Navi GPUs sometime during 2019.

Radeon VII launch coverage also at AnandTech, Tom's Hardware.

Related: AMD Returns to the Datacenter, Set to Launch "7nm" Radeon Instinct GPUs for Machine Learning in 2018
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
AMD Announces "7nm" Vega GPUs for the Enterprise Market
Nvidia Announces RTX 2060 GPU
AMD Announces Radeon VII GPU, Teases Third-Generation Ryzen CPU
AMD Responds to Radeon VII Short Supply Rumors


Original Submission

Nvidia Ditches the Ray-Tracing Cores with Lower-Priced GTX 1660 Ti 6 comments

The NVIDIA GeForce GTX 1660 Ti Review, Feat. EVGA XC GAMING: Turing Sheds RTX for the Mainstream Market

When NVIDIA put their plans for their consumer Turing video cards into motion, the company bet big, and in more ways than one. In the first sense, NVIDIA dedicated whole logical blocks to brand-new graphics and compute features – ray tracing and tensor core compute – and they would need to sell developers and consumers alike on the value of these features, something that is no easy task. In the second sense however, NVIDIA also bet big on GPU die size: these new features would take up a lot of space on the 12nm FinFET process they'd be using.

The end result is that all of the Turing chips we've seen thus far, from TU102 to TU106, are monsters in size; even TU106 is 445mm2, never mind the flagship TU102. And while the full economic consequences that go with that decision are NVIDIA's to bear, for the first year or so of Turing's life, all of that die space that is driving up NVIDIA's costs isn't going to contribute to improving NVIDIA's performance in traditional games; it's a value-added feature. Which is all workable for NVIDIA in the high-end market where they are unchallenged and can essentially dictate video card prices, but it's another matter entirely once you start approaching the mid-range, where the AMD competition is alive and well.

Consequently, in preparing for their cheaper, sub-$300 Turing cards, NVIDIA had to make a decision: do they keep the RT and tensor cores in order to offer these features across the line – at a literal cost to both consumers and NVIDIA – or do they drop these features in order to make a leaner, more competitive chip? As it turns out, NVIDIA has opted for the latter, producing a new Turing GPU that is leaner and meaner than anything that's come before it, but also very different from its predecessors for this reason.

That GPU is TU116, and it's part of what will undoubtedly become a new sub-family of Turing GPUs for NVIDIA as the company starts rolling out Turing into the lower half of the video card market. Kicking things off in turn for this new GPU is NVIDIA's latest video card, the GeForce GTX 1660 Ti. Launching today at $279, it's destined to replace NVIDIA's GTX 1060 6GB in the market and is NVIDIA's new challenger for the mainstream video card market.

Compared to the RTX 2060 Founders Edition, the GTX 1660 Ti has fewer CUDA cores, a lower memory clock, and the same amount of VRAM (6 GB), but it has higher core/boost clocks and a lower TDP (120 W vs. 160 W). The GTX 1660 Ti has roughly 85% the performance of the RTX 2060, at 80% the MSRP ($279 vs. $349).

Crytek Demos Real-Time Raytracing for AMD and Non-RTX Nvidia GPUs 5 comments

Crytek Demos Noir, a CRYENGINE Based Real-Time Raytracing Demo on AMD Radeon RX Vega 56 – Can Run on Most Mainstream, Contemporary AMD and NVIDIA GPUs

Crytek has showcased a new real-time raytracing demo which is said to run on most mainstream, contemporary GPUs from NVIDIA and AMD. The minds behind one of the most visually impressive FPS franchise, Crysis, have their new "Noir" demo out which was run on an AMD Radeon RX Vega graphics card which shows that raytracing is possible even without an NVIDIA RTX graphics card.

[...] Crytek states that the experimental ray tracing feature based on CRYENGINE's Total Illumination used to create the demo is both API and hardware agnostic, enabling ray tracing to run on most mainstream, contemporary AMD and NVIDIA GPUs. However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.

Related: Real-time Ray-tracing at GDC 2014
Microsoft Announces DirectX 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Q2VKPT: An Open Source Game Demo with Real-Time Path Tracing
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing
Nvidia Ditches the Ray-Tracing Cores with Lower-Priced GTX 1660 Ti


Original Submission

Nvidia Enables Support for DirectX Raytracing on Non-RTX GPUs, Results Lackluster 11 comments

NVIDIA Releases DirectX Raytracing Driver for GTX Cards; Posts Trio of DXR Demos

Last month at GDC 2019, NVIDIA revealed that they would finally be enabling public support for DirectX Raytracing on non-RTX cards. Long baked into the DXR specification itself – which is designed [to] encourage ray tracing hardware development while also allowing it to be implemented via traditional compute shaders – the addition of DXR support in cards without hardware support for it is a small but important step in the deployment of the API and its underlying technology. At the time of their announcement, NVIDIA announced that this driver would be released in April, and now this morning, NVIDIA is releasing the new driver.

As we covered in last month's initial announcement of the driver, this has been something of a long time coming for NVIDIA. The initial development of DXR and the first DXR demos (including the Star Wars Reflections demo) were all handled on cards without hardware RT acceleration; in particular NVIDIA Volta-based video cards. Microsoft used their own fallback layer for a time, but for the public release it was going to be up to GPU manufacturers to provide support, including their own fallback layer. So we have been expecting the release of this driver in some form for quite some time.

Of course, the elephant in the room in enabling DXR on cards without RT hardware is what it will do for performance – or perhaps the lack thereof.

Also at Wccftech.

See also: NVIDIA shows how much ray-tracing sucks on older GPUs

[For] stuff that really adds realism, like advanced shadows, global illumination and ambient occlusion, the RTX 2080 Ti outperforms the 1080 Ti by up to a factor of six.

To cite some specific examples, Port Royal will run on the RTX 2080 Ti at 53.3 fps at 2,560 x 1,440 with advanced reflections and shadows, along with DLSS anti-aliasing, turned on. The GTX 1080, on the other hand, will run at just 9.2 fps with those features enabled and won't give you any DLSS at all. That effectively makes the feature useless on those cards for that game. With basic reflections on Battlefield V, on the other hand, you'll see 30 fps on the 1080 Ti compared to 68.3 on the 2080 Ti.



Original Submission

Nvidia Refreshes RTX 2000-Series GPUs With "Super" Branding 9 comments

The GeForce RTX 2070 Super & RTX 2060 Super Review: Smaller Numbers, Bigger Performance

NVIDIA is launching a mid-generation kicker for their mid-to-high-end video card lineup in the form of their GeForce RTX 20 series Super cards. Based on the same family of Turing GPUs as the original GeForce RTX 20 series cards, these new Super cards – all suffixed Super, appropriately enough – come with new configurations and new clockspeeds. They are, essentially, NVIDIA's 2019 card family for the $399+ video card market.

When they are released on July 9th, the GeForce RTX 20 series Super cards are going to be sharing store shelves with the rest of the GeForce RTX 20 series cards. Some cards like the RTX 2080 and RTX 2070 are set to go away, while other cards like the RTX 2080 Ti and RTX 2060 will remain on the market as-is. In practice, it's probably best to think of the new cards as NVIDIA executing either a price cut or a spec bump – depending on whether you see the glass as half-empty or half-full – all without meaningfully changing their price tiers.

In terms of performance, the RTX 2060 and RTX 2070 Super cards aren't going to bring anything new to the table. In fact if we're being blunt, the RTX 2070 Super is basically a slightly slower RTX 2080, and the RTX 2060 Super may as well be the RTX 2070. So instead, what has changed is the price that these performance levels are available at, and ultimately the performance-per-dollar ratios in parts of NVIDIA's lineup. The performance of NVIDIA's former $699 and $499 cards will now be available for $499 and $399, respectively. This leaves the vanilla RTX 2060 to hold the line at $349, and the upcoming RTX 2080 Super to fill the $699 spot. Which means if you're in the $400-$700 market for video cards, your options are about to get noticeably faster.

Also at Tom's Hardware, The Verge, and Ars Technica.

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Nvidia Announces RTX 2060 GPU
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing

Related: AMD and Intel at Computex 2019: First Ryzen 3000-Series CPUs and Navi GPU Announced
AMD Details Three Navi GPUs and First Mainstream 16-Core CPU


Original Submission

  • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @09:52AM (1 child)

    by Anonymous Coward on Tuesday August 21 2018, @09:52AM (#724091)

    ... that Egyptians mastered mummification long before pharaohs?

    • (Score: 2, Informative) by Anonymous Coward on Tuesday August 21 2018, @10:29AM

      by Anonymous Coward on Tuesday August 21 2018, @10:29AM (#724101)

      Yes, but the Nvidia GeForce RTX 20 mummifies them 25 times faster than ever before.

  • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @10:32AM (7 children)

    by Anonymous Coward on Tuesday August 21 2018, @10:32AM (#724102)

    TFA has no mention of how these new graphics cards can be used to mine crypto currencies. Isn't that what graphic cards are primarily used for these days?

    • (Score: 4, Informative) by takyon on Tuesday August 21 2018, @10:47AM (6 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday August 21 2018, @10:47AM (#724107) Journal

      Nope, it's over:

      Cryptocurrency miners' demand for Nvidia computer chips evaporates [latimes.com]

      Nvidia Corp.’s nine-month crypto gold rush is over.

      Sales of graphics chips to miners of cryptocurrencies such as ethereum dried up faster than expected, the Santa Clara company said. For a second quarter in a row, investors ignored Nvidia’s growth in its main markets and ditched the stock.

      Nvidia’s stock fell 4.9% to $244.82 a share Friday.

      “Our core platforms exceeded our expectations, even as crypto largely disappeared,” founder and Chief Executive Jensen Huang said Thursday on a conference call. “We’re projecting no cryptomining going forward.”

      [...] Nvidia said it had expected about $100 million in sales of chips bought by currency miners in the fiscal second quarter. Instead, the total was $18 million in the period, and that revenue is likely to disappear entirely in future quarters, the company said.

      Investors are expressing their concern at the sudden collapse of what had looked like a billion-dollar business. Three months ago, Nvidia said it generated $289 million in sales from cryptocurrency miners, but warned that demand was declining rapidly and might fall by as much as two-thirds. Even that prediction was too optimistic.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @11:02AM (4 children)

        by Anonymous Coward on Tuesday August 21 2018, @11:02AM (#724113)

        And yet the card is priced and specced for crypto mining & HPC.

        • (Score: 2) by takyon on Tuesday August 21 2018, @11:38AM (3 children)

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday August 21 2018, @11:38AM (#724128) Journal

          HPC != cryptomining, and they're priced high because Nvidia effectively has no real competition from AMD right now [extremetech.com]. They also want to make sure they get rid of existing GeForce 10 inventory. The prices will get cut in a few months or when AMD launches new GPUs, as is typical.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2) by EvilSS on Tuesday August 21 2018, @07:25PM (2 children)

            by EvilSS (1456) Subscriber Badge on Tuesday August 21 2018, @07:25PM (#724325)
            AMD won't have anything new out until probably near the end of Q1 or beginning of Q2 next year. Even then, I'd bet what they release won't be able to compete with the high-end NV cards.
            • (Score: 2) by bob_super on Tuesday August 21 2018, @08:02PM (1 child)

              by bob_super (1357) on Tuesday August 21 2018, @08:02PM (#724334)

              > Even then, I'd bet what they release won't be able to compete with the high-end NV cards.

              That's what Intel said ...
              *crosses fingers*

              • (Score: 2) by takyon on Tuesday August 21 2018, @10:27PM

                by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday August 21 2018, @10:27PM (#724421) Journal

                https://wccftech.com/exclusive-amd-navi-gpu-roadmap-cost-zen/ [wccftech.com]

                AMD diverted resources from Vega to Ryzen. They are such a small scale company that they couldn't seem to do both properly. However, the cryptomining boom created such demand for GPUs that they managed to do OK with Vega. The head of AMD's graphics division Raja Koduri left because of this and other issues, and is now at Intel helping them get back into the discrete graphics market with a 2020 GPU release.

                AMD
                Revenue: $5.33 billion (2017)
                Net income: $43 million (2017)

                Nvidia
                Revenue: $9.714 billion (2017)
                Net income: $3.047 billion (2017)

                Intel
                Revenue: $62.76 billion (2017)
                Net income: $9.601 billion (2017)

                It's AMDavid vs. 2 Goliaths. In order to damage one, AMD had to ignore the other. Soon, AMD may be challenged on discrete GPUs by both Intel and Nvidia, while also being challenged by Intel on integrated graphics and x86 CPUs. Intel did use AMD integrated graphics on some recent chips, but that could be a temporary ploy to combat laptops with Nvidia discrete cards.

                --
                [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by c0lo on Tuesday August 21 2018, @03:00PM

        by c0lo (156) on Tuesday August 21 2018, @03:00PM (#724193) Journal

        Oh, thanks God for that.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0
  • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @10:59AM (7 children)

    by Anonymous Coward on Tuesday August 21 2018, @10:59AM (#724112)

    The demo I've seen ( https://www.youtube.com/watch?v=KJRZTkttgLw [youtube.com] ) doesn't show reflections from moving objects. Like, the rays are going through everything that isn't part of the fixed environment, so characters wouldn't be able to hide in the shadow of a movable object, for instance.

    Well, I guess they'll combine traditional techniques to solve those problems... Still, it looks like they're desperately trying to justify all that RAM and all those transistors with a half-finished product that isn't really useful for games. I guess it will be useful for workstations, so that's something.

    • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @11:56AM

      by Anonymous Coward on Tuesday August 21 2018, @11:56AM (#724133)

      Well, I guess they'll combine traditional techniques to solve those problems... Still, it looks like they're desperately trying to justify all that RAM and all those transistors with a half-finished product that isn't really useful for games. I guess it will be useful for workstations, so that's something.

      Current games are not targeting these cards, all we'd expect from games is higher frame rates. The workstation market is huge, every major image and video app is using GPU processing. The RAM and additional transistors can make a massive difference in performance there.

    • (Score: 3, Informative) by takyon on Tuesday August 21 2018, @12:02PM (5 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday August 21 2018, @12:02PM (#724134) Journal

      You can see the reflection of the door opening at 1m33s. Other instances might be due to the brightness and where the light sources are, i.e. you aren't actually supposed to see a reflection.

      Their presentation [anandtech.com] also highlights shadows. I don't think there's a problem here.

      I think one thing to look out for is whether it handles thin objects like leaves and sheets correctly (based on this [youtube.com], covering another Nvidia paper).

      I'm not sure how you arrived at the conclusion that these GPUs won't be useful for games. Leaks [wccftech.com] suggest that the RTX 2070 will be as fast/faster than the GTX 1080, which is pretty much how Nvidia has tried to segment things for a while.

      As for the VRAM, consider it future proofing. These tests from 2015 [archive.is] show that you can fill up 4-6 GB of VRAM at 4K resolution on some titles. 2-3 4K monitors or 8K resolution could use even more (they managed to get to 7.5-8.4 GB at 8K).
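      For a sense of scale, the raw framebuffer is actually the cheap part; textures and render targets dominate real VRAM use. A back-of-the-envelope sketch, illustrative only:

      ```python
      # Back-of-the-envelope framebuffer sizes. Illustrative only: actual VRAM
      # consumption is dominated by textures and render targets, not one buffer.
      def framebuffer_mib(width, height, bytes_per_pixel=4):
          """Size of a single RGBA8 color buffer in MiB."""
          return width * height * bytes_per_pixel / (1024 ** 2)

      print(f"4K: {framebuffer_mib(3840, 2160):.1f} MiB")   # 31.6 MiB
      print(f"8K: {framebuffer_mib(7680, 4320):.1f} MiB")   # 126.6 MiB
      ```

      So even at 8K a single color buffer is a rounding error next to 8 GB of VRAM; the multi-gigabyte totals in those tests come from everything else a game keeps resident.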

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by fyngyrz on Tuesday August 21 2018, @01:14PM (2 children)

        by fyngyrz (6567) on Tuesday August 21 2018, @01:14PM (#724148) Journal

        At the claimed 4 gRays rate, my question would be "how many objects, and of what types, is each ray processing" -- because there's quite a difference between tracing a ray through one triangle and an actual scene region with many of them (and perhaps other types of objects as well... spheres, planes, etc.)
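        For scale, here is what a single one of those checks looks like: a minimal Möller-Trumbore ray/triangle intersection in pure Python. This is the generic textbook test, not necessarily what Nvidia's RT cores implement:

        ```python
        # Minimal Möller-Trumbore ray/triangle intersection. This is the textbook
        # per-ray unit of work; a real scene runs it (plus BVH traversal) many
        # millions of times per frame, which is why raw GRays/s figures alone
        # say little about scene-level performance.
        def sub(a, b):  return tuple(x - y for x, y in zip(a, b))
        def dot(a, b):  return sum(x * y for x, y in zip(a, b))
        def cross(a, b):
            return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

        def ray_triangle(orig, direction, v0, v1, v2, eps=1e-9):
            """Return distance t along the ray to the hit point, or None on a miss."""
            e1, e2 = sub(v1, v0), sub(v2, v0)
            h = cross(direction, e2)
            a = dot(e1, h)
            if abs(a) < eps:            # ray parallel to triangle plane
                return None
            f = 1.0 / a
            s = sub(orig, v0)
            u = f * dot(s, h)
            if u < 0.0 or u > 1.0:      # outside first barycentric bound
                return None
            q = cross(s, e1)
            v = f * dot(direction, q)
            if v < 0.0 or u + v > 1.0:  # outside second barycentric bound
                return None
            t = f * dot(e2, q)
            return t if t > eps else None

        # Ray straight down -z onto a triangle in the z=0 plane: hit at t = 1.0
        print(ray_triangle((0.25, 0.25, 1.0), (0, 0, -1), (0, 0, 0), (1, 0, 0), (0, 1, 0)))
        ```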

        To date, most such claims have foundered upon actual scene complexity.

        I've written a couple of ray tracers from scratch. I know a few things about them.

        [note:] ...and of course I didn't read TFA. Haven't even had my coffee yet. :)

        • (Score: 2) by takyon on Tuesday August 21 2018, @02:24PM (1 child)

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Tuesday August 21 2018, @02:24PM (#724177) Journal

          I see claimed 6-10 GRays/s rates for the three cards that are launching. Certainly less on the unannounced 2060 and 2050 cards, assuming those even have the "RT cores" (still not clear what those are and if they are actually dedicated).

          I'm pretty sure algorithms such as this one [wikipedia.org] are being used to help handle complexity in the scene.
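          The reason a bounding volume hierarchy helps is that a ray/box "slab" test is far cheaper than testing every triangle, so whole subtrees of the scene get skipped. A toy version of the box test (a sketch of the general technique, not anything Nvidia has disclosed about the RT cores):

          ```python
          # "Slab" ray/AABB test, the cheap check at the heart of BVH traversal:
          # if the ray misses a node's bounding box, every triangle inside that
          # subtree can be skipped without individual intersection tests.
          def ray_hits_aabb(orig, inv_dir, box_min, box_max):
              """inv_dir holds the per-component reciprocals of the ray direction."""
              tmin, tmax = 0.0, float("inf")
              for o, inv, lo, hi in zip(orig, inv_dir, box_min, box_max):
                  t1, t2 = (lo - o) * inv, (hi - o) * inv
                  tmin = max(tmin, min(t1, t2))
                  tmax = min(tmax, max(t1, t2))
              return tmin <= tmax   # the entry point is before the exit point

          # Diagonal ray from the origin: hits a box in its path, misses one behind it.
          print(ray_hits_aabb((0, 0, 0), (1.0, 1.0, 1.0), (1, -1, -1), (2, 2, 2)))      # True
          print(ray_hits_aabb((0, 0, 0), (1.0, 1.0, 1.0), (-5, -5, -5), (-4, -4, -4)))  # False
          ```

          With a well-built tree this turns a linear scan over millions of triangles into a logarithmic descent, which is presumably why TFS calls out BVH manipulation as one of the two things the RT cores accelerate.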

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2) by fyngyrz on Wednesday August 22 2018, @12:08AM

            by fyngyrz (6567) on Wednesday August 22 2018, @12:08AM (#724480) Journal

            I don't know where I got four from. Brain fart. I'm old. :)

      • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @02:04PM (1 child)

        by Anonymous Coward on Tuesday August 21 2018, @02:04PM (#724160)

        You can see the reflection of the door opening at 1m33s...it handles thin objects like leaves and sheets

        Personally, it seems to me they tagged certain models and light sources to be traced and others not, while using volumetric lighting to fill out the scene a little, and that not letting you go outside that door came down to them not being able to reliably render sunlight realistically under those constraints... But considering this is a recording of a tech demo, I'd withhold further judgment for now.

        As for the VRAM, consider it future proofing.

        But it's not a future that arrives within the next 2-3 years for desktops, and it's already borderline not-enough for those >1400ppi VR displays. Like, you'd need the next node for VR, while the previous node would do for the next 3 years... and this series is stuck in between. No?

  • (Score: 4, Interesting) by MichaelDavidCrawford on Tuesday August 21 2018, @12:45PM

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Tuesday August 21 2018, @12:45PM (#724142) Homepage Journal

    Also complex reflectors such as the Ritchey-Chrétien [starizona.com] of which the Hubble is an example. Most big scopes other than the Palomar 200" are Ritchey-Chrétiens.

    The main advantages of the Ritchey-Chrétien are a long focal length - high magnification - with a shorter tube than is possible with the Cassegrain [starizona.com], as well as a much wider field of view. The disadvantage of the Ritchey-Chrétien is that it can _only_ be used at that one long focal length, whereas with the Cassegrain one can remove the Hyperboloid secondary mirror and have a flat focal plane at a much shorter focal length but with a poor field of view, or possibly add a 45-degree mirror for a Newtonian [starizona.com], which was the most common type of commercially available scope until the early eighties.

    The very _first_ C program I ever attempted to write was for lens ray-tracing.

    These kinds of programs generally include automatic design optimization. In principle you can give it the Schott Optical Glass catalog, tell it what you want in the way of a focal length, field of view and how many lenses you want - a scope for eyeball astronomy or a spotting scope has two elements, a biconvex Crown Glass lens and a concave/flat Flint Glass lens, for reasonably good correction of Chromatic Aberration, whereas the Cooke Triplet [briansmith.com] is commonly used for wide field of view astrophotography.
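    The crown/flint pairing can be sketched with the classic thin-lens achromat condition φ1/V1 + φ2/V2 = 0, where φ is lens power and V the Abbe number. The Abbe numbers below are typical catalog values (a BK7-like crown and an F2-like flint), assumed here purely for illustration:

    ```python
    # Thin-lens achromatic doublet: split a target power between a crown and a
    # flint element so the primary chromatic aberration cancels.
    # Achromat condition (thin lenses in contact): phi1/V1 + phi2/V2 = 0.
    # Abbe numbers are assumed typical values: crown ~64 (BK7-ish), flint ~36 (F2-ish).
    def achromat_focal_lengths(total_focal_mm, v_crown, v_flint):
        """Return (crown, flint) element focal lengths in mm for a given doublet."""
        phi = 1.0 / total_focal_mm                       # total power
        phi_crown = phi * v_crown / (v_crown - v_flint)  # positive crown element
        phi_flint = -phi * v_flint / (v_crown - v_flint) # negative flint element
        return 1.0 / phi_crown, 1.0 / phi_flint

    f_crown, f_flint = achromat_focal_lengths(500.0, 64.0, 36.0)
    print(f"crown: {f_crown:.1f} mm, flint: {f_flint:.1f} mm")
    ```

    The crown ends up strongly positive and the flint weakly negative, which is exactly the biconvex-crown / concave-flint arrangement described above.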

    As you might expect I didn't do much ahead of time design on that first ray tracer however I did think about it a lot starting when I was thirteen. I took Physics 20 Computational Physics then on a 16-Bit 8086 IBM XT I wrote the entire program before I tried to get it to compile. The first time I tried to compile there were oodles of errors. It took me about a week to fix all those, then I tried to run it and of course got a GPF.

    At that point I gave up. Fortunately I did a whole lot better with Pascal that same term in CD 10, Introduction To Algorithms in which our final project was a full-featured color vector graphic editor that didn't support saving to disk or printing but was otherwise equivalent to the 8-Bit color MacDraw, but with a tablet interface.

    We used the 68000-based HP Chipmunk with the UCSD P-System. The 8086 DOS XT totally 5ux0r3d compared to the 68k with the P-System. And in fact I still have all my floppies for both boxen and will be getting them out of storage sometime later this year.

    I used to have a real extension page about amateur telescope making but then that site dropped dead. I'll put up a placeholder page [warplife.com] then move my old pages over to warplife.com this weekend.

    When I lived in Owl's Head, Maine - in the Midcoast region, near Rockland - one of the few things that gave me real relief from the Dot-Com Crash was to work on an eight-inch Ritchey-Chrétien that was intended to be a scope I could take in airline carry-on luggage. That's a real good application for an amateur RC.

    But I've decided to abandon the effort and start all over again, in part because I drilled too far through the primary, so that the plug fell out. You don't want to do that; rather, what you do is drill most of the way toward the end of fine grinding, so as to relieve stresses in the glass, then cut all the way through after you're done figuring. Cutting all the way through is not insurmountable, as you can glue the plug back in with beeswax, but it can be troublesome due to edge effects around the center hole that are quite difficult to correct during polishing and figuring.

    The other reason is that I found out only after starting work on my 8" RC that one can take ten-inch scopes on a plane. The ratio of luminosity scales as the square of the diameter, in this case 100/64 or 1.5625. My actual experience of having built both 8" and 10" Newtonians is that the extra effort required for the larger mirror is well worth it.
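    That arithmetic, spelled out (light gathered goes with aperture area, hence with diameter squared):

    ```python
    # Light-gathering ratio between two apertures: luminosity scales with
    # collecting area, i.e. with the square of the mirror diameter.
    def light_gain(d_new, d_old):
        return (d_new / d_old) ** 2

    print(light_gain(10, 8))   # 1.5625 -> ~56% more light than an 8" mirror
    print(light_gain(12, 6))   # 4.0    -> a 12" gathers 4x the light of a 6"
    ```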

    DO NOT ATTEMPT AN RC FOR YOUR FIRST SCOPE! YOU HAVE BEEN WARNED.

    Don't attempt a Cassegrain either. It is a huge PITA to test RC and Cass secondary mirrors, so these are quite advanced projects. Make an 8" F/8 Newtonian - that is, one with a 64", or five-foot-four-inch, focal length.

    Such a long tube won't fit in most of today's cars and so is mostly suitable only for backyard use. Make a short ten-inch for your second scope. You really don't want really low F-Ratios for the larger mirrors - the aberrations don't scale the right way. Really the shortest you should make a ten-inch would be 50", or 4' 2", which would fit in most cars if you have a wagon, a seat that folds down to expose the trunk, or a cargo bin on the roof.

    You want the long F-Ratio for your first scope because the final figured curve must be a Paraboloid Of Revolution. The Foucault Test works really really well for figuring spheres but again is a huge PITA for any other kinds of curves. But the difference between a sphere and the required Paraboloid is smaller and smaller as the F-Ratio increases. It was once quite common for beginners to start with a 6" mirror with a 96" focal length, in which case there is no discernible difference between a sphere and an optically-perfect paraboloid, but again in my own experience the 6" doesn't capture nearly enough light and so is quite disappointing for nebulae.
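    A rough check of the "long F-ratio is easier" claim: to fourth order, a sphere and a paraboloid of the same vertex radius differ by about r^4 / (8 R^3) at the mirror's edge, where r is the semi-diameter and R = 2 x focal length is the radius of curvature. This is a sketch only; real figuring works against a best-fit sphere, which shrinks the number further:

    ```python
    # Approximate edge-of-mirror glass difference between a sphere and a
    # paraboloid sharing the same vertex radius of curvature: r**4 / (8 * R**3).
    # Sketch only: figuring against a best-fit sphere reduces this further.
    def sphere_parabola_gap_nm(diameter_in, f_ratio):
        r = diameter_in / 2.0                   # semi-diameter, inches
        R = 2.0 * f_ratio * diameter_in         # radius of curvature = 2 * focal length
        gap_inches = r ** 4 / (8.0 * R ** 3)
        return gap_inches * 25.4e6              # inches -> nanometers

    print(f'6\" f/16: {sphere_parabola_gap_nm(6, 16):.0f} nm')   # tiny vs ~550 nm light
    print(f'8\" f/8:  {sphere_parabola_gap_nm(8, 8):.0f} nm')
    print(f'10\" f/5: {sphere_parabola_gap_nm(10, 5):.0f} nm')
    ```

    The 6" f/16 comes out around 36 nm, well under a wavelength of light, while the faster, larger mirrors deviate by hundreds to thousands of nanometers - consistent with the beginner advice above.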

    The 6" works well for Venus, Jupiter and Saturn but not for Mars. For Mars you really want a 12" or larger mirror and a long focal length with a clock-driven equatorial mount.

    Mirror grinding kits do NOT come with instructions. You need a book [willbell.com]. I recommend you purchase Jean Texereau's How To Make A Telescope [willbell.com] 2nd Edition.

    The 1st worked out real well for me. I used Texereau's recipe for chemically silvering my 6" mirror, but again, silvering is a huge PITA: it requires cleaning the glass with Fuming Nitric Acid, which would get the FBI breaking down your door if you don't obtain it the right way, and the very slightest trace of impurities will totally ruin your coat.

    Instead get your mirror Vacuum Aluminized for $50 or $75. There's lots of shops that do that, many of them operated by other amateurs. (Aluminizing again is an advanced project).

    I was supposed to be recording Michael David Crawford LIVE! On Broadway [soggywizards.com] (and Morrison, Portland, Oregon) The Rough Draft right now. It's working well so far so I'll stop writing now.

    --
    Yes I Have No Bananas. [gofundme.com]
  • (Score: 3, Insightful) by acid andy on Tuesday August 21 2018, @02:05PM

    by acid andy (1683) on Tuesday August 21 2018, @02:05PM (#724161) Homepage Journal

    I love that they're pressing ahead with attempts at real-time ray-tracing. A few years ago I figured it must be on the horizon at some point. Then low res tech demos started popping up. Then higher res stuff with artifacts, and the prospect of maybe cleaning it up with post processing. Now that the hardware is starting to be built, it's only going to get better and better as time goes on. Over the last 10 years or so I felt like gaming and graphics tech were sort of plateauing a bit but the recent advancements with VR and now this are starting to get me a bit more excited again. It's just a shame that most of the things that use it will probably constantly phone home to their servers and maybe be infested with microtransactions. Time to write more open source games!

    --
    Master of the science of the art of the science of art.
  • (Score: 0) by Anonymous Coward on Tuesday August 21 2018, @03:03PM (2 children)

    by Anonymous Coward on Tuesday August 21 2018, @03:03PM (#724194)

    At least not to me

    • (Score: 2) by acid andy on Tuesday August 21 2018, @03:25PM (1 child)

      by acid andy (1683) on Tuesday August 21 2018, @03:25PM (#724211) Homepage Journal

      Yeah, every time I go shopping for a graphics card I have to look up lists of all the model numbers because they keep changing the numbering schemes so they're hard to remember and compare. A high end card from a few years back might perform better than the latest budget one and laptop cards complicate it even further. Maybe it's my own fault for not constantly following reviews of the latest hardware like I used to, but I save a lot more money this way! The rate of progress slowed so much, it got boring anyway.

      --
      Master of the science of the art of the science of art.
      • (Score: 0) by Anonymous Coward on Wednesday August 22 2018, @04:12AM

        by Anonymous Coward on Wednesday August 22 2018, @04:12AM (#724543)

        I resort to using those comparison sites and only having 3 or 4 to look at. I gave up last time. It was easier to just buy a laptop.
