When NVIDIA put their plans for their consumer Turing video cards into motion, the company bet big, and in more ways than one. In the first sense, NVIDIA dedicated whole logical blocks to brand-new graphics and compute features – ray tracing and tensor core compute – and they would need to sell developers and consumers alike on the value of these features, something that is no easy task. In the second sense however, NVIDIA also bet big on GPU die size: these new features would take up a lot of space on the 12nm FinFET process they'd be using.
The end result is that all of the Turing chips we've seen thus far, from TU102 to TU106, are monsters in size; even TU106 is 445mm², never mind the flagship TU102. And while the full economic consequences that go with that decision are NVIDIA's to bear, for the first year or so of Turing's life, all of that die space that is driving up NVIDIA's costs isn't going to contribute to improving NVIDIA's performance in traditional games; it's a value-added feature. That's all workable for NVIDIA in the high-end market, where they are unchallenged and can essentially dictate video card prices, but it's another matter entirely once you start approaching the mid-range, where the AMD competition is alive and well.
Consequently, in preparing for their cheaper, sub-$300 Turing cards, NVIDIA had to make a decision: do they keep the RT and tensor cores in order to offer these features across the line – at a literal cost to both consumers and NVIDIA – or do they drop these features in order to make a leaner, more competitive chip? As it turns out, NVIDIA has opted for the latter, producing a new Turing GPU that is leaner and meaner than anything that's come before it, but also very different from its predecessors for this reason.
That GPU is TU116, and it's part of what will undoubtedly become a new sub-family of Turing GPUs for NVIDIA as the company starts rolling out Turing into the lower half of the video card market. Kicking things off in turn for this new GPU is NVIDIA's latest video card, the GeForce GTX 1660 Ti. Launching today at $279, it's destined to replace NVIDIA's GTX 1060 6GB in the market and is NVIDIA's new challenger for the mainstream video card market.
Compared to the RTX 2060 Founders Edition, the GTX 1660 Ti has fewer CUDA[*] cores, a lower memory clock, and the same amount of VRAM (6 GB), but higher core/boost clocks and a lower TDP (120 W vs. 160 W). The GTX 1660 Ti delivers roughly 85% of the RTX 2060's performance at 80% of its MSRP ($279 vs. $349).
Nvidia may also release a GTX 1660, GTX 1650, and possibly a GTX 1680 (a non-RTX flagship).
[*] CUDA: "When it was first introduced by Nvidia, the name CUDA was an acronym for Compute Unified Device Architecture, but Nvidia subsequently dropped the use of the acronym."
Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Related Stories
NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October
NVIDIA's Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch their next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA's GeForce RTX 20 series, which is succeeding the current Pascal-powered GeForce GTX 10 series. Based on NVIDIA's new Turing GPU architecture and built on TSMC's 12nm "FFN" process, NVIDIA has lofty goals, looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA's most important GPU architecture since 2006's Tesla GPU architecture (G80 GPU), and from a features standpoint it's clear that he's not overstating matters.
[...] So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA's RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.
The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren't fully informed on at this time, but which serves as a dedicated ray tracing processor. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the BVH being a very popular data structure for organizing scene geometry for ray tracing.
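For a sense of the math an RT core offloads, below is a minimal CPU-side sketch of the classic Möller-Trumbore ray-triangle intersection test, the same kind of check the RT cores perform in fixed-function hardware billions of times per second. This is purely illustrative; NVIDIA has not disclosed how the RT cores actually implement the test.

```cpp
// Minimal Möller-Trumbore ray-triangle intersection, illustrating the kind
// of test an RT core accelerates in hardware. Illustrative only; NVIDIA has
// not disclosed the RT core's actual implementation.
#include <cmath>
#include <cstdio>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y,
                                             a.z*b.x - a.x*b.z,
                                             a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and writes the hit distance t if the ray (orig + t*dir)
// intersects the triangle (v0, v1, v2).
bool rayTriangle(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float& t) {
    const float EPS = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < EPS) return false;   // ray is parallel to triangle
    float invDet = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * invDet;             // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * invDet;           // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    t = dot(e2, q) * invDet;                  // distance along the ray
    return t > EPS;
}

int main() {
    Vec3 orig{0, 0, -1}, dir{0, 0, 1};
    Vec3 v0{-1, -1, 0}, v1{1, -1, 0}, v2{0, 1, 0};
    float t;
    if (rayTriangle(orig, dir, v0, v1, v2, t))
        std::printf("hit at t = %f\n", t);    // expected: t = 1.0
    return 0;
}
```

In a real renderer this test only runs on the handful of triangles that survive BVH traversal, which is why the RT cores accelerate both operations together.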
NVIDIA states that the fastest GeForce RTX part can cast 10 billion (giga) rays per second, a 25x improvement in ray tracing performance over unaccelerated Pascal (which works out to roughly 0.4 gigarays per second on Pascal).
Nvidia has confirmed that the machine learning capabilities (tensor cores) of the GPU will be used to smooth out problems with ray-tracing. Real-time AI denoising (4m17s) will be used to reduce the number of samples per pixel needed to achieve photorealism.
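NVIDIA's production denoiser is a trained neural network running on the tensor cores, and its internals aren't public. As a toy stand-in to illustrate the underlying idea of trading samples per pixel for post-filtering, here's a sketch that smooths a noisy 1-sample-per-pixel estimate with a plain box filter; real denoisers are edge-aware, temporally stable, and learned, but the principle is the same.

```cpp
// Toy illustration of denoising a low-sample-count Monte Carlo image.
// NVIDIA's real-time denoiser is a neural network running on the tensor
// cores; this box filter only demonstrates the principle of recovering a
// clean image from very few samples per pixel.
#include <cstdio>
#include <cstdlib>
#include <vector>

int main() {
    const int W = 8, H = 8;
    const float truth = 0.5f;          // "ground truth" radiance everywhere
    std::vector<float> noisy(W * H);

    // Simulate a 1-sample-per-pixel Monte Carlo estimate: truth plus noise.
    std::srand(42);
    for (float& px : noisy)
        px = truth + (std::rand() / (float)RAND_MAX - 0.5f);

    // "Denoise" with a 3x3 box filter. Averaging up to 9 independent noisy
    // neighbors cuts the noise variance roughly 9x, similar in effect to
    // taking 9x more samples per pixel.
    std::vector<float> clean(W * H);
    for (int y = 0; y < H; y++)
        for (int x = 0; x < W; x++) {
            float sum = 0.0f; int n = 0;
            for (int dy = -1; dy <= 1; dy++)
                for (int dx = -1; dx <= 1; dx++) {
                    int nx = x + dx, ny = y + dy;
                    if (nx >= 0 && nx < W && ny >= 0 && ny < H) {
                        sum += noisy[ny * W + nx];
                        n++;
                    }
                }
            clean[y * W + x] = sum / n;
        }

    std::printf("noisy[27]=%.3f  denoised[27]=%.3f  truth=%.3f\n",
                noisy[27], clean[27], truth);
    return 0;
}
```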
Previously: Microsoft Announces Directx 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Related: Real-time Ray-tracing at GDC 2014
The NVIDIA GeForce RTX 2060 6GB Founders Edition Review: Not Quite Mainstream
In the closing months of 2018, NVIDIA finally released the long-awaited successor to the Pascal-based GeForce GTX 10 series: the GeForce RTX 20 series of video cards. Built on their new Turing architecture, these GPUs were the biggest update to NVIDIA's GPU architecture in at least half a decade, leaving almost no part of NVIDIA's architecture untouched.
So far we've looked at the GeForce RTX 2080 Ti, RTX 2080, and RTX 2070 – and along with the highlights of Turing, we've seen that the GeForce RTX 20 series is designed on a hardware and software level to enable realtime raytracing and other new specialized features for games. While the x70 part is traditionally the value-oriented enthusiast offering, NVIDIA's higher price tags this time around meant that even the RTX 2070 was $500 and not especially value-oriented. Instead, it would seem that the role of the enthusiast value offering is going to fall to the next member in line of the GeForce RTX 20 family. And that part is coming next week.
Launching next Tuesday, January 15th is the fourth member of the GeForce RTX family: the GeForce RTX 2060 (6GB). Based on a cut-down version of the same TU106 GPU that's in the RTX 2070, this new part shaves off some of the RTX 2070's performance, but also a good deal of its price tag in the process.
Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Crytek has showcased a new real-time raytracing demo which is said to run on most mainstream, contemporary GPUs from NVIDIA and AMD. The minds behind one of the most visually impressive FPS franchises, Crysis, have released their new "Noir" demo, which was run on an AMD Radeon RX Vega graphics card, showing that raytracing is possible even without an NVIDIA RTX graphics card.
[...] Crytek states that the experimental ray tracing feature based on CRYENGINE's Total Illumination used to create the demo is both API and hardware agnostic, enabling ray tracing to run on most mainstream, contemporary AMD and NVIDIA GPUs. However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.
Related: Real-time Ray-tracing at GDC 2014
Microsoft Announces Directx 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Q2VKPT: An Open Source Game Demo with Real-Time Path Tracing
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing
Nvidia Ditches the Ray-Tracing Cores with Lower-Priced GTX 1660 Ti
NVIDIA Releases DirectX Raytracing Driver for GTX Cards; Posts Trio of DXR Demos
Last month at GDC 2019, NVIDIA revealed that they would finally be enabling public support for DirectX Raytracing on non-RTX cards. Long baked into the DXR specification itself – which is designed to encourage ray tracing hardware development while also allowing it to be implemented via traditional compute shaders – the addition of DXR support in cards without hardware support for it is a small but important step in the deployment of the API and its underlying technology. At the time, NVIDIA said that this driver would arrive in April, and this morning the company is releasing it.
As we covered in last month's initial announcement of the driver, this has been something of a long time coming for NVIDIA. The initial development of DXR and the first DXR demos (including the Star Wars Reflections demo) were all handled on cards without hardware RT acceleration; in particular NVIDIA Volta-based video cards. Microsoft used their own fallback layer for a time, but for the public release it was going to be up to GPU manufacturers to provide support, including their own fallback layer. So we have been expecting the release of this driver in some form for quite some time.
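For developers, whether a card-and-driver combination exposes DXR at all is reported through Direct3D 12's feature-support query; with this driver installed, GTX 10-series cards should begin reporting raytracing tier 1.0 even though the rays are ultimately traced on compute shaders. A minimal sketch of that check (Windows only, and it assumes a Windows 10 SDK new enough to define the OPTIONS5 structures):

```cpp
// Minimal sketch: ask a D3D12 device whether it (and its driver) exposes
// DirectX Raytracing. With NVIDIA's new driver, GTX 10-series cards report
// tier 1.0 support even though rays are traced via compute shaders.
#include <windows.h>
#include <d3d12.h>
#include <cstdio>
#pragma comment(lib, "d3d12.lib")

bool SupportsDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;  // older runtime: DXR not available at all
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}

int main() {
    // Create a device on the default adapter; error handling kept minimal.
    ID3D12Device* device = nullptr;
    if (FAILED(D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0,
                                 IID_PPV_ARGS(&device)))) {
        std::puts("no D3D12 device");
        return 1;
    }
    std::puts(SupportsDXR(device) ? "DXR supported" : "DXR not supported");
    device->Release();
    return 0;
}
```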
Of course, the elephant in the room in enabling DXR on cards without RT hardware is what it will do for performance – or perhaps the lack thereof.
Also at Wccftech.
See also: NVIDIA shows how much ray-tracing sucks on older GPUs
[For] stuff that really adds realism, like advanced shadows, global illumination and ambient occlusion, the RTX 2080 Ti outperforms the 1080 Ti by up to a factor of six.
To cite some specific examples, Port Royal will run on the RTX 2080 Ti at 53.3 fps at 2,560 x 1,440 with advanced reflections and shadows, along with DLSS anti-aliasing, turned on. The GTX 1080, on the other hand, will run at just 9.2 fps with those features enabled and won't give you any DLSS at all. That effectively makes the feature useless on those cards for that game. With basic reflections on Battlefield V, on the other hand, you'll see 30 fps on the 1080 Ti compared to 68.3 on the 2080 Ti.
Previously:
Microsoft Announces Directx 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Q2VKPT: An Open Source Game Demo with Real-Time Path Tracing
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing
Nvidia Ditches the Ray-Tracing Cores with Lower-Priced GTX 1660 Ti
Crytek Demos Real-Time Raytracing for AMD and Non-RTX Nvidia GPUs
(Score: 4, Insightful) by Entropy on Saturday February 23 2019, @11:26PM (5 children)
There's a huge market for a 1080 replacement, especially the 1080Ti. RTX on the non-$1000 cards isn't fast enough to actually use, so it's just taking up space and driving up costs. A ton of people have been waiting for this announcement.
(Score: 2) by takyon on Saturday February 23 2019, @11:55PM (2 children)
Too bad AMD's response has been slow and anemic. Hopefully they get Navi out by October and it puts some pressure on Nvidia in some segment.
Intel could join the market with discrete gaming GPUs in 2020 or 2021.
(Score: 3, Informative) by Entropy on Sunday February 24 2019, @12:40AM (1 child)
I massively prefer buying AMD. I'd imagine they will drop their prices in response, and of course they have new things in their pipeline. The new Nvidia RTX cards haven't exactly been well received.
(Score: 1, Interesting) by Anonymous Coward on Sunday February 24 2019, @01:33PM
Ray tracing and tensor stuff doesn't currently make sense outside of the Quadro line. Reading the use cases [nvidia.com], they're already a must-have in production and VFX pipelines. From there, CAD/CAM and image editing will use the tensor cores, and eventually frame rates and resolutions will be good enough and affordable enough for RT raytracing in gaming.
(Score: 2) by Hyperturtle on Sunday February 24 2019, @03:00PM (1 child)
No offense, but I don't understand what you are saying. Why would you think this is a viable replacement for the 1080 or 1080ti cards? Those cards are superior to the 1660ti in almost every way.
I have two of them; it'd be silly to replace them with two 1660tis. The 1660ti is, as the article says, 85% of the performance of the 2060, and the 2060 has 62% of the "non-RTX" performance of the 1080ti... and about 65% of the performance of a 1080.
That puts this card at just over half the performance of the 1080ti (85% of 62% ≈ 53%), with just over half the ram (6GB versus 11GB).
Here's a site that has a direct comparison between the 1660ti and the 1080ti.
https://gpu.userbenchmark.com/Compare/Nvidia-GTX-1660-Ti-vs-Nvidia-GTX-1080-Ti/4037vs3918 [userbenchmark.com]
I don't know anyone with 1080s or 1080tis, using them for any purpose, that has been waiting for this announcement. Or more properly, what this announcement contains. They wanted a new 1080 series type of card without the RTX; this is not that. They will want to see the 1680 and 1680tis, presuming that's what it is called, when those come out. If they are priced the same as the original 1080 and 1080tis, people may upgrade--provided they don't have to replace their waterblocks and other custom cooling to do it.
Downgrading to this is a real step backwards; I myself have two 1080tis on separate loops using something I built myself. I replaced all the thermal pads and used liquid metal TIM, etc. First time I ever did that, and it's not something I expected to repeat yearly. I know many performance addicts that are hardware hackers; you can't OC a 1660 to match even a non-OC'd 1080ti. And you can't match an OC'd 1080ti without going with the "titan" class GPU or jumping to 2080s and even tweaking those. This is just a factory-overtaxed clunker with a huge heatsink to make up for the lack of proper design due to the fast turnaround time, intended to fill a void they expected their more expensive cards would fill.
Perhaps worst of all for the performance crowd, those 20x0 series cards are not water block compatible with the previous generation, even with modding--the gpu chips are too large, so you can't dremel a block to fit when there isn't enough surface area to begin with. Water blocks for gpus do not come cheap unless they are already ancient and on clearance. It's expensive to chase the dragon like that, and this isn't a dragon anyone wants to catch, except for people wanting to upgrade from their 750tis or 980tis who may have deliberately missed the 1080 series boat, hoping the next generation would be a value worth upgrading to. This card indicates the 20x0 series wasn't a value--but no one will downgrade to this to ostensibly save space and costs. These 1660tis are triple wides! Have you looked at the photos? With a waterblock, the 1080 and 1080ti are single slot cards; this would be a huge, expensive step backwards, requiring a lot of modification to even fit, and a lot of performance loss just to have the newest.
Anyone with these cards installed already isn't wishing for a shorter, fatter card to fill two more slots--they already have made the space to fit what they have.
This card might be good for people that haven't upgraded or are just looking for a new card that isn't a replacement, but this is no upgrade for people already possessing any 1080 series card. Maybe I do not frequent the same places you frequent, but this card is most likely a reactionary offering to prevent the loss of sales to AMD, aimed at people that don't, or won't, spend money on higher end cards, and who ultimately were unlikely to ever get one.
(Score: 2) by bob_super on Monday February 25 2019, @06:27AM
You could have just started with the price, to point out how it's not the same segment.