
SoylentNews is people

posted by martyb on Tuesday January 08 2019, @09:16AM
from the picture-this dept.

The NVIDIA GeForce RTX 2060 6GB Founders Edition Review: Not Quite Mainstream

In the closing months of 2018, NVIDIA finally released the long-awaited successor to the Pascal-based GeForce GTX 10 series: the GeForce RTX 20 series of video cards. Built on their new Turing architecture, these GPUs were the biggest update to NVIDIA's GPU architecture in at least half a decade, leaving almost no part of NVIDIA's architecture untouched.

So far we've looked at the GeForce RTX 2080 Ti, RTX 2080, and RTX 2070 – and along with the highlights of Turing, we've seen that the GeForce RTX 20 series is designed on a hardware and software level to enable realtime raytracing and other new specialized features for games. While the RTX 2070 is traditionally the value-oriented enthusiast offering, NVIDIA's higher price tags this time around meant that even this part was $500 and not especially value-oriented. Instead, it would seem that the role of the enthusiast value offering is going to fall to the next member in line of the GeForce RTX 20 family. And that part is coming next week.

Launching next Tuesday, January 15th is the 4th member of the GeForce RTX family: the GeForce RTX 2060 (6GB). Based on a cut-down version of the same TU106 GPU that's in the RTX 2070, this new part shaves off some of the RTX 2070's performance, but also a good deal of its price tag in the process.

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance

Original Submission

Related Stories

Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance 23 comments

NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October

NVIDIA's Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch their next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA's GeForce RTX 20 series, which is succeeding the current Pascal-powered GeForce GTX 10 series. Based on NVIDIA's new Turing GPU architecture and built on TSMC's 12nm "FFN" process, NVIDIA has lofty goals, looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA's most important GPU architecture since 2006's Tesla GPU architecture (G80 GPU), and from a features standpoint it's clear that he's not overstating matters.

[...] So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA's RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.

The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren't fully informed on at this time, but serve as dedicated ray tracing processors. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.
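To make concrete what those RT cores accelerate, here is a minimal software sketch (illustrative Python, not NVIDIA's actual implementation) of the ray/bounding-box "slab" test performed at every node while traversing a BVH:

```python
# Sketch of the ray/axis-aligned-bounding-box ("slab") test at the heart
# of BVH traversal -- the kind of check an RT core runs in hardware.
def ray_aabb_intersect(origin, inv_dir, box_min, box_max):
    """Return True if a ray hits an axis-aligned bounding box.

    origin: ray origin (x, y, z); inv_dir: 1/direction per axis
    (precomputed once per ray, as real traversal code does).
    """
    t_near, t_far = float("-inf"), float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t1, t2 = (lo - o) * inv, (hi - o) * inv
        # Track the latest entry and earliest exit across all three slabs.
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    # Hit only if the entry interval is non-empty and in front of the ray.
    return t_near <= t_far and t_far >= 0.0

# A ray along +x from the origin hits a box spanning x in [1, 2]:
print(ray_aabb_intersect((0, 0, 0), (1.0, float("inf"), float("inf")),
                         (1, -1, -1), (2, 1, 1)))  # True
```

In software each such test costs dozens of instructions and a full scene needs one per BVH node visited, per ray; doing this (plus the ray-triangle test at the leaves) in fixed-function hardware is what lets the RT cores claim billions of rays per second.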

NVIDIA is stating that the fastest GeForce RTX part can cast 10 Billion (Giga) rays per second, which compared to the unaccelerated Pascal is a 25x improvement in ray tracing performance.
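As a back-of-the-envelope check of what that headline figure buys (assuming, generously, that every ray is spent on a visible pixel):

```python
# Rough per-pixel ray budget implied by the quoted 10 Gigarays/s figure.
GIGARAYS_PER_SEC = 10e9

for name, w, h in [("1080p", 1920, 1080), ("4K", 3840, 2160)]:
    for fps in (30, 60):
        rays_per_pixel = GIGARAYS_PER_SEC / (w * h * fps)
        print(f"{name} @ {fps} fps: ~{rays_per_pixel:.0f} rays/pixel/frame")
```

Roughly 20 rays per pixel per frame at 4K60 is far below the hundreds-to-thousands of samples offline path tracers use, which is exactly why the denoising mentioned next matters.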

Nvidia has confirmed that the machine learning capabilities (tensor cores) of the GPU will be used to smooth out problems with ray-tracing. Real-time AI denoising (4m17s) will be used to reduce the number of samples per pixel needed to achieve photorealism.
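The sample-count problem denoising attacks is easy to demonstrate: Monte Carlo noise shrinks only as 1/√n, so cutting samples per pixel and cleaning up the residual noise afterwards is a big win. A toy sketch (illustrative only, not NVIDIA's pipeline):

```python
import random
import statistics

# Toy Monte Carlo renderer: a "pixel" is the average of noisy samples.
# The noise (standard error) falls like 1/sqrt(n), so each halving of
# noise costs 4x the samples -- hence the appeal of AI denoising.
random.seed(42)

def render_pixel(n_samples):
    # Uniform noise standing in for stochastic lighting samples.
    return statistics.fmean(random.random() for _ in range(n_samples))

for n in (1, 16, 256):
    estimates = [render_pixel(n) for _ in range(2000)]
    print(f"{n:4d} samples/pixel -> noise ~ {statistics.stdev(estimates):.3f}")
```

Going from 1 to 256 samples cuts the noise about 16-fold; a denoiser aims to get a similar-looking result from far fewer samples.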

Previously: Microsoft Announces Directx 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations

Related: Real-time Ray-tracing at GDC 2014


AMD and Nvidia's Latest GPUs Are Expensive and Unappealing 25 comments

AMD, Nvidia Have Launched the Least-Appealing GPU Upgrades in History

Yesterday, AMD launched the Radeon VII, the first 7nm GPU. The card is intended to compete with Nvidia's RTX family of Turing-class GPUs, and it does, broadly matching the RTX 2080. It also matches the RTX 2080 on price, at $700. Because this card began life as a professional GPU intended for scientific computing and AI/ML workloads, it's unlikely that we'll see lower-end variants. That section of AMD's product stack will be filled by 7nm Navi, which arrives later this year.

Navi will be AMD's first new 7nm GPU architecture and will offer a chance to hit 'reset' on what has been, to date, the least compelling suite of GPU launches AMD and Nvidia have ever collectively kicked out the door. Nvidia has relentlessly moved its stack pricing higher while holding performance per dollar mostly constant. With the RTX 2060 and GTX 1070 Ti fairly evenly matched across a wide suite of games, the question of whether the RTX 2060 is better priced largely hinges on whether you stick to formal launch pricing for both cards or check historical data for actual price shifts.

Such comparisons are increasingly incidental, given that Pascal GPU prices are rising and cards are getting harder to find, but they aren't meaningless for people who either bought a Pascal GPU already or are willing to consider a used card. If you're an Nvidia fan already sitting on top of a high-end Pascal card, Turing doesn't offer you a great deal of performance improvement.

AMD has not covered itself in glory, either. The Radeon VII is, at least, unreservedly faster than the Vega 64. There's no equivalent last-generation GPU in AMD's stack to match it. But it also duplicates the Vega 64's overall power and noise profile, limiting the overall appeal, and it matches the RTX 2080's bad price. A 1.75x increase in price for a 1.32x increase in 4K performance isn't a great ratio even by the standards of ultra-high-end GPUs, where performance typically comes with a price penalty.
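Making that ratio explicit, using only the article's own figures:

```python
# The quoted ratios: ~1.75x the price for ~1.32x the 4K performance.
price_ratio, perf_ratio = 1.75, 1.32

# Relative performance-per-dollar versus the older card.
value_ratio = perf_ratio / price_ratio
print(f"perf-per-dollar vs. the older card: {value_ratio:.2f}x")  # ~0.75x
```

In other words, the Radeon VII delivers about 25% less performance per dollar than the card it nominally replaces.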

Rumors and leaks have suggested that Nvidia will release a Turing-based GPU called the GTX 1660 Ti (which has also been referred to as "1160"), with a lower price but missing the dedicated ray-tracing cores of the RTX 2000-series. AMD is expected to release "7nm" Navi GPUs sometime during 2019.

Radeon VII launch coverage also at AnandTech, Tom's Hardware.

Related: AMD Returns to the Datacenter, Set to Launch "7nm" Radeon Instinct GPUs for Machine Learning in 2018
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
AMD Announces "7nm" Vega GPUs for the Enterprise Market
Nvidia Announces RTX 2060 GPU
AMD Announces Radeon VII GPU, Teases Third-Generation Ryzen CPU
AMD Responds to Radeon VII Short Supply Rumors


Nvidia Ditches the Ray-Tracing Cores with Lower-Priced GTX 1660 Ti 6 comments

The NVIDIA GeForce GTX 1660 Ti Review, Feat. EVGA XC GAMING: Turing Sheds RTX for the Mainstream Market

When NVIDIA put their plans for their consumer Turing video cards into motion, the company bet big, and in more ways than one. In the first sense, NVIDIA dedicated whole logical blocks to brand-new graphics and compute features – ray tracing and tensor core compute – and they would need to sell developers and consumers alike on the value of these features, something that is no easy task. In the second sense however, NVIDIA also bet big on GPU die size: these new features would take up a lot of space on the 12nm FinFET process they'd be using.

The end result is that all of the Turing chips we've seen thus far, from TU102 to TU106, are monsters in size; even TU106 is 445mm2, never mind the flagship TU102. And while the full economic consequences that go with that decision are NVIDIA's to bear, for the first year or so of Turing's life, all of that die space that is driving up NVIDIA's costs isn't going to contribute to improving NVIDIA's performance in traditional games; it's a value-added feature. Which is all workable for NVIDIA in the high-end market where they are unchallenged and can essentially dictate video card prices, but it's another matter entirely once you start approaching the mid-range, where the AMD competition is alive and well.

Consequently, in preparing for their cheaper, sub-$300 Turing cards, NVIDIA had to make a decision: do they keep the RT and tensor cores in order to offer these features across the line – at a literal cost to both consumers and NVIDIA – or do they drop these features in order to make a leaner, more competitive chip? As it turns out, NVIDIA has opted for the latter, producing a new Turing GPU that is leaner and meaner than anything that's come before it, but also very different from its predecessors for this reason.

That GPU is TU116, and it's part of what will undoubtedly become a new sub-family of Turing GPUs for NVIDIA as the company starts rolling out Turing into the lower half of the video card market. Kicking things off in turn for this new GPU is NVIDIA's latest video card, the GeForce GTX 1660 Ti. Launching today at $279, it's destined to replace NVIDIA's GTX 1060 6GB in the market and is NVIDIA's new challenger for the mainstream video card market.

Compared to the RTX 2060 Founders Edition, the GTX 1660 Ti has fewer CUDA cores, a lower memory clock, and the same amount of VRAM (6 GB), but it has higher core/boost clocks and a lower TDP (120 W vs. 160 W). The GTX 1660 Ti has roughly 85% of the performance of the RTX 2060, at 80% of the MSRP ($279 vs. $349).
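From those quoted figures, the relative value works out as follows (a quick sketch using only the numbers above):

```python
# Quoted numbers: GTX 1660 Ti delivers ~85% of RTX 2060 performance
# at 80% of its MSRP ($279 vs. $349).
perf_ratio = 0.85
price_1660ti, price_2060 = 279, 349

price_ratio = price_1660ti / price_2060
print(f"price ratio: {price_ratio:.2f}")  # ~0.80

rel_value = perf_ratio / price_ratio
print(f"perf-per-dollar vs. RTX 2060: {rel_value:.2f}x")  # ~1.06x
```

So the GTX 1660 Ti comes out about 6% ahead on performance per dollar, before factoring in the RTX 2060's extra RT and tensor hardware.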

Nvidia Refreshes RTX 2000-Series GPUs With "Super" Branding 9 comments

The GeForce RTX 2070 Super & RTX 2060 Super Review: Smaller Numbers, Bigger Performance

NVIDIA is launching a mid-generation kicker for their mid-to-high-end video card lineup in the form of their GeForce RTX 20 series Super cards. Based on the same family of Turing GPUs as the original GeForce RTX 20 series cards, these new Super cards – all suffixed Super, appropriately enough – come with new configurations and new clockspeeds. They are, essentially, NVIDIA's 2019 card family for the $399+ video card market.

When they are released on July 9th, the GeForce RTX 20 series Super cards are going to be sharing store shelves with the rest of the GeForce RTX 20 series cards. Some cards like the RTX 2080 and RTX 2070 are set to go away, while other cards like the RTX 2080 Ti and RTX 2060 will remain on the market as-is. In practice, it's probably best to think of the new cards as NVIDIA executing either a price cut or a spec bump – depending on whether you see the glass as half-empty or half-full – all without meaningfully changing their price tiers.

In terms of performance, the RTX 2060 and RTX 2070 Super cards aren't going to bring anything new to the table. In fact if we're being blunt, the RTX 2070 Super is basically a slightly slower RTX 2080, and the RTX 2060 Super may as well be the RTX 2070. So instead, what has changed is the price that these performance levels are available at, and ultimately the performance-per-dollar ratios in parts of NVIDIA's lineup. The performance of NVIDIA's former $699 and $499 cards will now be available for $499 and $399, respectively. This leaves the vanilla RTX 2060 to hold the line at $349, and the upcoming RTX 2080 Super to fill the $699 spot. Which means if you're in the $400-$700 market for video cards, your options are about to get noticeably faster.

Also at Tom's Hardware, The Verge, and Ars Technica.

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Nvidia Announces RTX 2060 GPU
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing

Related: AMD and Intel at Computex 2019: First Ryzen 3000-Series CPUs and Navi GPU Announced
AMD Details Three Navi GPUs and First Mainstream 16-Core CPU


This discussion has been archived. No new comments can be posted.
  • (Score: 1, Interesting) by Anonymous Coward on Tuesday January 08 2019, @11:07AM (7 children)


    I mean I'm all for more GPU power, but at this point in time, the GPU's RAM is a lot more important to me than the core itself, because most software outstrips the RAM at a much higher rate than it outstrips the actual GPU core.

    I'm still running a 2010 era HD4770 512M GDDR5 and a 2012-2013 era GT720 2GB DDR3 (64 BIT!) GPU. The former still has better DP FP performance than most newer cards (but only had emulated OCL 1.0 support... still enough to earn some BTC over the winter of 2012) and the latter, despite DDR3, outperforms the former card in anything that doesn't require a huge amount of texture throughput. The latter card will even run most modern games as long as you use very low, low or medium textures (depending on the game) and turn off AA/Anisotropic filtering. Some games will even run 30-60 fps with those features on, being more cpu than gpu bound. And these are games run at 720-1080p.

If you move up to ultra-high-end gaming, then you generally want ultra graphics at 1080-2160p and a minimum of 60Hz, and for FPS gamers, 90-240Hz. When you start looking at the memory requirements for those, you will want triple buffering plus ultra-scale textures, with little or no streaming to the card that could cause stuttering. To do that you need an SSD, lots of RAM, a nice CPU, and a GPU with the maximum texture memory possible. Given that the current mid/upper-mid AMD cards are 2.5-5 TFLOPS (the 4770 and GT720 I mentioned being ~0.96 and ~0.7 TFLOPS respectively), most games should be memory rather than GPU bound, unless they are doing onboard physics processing (which will increase that RAM requirement even more), making these RTX 20x0 cards seem like a bad buy if you want to keep them for next year's or the year after's games.

That said, if you're doing GPGPU or other fun compute-related apps, they look great as an entry-level offering compared to the Tesla cards. But given the new Nvidia driver licensing, they can't be used for commercial/datacenter apps except virtual currency mining, which makes the extra memory and better FLOPS/$ from AMD look slightly more appealing if you will need to scale out to a lot of GPUs in the future (thanks to the open source GPGPU drivers for AMD hardware that sidestep most of the Nvidia driver licensing issues).

    • (Score: 0) by Anonymous Coward on Tuesday January 08 2019, @11:14AM (1 child)


Having looked up the RTX 2060 specs on Wikipedia: if you're using FP16 tensor processing, the 50 TFLOPS of tensor FP16 and 13 TFLOPS of half-precision FP16 make it a peerless alternative to the AMD cards. But the VRAM limitations of the card are even more likely to become an issue with any sort of machine learning application suitable for 50 TFLOPS of processing.

      • (Score: 0) by Anonymous Coward on Tuesday January 08 2019, @11:22AM


While it doesn't have the 50 TFLOPS of tensor processing or the ray-tracing extensions, it hits all the same numbers as the RTX 2060 for about the same price. It also has the funky signed-firmware DRM like the Nvidia cards, but at least has some open source graphics support, unlike the GM2xx+ Nvidia hardware, despite Nouveau's best efforts.

    • (Score: 2) by takyon on Tuesday January 08 2019, @02:27PM (3 children)


      6 GB was pretty high circa 2015 (GTX 980 Ti). If you aren't running 4K resolution, does it matter? Does it matter even with 4K resolution?

(I would assume that the ray-tracing capabilities on the RTX 2060 are not sufficient for 4K60.)

      • (Score: 1, Interesting) by Anonymous Coward on Tuesday January 08 2019, @03:02PM (2 children)


The RTX 2080 can barely manage the fake raytracing at 30fps in 1080p.

        • (Score: 2) by bob_super on Tuesday January 08 2019, @06:00PM


          Follows the rule of First Gen Of Cool New Feature.

        • (Score: 2) by bzipitidoo on Tuesday January 08 2019, @06:58PM


          Guess that "RTX" stands for "Ray Tracing eXtreme", and that the name is more of a wishful goal and not an accomplishment?

          But then, ray tracing takes an awful lot of computation.

    • (Score: 2) by shortscreen on Tuesday January 08 2019, @08:42PM


      So I take it that whatever you are running is choking on the 512MB of video memory? It's hard to see how those cards could be comparable otherwise. GT720 has only half the memory bus, half the shaders/TMUs/ROPs compared to a GT640, and the latter is roughly tied with a 4770 in old benchmarks. The 4770 is a DX10.1 card so it doesn't run the latest stuff.

  • (Score: 1, Insightful) by Anonymous Coward on Tuesday January 08 2019, @06:59PM (1 child)


    fuck you nvidia!

    • (Score: 1, Funny) by Anonymous Coward on Tuesday January 08 2019, @08:54PM


      Don't you mean MDC?