
posted by martyb on Tuesday January 08 2019, @09:16AM
from the picture-this dept.

The NVIDIA GeForce RTX 2060 6GB Founders Edition Review: Not Quite Mainstream

In the closing months of 2018, NVIDIA finally released the long-awaited successor to the Pascal-based GeForce GTX 10 series: the GeForce RTX 20 series of video cards. Built on their new Turing architecture, these GPUs were the biggest update to NVIDIA's GPU architecture in at least half a decade, leaving almost no part of NVIDIA's architecture untouched.

So far we've looked at the GeForce RTX 2080 Ti, RTX 2080, and RTX 2070 – and along with the highlights of Turing, we've seen that the GeForce RTX 20 series is designed on a hardware and software level to enable realtime raytracing and other new specialized features for games. While the x70 part is traditionally the value-oriented enthusiast offering, NVIDIA's higher price tags this time around meant that even the RTX 2070 was $500 and not especially value-oriented. Instead, it would seem that the role of the enthusiast value offering is going to fall to the next member in line of the GeForce RTX 20 family. And that part is coming next week.

Launching next Tuesday, January 15th is the 4th member of the GeForce RTX family: the GeForce RTX 2060 (6GB). Based on a cut-down version of the same TU106 GPU that's in the RTX 2070, this new part shaves off some of the RTX 2070's performance, but also a good deal of its price tag in the process.

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance


Original Submission

 
  • (Score: 1, Interesting) by Anonymous Coward on Tuesday January 08 2019, @11:07AM (7 children)

    by Anonymous Coward on Tuesday January 08 2019, @11:07AM (#783619)

    I mean, I'm all for more GPU power, but at this point the GPU's RAM is a lot more important to me than the core itself, because most software outgrows a card's RAM much faster than it outgrows the actual GPU core.

    I'm still running a 2010-era HD4770 512MB GDDR5 and a 2012-2013-era GT720 2GB DDR3 (64-bit!) GPU. The former still has better double-precision FP performance than most newer cards (but only had emulated OCL 1.0 support... still enough to earn some BTC over the winter of 2012), and the latter, despite DDR3, outperforms the former in anything that doesn't require a huge amount of texture throughput. The latter card will even run most modern games as long as you use very low, low, or medium textures (depending on the game) and turn off AA/anisotropic filtering. Some games will even run at 30-60 fps with those features on, being more CPU- than GPU-bound. And these are games run at 720-1080p.

    If you move up to ultra-high-end gaming, then you generally want ultra graphics settings and 1080-2160p at a minimum of 60Hz, and for FPS gamers, 90-240Hz. When you start looking at the memory requirements for those, you will want triple buffering plus ultra-scale textures, with little or no streaming to the card that could cause stuttering. To do that you need an SSD, lots of RAM, a nice CPU, and a GPU with the maximum texture memory possible. Given that the current mid and upper-mid AMD cards are 2.5-5 TFLOPS (the 4770 and GT720 I mentioned being ~0.96 and ~0.7 TFLOPS respectively), most games should be memory- rather than GPU-bound, unless they are doing onboard physics processing (which will increase that RAM requirement even more). That makes these RTX 20x0 cards seem like a bad buy if you want to keep them for next year's or the following year's games.
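
    As a rough sanity check on where that memory actually goes, here is a back-of-the-envelope sketch; the buffer formats are assumptions (RGBA8 color buffers plus a 4-byte depth/stencil buffer), not figures from any particular engine:

        # Back-of-the-envelope VRAM math for triple buffering at common
        # resolutions. Assumes 4 bytes/pixel (RGBA8) color buffers plus one
        # 4-byte depth/stencil buffer; real engines add G-buffers, shadow
        # maps, etc. on top of this.

        def framebuffer_mb(width, height, color_buffers=3, bytes_per_pixel=4):
            color = width * height * bytes_per_pixel * color_buffers
            depth = width * height * 4  # one depth/stencil buffer
            return (color + depth) / 2**20

        for label, (w, h) in {"1080p": (1920, 1080), "2160p": (3840, 2160)}.items():
            print(f"{label}: ~{framebuffer_mb(w, h):.0f} MB triple-buffered")

        # 1080p: ~32 MB, 2160p: ~127 MB -- the swap-chain buffers themselves
        # are small; it's the ultra-scale textures that eat gigabytes of VRAM.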

    That said, if you're doing GPGPU or other fun compute-related apps, they look great as an entry-level offering compared to the Tesla cards. But given the new Nvidia driver licensing, they can't be used for commercial/datacenter apps (except virtual currency mining), which makes the extra work of going with AMD for more memory and more FLOPS/$ look slightly more appealing if you will need to scale out to a lot of GPUs in the future (thanks to the open-source GPGPU drivers for AMD hardware, which sidestep most of the Nvidia driver licensing issues).
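
    To make the FLOPS/$ point concrete, a trivial comparison sketch; the prices and FP32 throughput figures below are rough launch-era numbers assumed for illustration, not quotes:

        # Illustrative FLOPS-per-dollar comparison. Prices and FP32 TFLOPS
        # are approximate launch MSRPs/specs, assumed for the example.
        cards = {
            "RTX 2060": {"tflops_fp32": 6.5,  "usd": 349},
            "RX 590":   {"tflops_fp32": 7.1,  "usd": 279},
            "Vega 56":  {"tflops_fp32": 10.5, "usd": 399},
        }
        for name, c in cards.items():
            print(f"{name}: {c['tflops_fp32'] * 1000 / c['usd']:.1f} GFLOPS/$")

        # RTX 2060: ~18.6, RX 590: ~25.4, Vega 56: ~26.3 GFLOPS/$ -- the AMD
        # parts come out ahead on raw FP32 throughput per dollar.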

  • (Score: 0) by Anonymous Coward on Tuesday January 08 2019, @11:14AM (1 child)

    by Anonymous Coward on Tuesday January 08 2019, @11:14AM (#783620)

    Having looked up the RTX 2060 specs on Wikipedia: if you're using FP16 tensor processing, the ~50 TFLOPS of tensor FP16 and ~13 TFLOPS of half-precision FP16 make it a peerless alternative to the AMD cards. But the VRAM limitations of the card are even more likely to become an issue with any sort of machine learning application that could actually use 50 TFLOPS of processing.
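
    To put that VRAM concern in numbers, a quick hypothetical sketch (the parameter counts and the Adam-style optimizer overhead are assumptions, not measurements):

        # Rough estimate of when 6 GB becomes the bottleneck for FP16
        # training. Assumes the weights plus three extra copies of
        # optimizer state (Adam-style) stay resident, before activations.

        def training_gb(params_millions, bytes_per_value=2, extra_copies=3):
            values = params_millions * 1e6 * (1 + extra_copies)
            return values * bytes_per_value / 2**30

        for m in (100, 350, 1000):
            print(f"{m}M params: ~{training_gb(m):.1f} GB before activations")

        # 100M: ~0.7 GB, 350M: ~2.6 GB, 1000M: ~7.5 GB -- a model big enough
        # to keep 50 TFLOPS busy overflows 6 GB well before activations.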

    • (Score: 0) by Anonymous Coward on Tuesday January 08 2019, @11:22AM

      by Anonymous Coward on Tuesday January 08 2019, @11:22AM (#783622)

      While the AMD alternative doesn't have the 50 TFLOPS of tensor processing or the raytracing extensions, it hits all the same numbers as the RTX 2060 for about the same price. It also has the funky signed-firmware DRM like the Nvidia cards, but at least it has some open source graphics support, unlike the GM2xx+ Nvidia hardware, despite Nouveau's best efforts.

  • (Score: 2) by takyon on Tuesday January 08 2019, @02:27PM (3 children)

    by takyon (881) <{takyon} {at} {soylentnews.org}> on Tuesday January 08 2019, @02:27PM (#783652) Journal

    https://en.wikipedia.org/wiki/List_of_Nvidia_graphics_processing_units#GeForce_700_series [wikipedia.org]

    6 GB was pretty high circa 2015 (GTX 980 Ti). If you aren't running 4K resolution, does it matter? Does it matter even with 4K resolution?

    (I would assume that the ray-tracing capabilities on the RTX 2060 are not sufficient for 4K60.)

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 1, Interesting) by Anonymous Coward on Tuesday January 08 2019, @03:02PM (2 children)

      by Anonymous Coward on Tuesday January 08 2019, @03:02PM (#783677)

      The RTX 2080 can barely manage the fake raytracing at 30fps in 1080p.

      • (Score: 2) by bob_super on Tuesday January 08 2019, @06:00PM

        by bob_super (1357) on Tuesday January 08 2019, @06:00PM (#783773)

        Follows the rule of First Gen Of Cool New Feature.

      • (Score: 2) by bzipitidoo on Tuesday January 08 2019, @06:58PM

        by bzipitidoo (4388) on Tuesday January 08 2019, @06:58PM (#783787) Journal

        Guess that "RTX" stands for "Ray Tracing eXtreme", and that the name is more of a wishful goal and not an accomplishment?

        But then, ray tracing takes an awful lot of computation.
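
        The arithmetic bears that out. A sketch of primary-ray counts alone (the 5-10 Gigarays/s Turing figures in the comments below are NVIDIA's marketing numbers, treated as an assumption):

            # How fast a ray budget evaporates: primary rays only, one
            # sample per pixel, no bounces and no shadow rays.
            width, height, fps = 3840, 2160, 60
            rays_per_sec = width * height * fps
            print(f"{rays_per_sec / 1e9:.2f} Gigarays/s for 4K60 primary rays")

            # ~0.5 Gigarays/s before a single bounce; with NVIDIA quoting
            # roughly 5-10 Gigarays/s for Turing, a few rays per pixel
            # saturates the card -- hence hybrid raytracing at 1080p30.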

  • (Score: 2) by shortscreen on Tuesday January 08 2019, @08:42PM

    by shortscreen (2252) on Tuesday January 08 2019, @08:42PM (#783839) Journal

    So I take it that whatever you are running is choking on the 512MB of video memory? It's hard to see how those cards could be comparable otherwise. The GT720 has only half the memory bus and half the shaders/TMUs/ROPs compared to a GT640, and the latter is roughly tied with a 4770 in old benchmarks. The 4770 is a DX10.1 card, so it doesn't run the latest stuff.