
SoylentNews is people

posted by martyb on Wednesday July 03 2019, @11:07PM   Printer-friendly
from the super-hyper-ultra-turbo dept.

The GeForce RTX 2070 Super & RTX 2060 Super Review: Smaller Numbers, Bigger Performance

NVIDIA is launching a mid-generation kicker for their mid-to-high-end video card lineup in the form of their GeForce RTX 20 series Super cards. Based on the same family of Turing GPUs as the original GeForce RTX 20 series cards, these new Super cards – all suffixed Super, appropriately enough – come with new configurations and new clockspeeds. They are, essentially, NVIDIA's 2019 card family for the $399+ video card market.

When they are released on July 9th, the GeForce RTX 20 series Super cards are going to be sharing store shelves with the rest of the GeForce RTX 20 series cards. Some cards like the RTX 2080 and RTX 2070 are set to go away, while other cards like the RTX 2080 Ti and RTX 2060 will remain on the market as-is. In practice, it's probably best to think of the new cards as NVIDIA executing either a price cut or a spec bump – depending on whether you see the glass as half-empty or half-full – all without meaningfully changing their price tiers.

In terms of performance, the RTX 2060 Super and RTX 2070 Super cards aren't going to bring anything new to the table. In fact, if we're being blunt, the RTX 2070 Super is basically a slightly slower RTX 2080, and the RTX 2060 Super may as well be the RTX 2070. What has changed instead is the price at which these performance levels are available, and ultimately the performance-per-dollar ratios in parts of NVIDIA's lineup. The performance of NVIDIA's former $699 and $499 cards will now be available for $499 and $399, respectively. This leaves the vanilla RTX 2060 to hold the line at $349, and the upcoming RTX 2080 Super to fill the $699 spot. Which means if you're in the $400-$700 market for video cards, your options are about to get noticeably faster.

Also at Tom's Hardware, The Verge, and Ars Technica.

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Nvidia Announces RTX 2060 GPU
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing

Related: AMD and Intel at Computex 2019: First Ryzen 3000-Series CPUs and Navi GPU Announced
AMD Details Three Navi GPUs and First Mainstream 16-Core CPU


Original Submission

Related Stories

Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance 23 comments

NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October

NVIDIA's Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch their next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA's GeForce RTX 20 series, which is succeeding the current Pascal-powered GeForce GTX 10 series. Based on NVIDIA's new Turing GPU architecture and built on TSMC's 12nm "FFN" process, NVIDIA has lofty goals, looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA's most important GPU architecture since 2006's Tesla GPU architecture (G80 GPU), and from a features standpoint it's clear that he's not overstating matters.

[...] So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA's RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.

The big change here is that NVIDIA is including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, a dedicated ray tracing processor whose underpinnings we aren't fully informed on at this time. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the BVH being a very popular data structure for organizing objects for ray tracing.
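To make those two operations concrete, here is a minimal software sketch – purely illustrative, in no way NVIDIA's implementation – of the kind of work an RT core performs in fixed-function hardware: a ray/bounding-box "slab" test (the inner loop of BVH traversal) and a Möller–Trumbore ray-triangle intersection:

```python
# Illustrative sketch of the two ray tracing primitives an RT core
# accelerates, done here naively in software.

def ray_aabb_hit(origin, inv_dir, box_min, box_max):
    """Slab test: does the ray hit the axis-aligned bounding box?
    inv_dir holds the precomputed reciprocals of the ray direction."""
    t_near, t_far = 0.0, float("inf")
    for i in range(3):
        t1 = (box_min[i] - origin[i]) * inv_dir[i]
        t2 = (box_max[i] - origin[i]) * inv_dir[i]
        t_near = max(t_near, min(t1, t2))
        t_far = min(t_far, max(t1, t2))
    return t_near <= t_far

def ray_triangle_hit(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore intersection; returns hit distance t, or None."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def cross(a, b): return [a[1]*b[2] - a[2]*b[1],
                             a[2]*b[0] - a[0]*b[2],
                             a[0]*b[1] - a[1]*b[0]]
    e1, e2 = sub(v1, v0), sub(v2, v0)
    p = cross(direction, e2)
    det = dot(e1, p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    inv = 1.0 / det
    s = sub(origin, v0)
    u = dot(s, p) * inv                  # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return None
    q = cross(s, e1)
    v = dot(direction, q) * inv          # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return None
    t = dot(e2, q) * inv                 # distance along the ray
    return t if t > eps else None
```

A BVH walk simply applies the box test at each tree node, descending only into children whose boxes the ray hits, and runs the triangle test at the leaves; RT cores do both checks in dedicated silicon instead of shader code.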

NVIDIA states that the fastest GeForce RTX part can cast 10 billion (giga) rays per second, which compared to unaccelerated Pascal is a 25x improvement in ray tracing performance.

Nvidia has confirmed that the machine learning capabilities (tensor cores) of the GPU will be used to smooth out problems with ray-tracing. Real-time AI denoising (4m17s) will be used to reduce the number of samples per pixel needed to achieve photorealism.

Previously: Microsoft Announces Directx 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations

Related: Real-time Ray-tracing at GDC 2014


Original Submission

Nvidia Announces RTX 2060 GPU 10 comments

The NVIDIA GeForce RTX 2060 6GB Founders Edition Review: Not Quite Mainstream

In the closing months of 2018, NVIDIA finally released the long-awaited successor to the Pascal-based GeForce GTX 10 series: the GeForce RTX 20 series of video cards. Built on their new Turing architecture, these GPUs were the biggest update to NVIDIA's GPU architecture in at least half a decade, leaving almost no part of NVIDIA's architecture untouched.

So far we've looked at the GeForce RTX 2080 Ti, RTX 2080, and RTX 2070 – and along with the highlights of Turing, we've seen that the GeForce RTX 20 series is designed on a hardware and software level to enable realtime raytracing and other new specialized features for games. While the RTX 2070 is traditionally the value-oriented enthusiast offering, NVIDIA's higher price tags this time around meant that even this part was $500 and not especially value-oriented. Instead, it would seem that the role of the enthusiast value offering is going to fall to the next member in line of the GeForce RTX 20 family. And that part is coming next week.

Launching next Tuesday, January 15th is the 4th member of the GeForce RTX family: the GeForce RTX 2060 (6GB). Based on a cut-down version of the same TU106 GPU that's in the RTX 2070, this new part shaves off some of the RTX 2070's performance, but also a good deal of its price tag in the process.

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance


Original Submission

AMD and Nvidia's Latest GPUs Are Expensive and Unappealing 25 comments

AMD, Nvidia Have Launched the Least-Appealing GPU Upgrades in History

Yesterday, AMD launched the Radeon VII, the first 7nm GPU. The card is intended to compete with Nvidia's RTX family of Turing-class GPUs, and it does, broadly matching the RTX 2080. It also matches the RTX 2080 on price, at $700. Because this card began life as a professional GPU intended for scientific computing and AI/ML workloads, it's unlikely that we'll see lower-end variants. That section of AMD's product stack will be filled by 7nm Navi, which arrives later this year.

Navi will be AMD's first new 7nm GPU architecture and will offer a chance to hit 'reset' on what has been, to date, the least compelling suite of GPU launches AMD and Nvidia have ever collectively kicked out the door. Nvidia has relentlessly moved its stack pricing higher while holding performance per dollar mostly constant. With the RTX 2060 and GTX 1070 Ti fairly evenly matched across a wide suite of games, the question of whether the RTX 2060 is better priced largely hinges on whether you stick to formal launch pricing for both cards or check historical data for actual price shifts.

Such comparisons are increasingly incidental, given that Pascal GPU prices are rising and cards are getting harder to find, but they aren't meaningless for people who either bought a Pascal GPU already or are willing to consider a used card. If you're an Nvidia fan already sitting on top of a high-end Pascal card, Turing doesn't offer you a great deal of performance improvement.

AMD has not covered itself in glory, either. The Radeon VII is, at least, unreservedly faster than the Vega 64. There's no equivalent last-generation GPU in AMD's stack to match it. But it also duplicates the Vega 64's overall power and noise profile, limiting the overall appeal, and it matches the RTX 2080's bad price. A 1.75x increase in price for a 1.32x increase in 4K performance isn't a great ratio even by the standards of ultra-high-end GPUs, where performance typically comes with a price penalty.
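As a quick sanity check on that ratio, the arithmetic is easy to reproduce (figures from the quoted article; the ~$400 Vega 64 price is the street-price assumption implied by the 1.75x multiplier, not its original $499 MSRP):

```python
# Back-of-the-envelope value check for the Radeon VII vs. Vega 64 claim.
vega64_price, radeon7_price = 400, 700   # assumed street price vs. launch price
perf_ratio = 1.32                        # quoted 4K performance gain

price_ratio = radeon7_price / vega64_price        # 1.75x
perf_per_dollar = perf_ratio / price_ratio        # value relative to Vega 64

print(f"price ratio: {price_ratio:.2f}x")
print(f"performance per dollar vs. Vega 64: {perf_per_dollar:.2f}x")
```

In other words, the Radeon VII delivers roughly three-quarters of the Vega 64's performance per dollar, which is the regression the article is complaining about.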

Rumors and leaks have suggested that Nvidia will release a Turing-based GPU called the GTX 1660 Ti (which has also been referred to as "1160"), with a lower price but missing the dedicated ray-tracing cores of the RTX 2000-series. AMD is expected to release "7nm" Navi GPUs sometime during 2019.

Radeon VII launch coverage also at AnandTech, Tom's Hardware.

Related: AMD Returns to the Datacenter, Set to Launch "7nm" Radeon Instinct GPUs for Machine Learning in 2018
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
AMD Announces "7nm" Vega GPUs for the Enterprise Market
Nvidia Announces RTX 2060 GPU
AMD Announces Radeon VII GPU, Teases Third-Generation Ryzen CPU
AMD Responds to Radeon VII Short Supply Rumors


Original Submission

AMD and Intel at Computex 2019: First Ryzen 3000-Series CPUs and Navi GPU Announced 20 comments

At Computex 2019 in Taipei, AMD CEO Lisa Su gave a keynote presentation announcing the first "7nm" Navi GPU and Ryzen 3000-series CPUs. All of the products will support PCI Express 4.0.

Contrary to recent reports, AMD says that the Navi microarchitecture is not based on Graphics Core Next (GCN), but rather a new "RDNA" architecture ('R' for Radeon), although the extent of the difference is not clear. There is also no conflict with Nvidia's naming scheme; the 5000-series naming is a reference to the company's 50th anniversary.

AMD claims that Navi GPUs will have 25% better performance/clock and 50% better performance/Watt vs. Vega GPUs. AMD Radeon RX 5700 is the first "7nm" Navi GPU to be announced. It was compared with Nvidia's GeForce RTX 2070, with the RX 5700 outperforming the RTX 2070 by 10% in the AMD-favorable game Strange Brigade. Pricing and other launch details will be revealed on June 10.

AMD also announced the first five Ryzen 3000-series CPUs, all of which will be released on July 7:

CPU              Cores / Threads   Frequency        TDP     Price
Ryzen 9 3900X    12 / 24           3.8 - 4.6 GHz    105 W   $499
Ryzen 7 3800X    8 / 16            3.9 - 4.5 GHz    105 W   $399
Ryzen 7 3700X    8 / 16            3.6 - 4.4 GHz    65 W    $329
Ryzen 5 3600X    6 / 12            3.8 - 4.4 GHz    95 W    $249
Ryzen 5 3600     6 / 12            3.6 - 4.2 GHz    65 W    $199

The Ryzen 9 3900X is the only CPU in the list using two core chiplets, each with 6 of 8 cores enabled. AMD has held back on releasing a 16-core monster for now. AMD compared the Ryzen 9 3900X to the $1,189 Intel Core i9-9920X, the Ryzen 7 3800X to the $499 Intel Core i9-9900K, and the Ryzen 7 3700X to the Intel Core i7-9700K, with the AMD chips outperforming the Intel chips in certain single and multi-threaded benchmarks (wait for the reviews before drawing any definitive conclusions). All five of the processors will come with a bundled cooler, as seen in this list.

AMD Details Three Navi GPUs and First Mainstream 16-Core CPU 30 comments

At AMD's keynote at the 2019 Electronic Entertainment Expo (E3), AMD CEO Lisa Su announced three new "7nm" Navi GPUs and a new CPU.

The AMD Radeon RX 5700 XT will have 2560 stream processors (40 compute units) capable of 9.75 TFLOPs of FP32 performance, with 8 GB of 14 Gbps GDDR6 VRAM. The price is $449. The AMD RX 5700 cuts that down to 2304 SPs (36 CUs), 7.9 TFLOPs, at $379. There is a higher clocked "50th anniversary" version of the 5700 XT that offers up to 10.14 teraflops for $499. A teraflop on one of these new cards supposedly means better graphics performance than older Polaris-based GPUs:

Looking at these clockspeed values then, in terms of raw throughput the new card is expected to get between 9 TFLOPs and 9.75 TFLOPs of FP32 compute/shading throughput. This is a decent jump over the Polaris cards, but on the surface it doesn't look like a huge, generational jump, and this is where AMD's RDNA architecture comes in. AMD has made numerous optimizations to improve their GPU utilization – that is, how well they put those FLOPs to good use – so a teraflop on a 5700 card means more than it does on preceding AMD cards. Overall, AMD says that they're getting around 25% more work done per clock on the whole in gaming workloads. So raw specs can be deceiving.
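The raw-throughput figures quoted above follow from a standard rule of thumb: FP32 TFLOPs = stream processors × 2 operations per clock (one fused multiply-add) × clock speed. A small sketch checking the quoted Navi numbers (the clock speeds below are back-solved from AMD's TFLOPs figures, not official spec-sheet values):

```python
# Rule-of-thumb FP32 throughput estimate for a GPU:
#   TFLOPs = stream processors * 2 ops/clock (FMA) * clock in GHz / 1000

def fp32_tflops(stream_processors, clock_ghz, ops_per_clock=2):
    """Peak FP32 throughput in TFLOPs."""
    return stream_processors * ops_per_clock * clock_ghz / 1000.0

# RX 5700 XT: 2560 SPs at ~1.905 GHz boost -> ~9.75 TFLOPs
print(round(fp32_tflops(2560, 1.905), 2))
# RX 5700: 2304 SPs at ~1.725 GHz -> ~7.95 TFLOPs (article rounds to 7.9)
print(round(fp32_tflops(2304, 1.725), 2))
```

The article's point is that this peak number overstates the gap between architectures: RDNA's ~25% better utilization per clock means a Navi teraflop does more gaming work than a Polaris or Vega teraflop.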

The GPUs do not include real-time raytracing or variable rate pixel shading support. These may appear on a future generation of GPUs. Instead, AMD talked about support for DisplayPort 1.4 with Display Stream Compression, a contrast-enhancing post-processing filter, AMD Radeon Image Sharpening, and a Radeon Anti-lag feature to reduce input lag.

Towards the end of the presentation, AMD revealed the 16-core Ryzen 9 3950X, the company's fully-fledged Ryzen CPU with two 8-core "7nm" Zen 2 chiplets. Compared to the 12-core Ryzen 9 3900X CPU, the 3950X has a slightly higher boost clock and L2 cache, with the same 105 Watt TDP, for $749. This is the full lineup so far:

CPU              Cores / Threads   Frequency        TDP     Price
Ryzen 9 3950X    16 / 32           3.5 - 4.7 GHz    105 W   $749
Ryzen 9 3900X    12 / 24           3.8 - 4.6 GHz    105 W   $499
Ryzen 7 3800X    8 / 16            3.9 - 4.5 GHz    105 W   $399
Ryzen 7 3700X    8 / 16            3.6 - 4.4 GHz    65 W    $329
Ryzen 5 3600X    6 / 12            3.8 - 4.4 GHz    95 W    $249
Ryzen 5 3600     6 / 12            3.6 - 4.2 GHz    65 W    $199

Previously: AMD and Intel at Computex 2019: First Ryzen 3000-Series CPUs and Navi GPU Announced


Original Submission

AMD Cuts Prices of RX 5700 Navi GPUs Two Days Before Release 5 comments

AMD cuts Radeon 5700 GPU prices just two days before their release

When AMD announced its next-gen Navi-based Radeon RX 5700 and 5700 XT graphics cards last month, the news was just slightly underwhelming because the prices didn't necessarily make them the obvious alternative to Nvidia's rival chips.

But just two days before their July 7th launch date, AMD has taken the drastic step of dropping the prices on these new GPUs.

The Radeon 5700 XT, previously listed at $450, will now cost $400, and the Radeon 5700, previously $380, will be priced at $350. (There's also a $500 Radeon RX 5700 XT 50th Anniversary Edition that'll retail for $450.)

That's just super.

Also at Tom's Hardware.

Previously: AMD and Intel at Computex 2019: First Ryzen 3000-Series CPUs and Navi GPU Announced
AMD Details Three Navi GPUs and First Mainstream 16-Core CPU
Nvidia Refreshes RTX 2000-Series GPUs With "Super" Branding


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 0) by Anonymous Coward on Wednesday July 03 2019, @11:11PM (4 children)

    by Anonymous Coward on Wednesday July 03 2019, @11:11PM (#862948)

    To me these are all still 2x more expensive than I want to pay to upgrade from a GTX 1060. AMD's are too. Why are GPUs so pricey?

    • (Score: 1) by fustakrakich on Wednesday July 03 2019, @11:18PM

      by fustakrakich (6150) on Wednesday July 03 2019, @11:18PM (#862949) Journal

      Why are GPUs so pricey?

      Because people will pay it

      --
      La politica e i criminali sono la stessa cosa..
    • (Score: 3, Insightful) by takyon on Wednesday July 03 2019, @11:19PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday July 03 2019, @11:19PM (#862951) Journal

      Expensive memory, AMD not providing effective competition, probably some other reasons.

      If Intel jumps in with discrete graphics cards [pcgamesn.com] next year, it will become a three-way and that might help.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 1, Interesting) by Anonymous Coward on Wednesday July 03 2019, @11:27PM

      by Anonymous Coward on Wednesday July 03 2019, @11:27PM (#862958)

      Used to be outsized demand allowed price increases instead of additional production runs. That was during the great cryptocurrency mining rush, before the IRS caught on.

    • (Score: 2) by ledow on Thursday July 04 2019, @11:57AM

      by ledow (5567) on Thursday July 04 2019, @11:57AM (#863106) Homepage

      How often do you buy a GPU lately?

      That's why.

      You're holding off because they are expensive, they are expensive because people are holding off. What you have is good enough, so you're letting it run.

      Pretty much what used to be a "specialised" item that required many upgrades to stay on the cutting edge is now just commodity, and for not-very-much you can have a card that runs all the games you want and can still run games in a few years from now. How often are you then going to upgrade it?

      So they make the commodity stuff cheap, throw it in every PC, and anything more is specialist and they'll charge you for it.

      I bet nobody ever thought they'd make money from GPUs for smartphones and laptops, but almost everyone has one now. That's where their profit lies.

      Your specialist cards are like the GPU usage of CAD/CAM people of old... so niche that they can make you pay through the nose for the extra despite everyone having a commodity 3D card in their computer.

      I'm still running on a nVidia M chip in an 8-year-old gaming laptop. When it dies, I'll upgrade to one that comes with a bigger number and an M. And I probably play more games than anyone else I know.

  • (Score: 0) by Anonymous Coward on Thursday July 04 2019, @06:35AM

    by Anonymous Coward on Thursday July 04 2019, @06:35AM (#863049)

    ...FORTH machine.

  • (Score: 2) by acid andy on Thursday July 04 2019, @11:53AM

    by acid andy (1683) on Thursday July 04 2019, @11:53AM (#863105) Homepage Journal

    Wake me up when "super" goes out of fashion again. It's super irritating.

    --
    Master of the science of the art of the science of art.
  • (Score: 0) by Anonymous Coward on Friday July 05 2019, @03:55AM (1 child)

    by Anonymous Coward on Friday July 05 2019, @03:55AM (#863350)

    In the last year I have had a couple of occasions to know what a good video card would be for upgrading or a new machine.
    I gave up.
    Sure those comparison sites help but the naming and numbering is just whacked.
    Minimum card specs? Comparison across ranges? MX? Super?
    Do I really need card x? Will a cheaper card do? 8gb ram? Shouldn't this be 16 or 32? Wtf.
