
posted by martyb on Friday September 21 2018, @11:58PM
from the Wait-until-version-2.0 dept.

Nvidia's Turing pricing strategy has been 'poorly received,' says Instinet

Instinet analyst Romit Shah commented Friday on Nvidia Corp.'s new Turing GPU, now that reviews of the product are out. "The 2080 TI is indisputably the best consumer GPU technology available, but at a prohibitive cost for many gamers," he wrote. "Ray tracing and DLSS [deep learning super sampling], while apparently compelling features, are today just 'call options' for when game developers create content that this technology can support."

Nvidia shares fall after Morgan Stanley says the performance of its new gaming card is disappointing

"As review embargos broke for the new gaming products, performance improvements in older games is not the leap we had initially hoped for," Morgan Stanley analyst Joseph Moore said in a note to clients on Thursday. "Performance boost on older games that do not incorporate advanced features is somewhat below our initial expectations, and review recommendations are mixed given higher price points." Nvidia shares closed down 2.1 percent Thursday.

Moore noted that Nvidia's new RTX 2080 card performed only 3 percent better than the previous generation's 1080Ti card at 4K resolutions.

And a counterpoint:

Morgan Stanley's Failure To Comprehend Bleeding Edge GPU Tech Results In NVIDIA Downgrade

Morgan Stanley appears to have completely missed the point with NVIDIA's bleeding edge RTX series, treating it with such tone-deaf rigor and such an apparent lack of understanding of the underlying tech that it is almost impressive. They reached a "disappointed" conclusion based on the conventional performance of an unconventional product, which wouldn't in itself be so bad if it weren't for the fact that two-thirds of the RTX's value proposition, which includes conventional-performance-enhancing features, isn't even available yet. But then again, these are the same peeps that gave AMD a price target of $11 before drastically revising their estimates – so maybe it's not that bad an analysis.

Analysts at Morgan Stanley appear to have access to a crystal ball, because while most of us are waiting for NVIDIA to get its act together and give us our promised titles with RTX and DLSS support (so we can judge whether said features are worth the money being asked) they have simply consulted this coveted spherical mirror and formed conclusions already, deeming it unworthy of the market. It's only a pity this mirror didn't help them with forecasting AMD.

See also: Nvidia's Botched RTX 20 Series Launch: 'You Can't Benchmark Goals'

Previously: Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
10 Reasons Linux Gamers Might Want To Pass On The NVIDIA RTX 20 Series


Original Submission

Related Stories

Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations 8 comments

NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More

The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren't fully informed on at this time, but serve as dedicated ray tracing processors. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.
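
To make concrete what an RT core offloads, here is a minimal CPU-side sketch of the classic Möller-Trumbore ray-triangle intersection test in Python. It illustrates the kind of check the hardware accelerates; it is not NVIDIA's implementation, and the scene setup is invented for the example.

    import numpy as np

    def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
        """Moller-Trumbore ray-triangle intersection.

        Returns the distance t along the ray to the hit point, or None on
        a miss. An RT core performs tests like this (plus BVH traversal,
        which avoids testing every triangle) in dedicated hardware.
        """
        edge1, edge2 = v1 - v0, v2 - v0
        pvec = np.cross(direction, edge2)
        det = np.dot(edge1, pvec)
        if abs(det) < eps:                   # ray parallel to triangle plane
            return None
        inv_det = 1.0 / det
        tvec = origin - v0
        u = np.dot(tvec, pvec) * inv_det
        if u < 0.0 or u > 1.0:               # hit point outside the triangle
            return None
        qvec = np.cross(tvec, edge1)
        v = np.dot(direction, qvec) * inv_det
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(edge2, qvec) * inv_det
        return t if t > eps else None

    # A ray cast straight down at a triangle lying in the z=0 plane:
    hit = ray_triangle_intersect(
        np.array([0.2, 0.2, 1.0]), np.array([0.0, 0.0, -1.0]),
        np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0]),
        np.array([0.0, 1.0, 0.0]))
    print(hit)  # 1.0 -- the hit point is one unit along the ray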

NVIDIA is stating that the fastest Turing parts can cast 10 Billion (Giga) rays per second, which compared to the unaccelerated Pascal is a 25x improvement in ray tracing performance.

The Turing architecture also carries over the tensor cores from Volta, and indeed these have even been enhanced over Volta. The tensor cores are an important aspect of multiple NVIDIA initiatives. Along with speeding up ray tracing itself, NVIDIA's other tool in their bag of tricks is to reduce the amount of rays required in a scene by using AI denoising to clean up an image, which is something the tensor cores excel at. Of course that's not the only feature tensor cores are for – NVIDIA's entire AI/neural networking empire is all but built on them – so while not a primary focus for the SIGGRAPH crowd, this also confirms that NVIDIA's most powerful neural networking hardware will be coming to a wider range of GPUs.
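
To see why denoising buys back ray budget: a path-traced pixel estimated from only a few samples is noisy, and a filter can recover much of the quality of a far more expensive render. Here is a toy numpy illustration, with a simple averaging filter standing in for the learned denoiser; the filter choice and the synthetic "scene" are assumptions made for the example.

    import numpy as np
    from scipy.ndimage import uniform_filter

    rng = np.random.default_rng(0)
    truth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))   # smooth 64x64 "scene"

    def render(spp):
        # Estimate each pixel by averaging `spp` noisy light samples.
        samples = truth[None] + rng.normal(0.0, 0.5, size=(spp, 64, 64))
        return samples.mean(axis=0)

    def rmse(img):
        return np.sqrt(((img - truth) ** 2).mean())

    noisy = render(4)                         # cheap render: 4 samples/pixel
    denoised = uniform_filter(noisy, size=5)  # stand-in for the AI denoiser
    expensive = render(64)                    # 16x the ray budget

    print(rmse(noisy), rmse(denoised), rmse(expensive))
    # On this smooth scene, the filtered 4 spp image lands near the 64 spp one.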

New to Turing is support for a wider range of precisions, and as such the potential for significant speedups in workloads that don't require high precisions. On top of Volta's FP16 precision mode, Turing's tensor cores also support INT8 and even INT4 precisions. These are 2x and 4x faster than FP16 respectively, and while NVIDIA's presentation doesn't dive too deep here, I would imagine they're doing something similar to the data packing they use for low-precision operations on the CUDA cores. And without going too deep ourselves here, while reducing the precision of a neural network has diminishing returns – by INT4 we're down to a total of just 16(!) values – there are certain models that really can get away with this very low level of precision. And as a result the lower precision modes, while not always useful, will undoubtedly make some users quite happy at the throughput, especially in inferencing tasks.
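
As a rough illustration of what those reduced precisions mean, here is a minimal symmetric-quantization sketch in Python. Real inference stacks calibrate scales per layer (the mapping below is our simplification), but it shows why INT4's 16 representable values are only tolerable for certain models.

    import numpy as np

    def quantize(weights, bits):
        # Symmetric linear quantization onto a signed `bits`-wide integer grid.
        qmax = 2 ** (bits - 1) - 1                   # 127 for INT8, 7 for INT4
        scale = np.abs(weights).max() / qmax
        q = np.clip(np.round(weights / scale), -qmax - 1, qmax).astype(np.int8)
        return q, scale

    w = np.random.default_rng(1).normal(0.0, 0.1, 1000)  # toy layer weights
    for bits in (8, 4):
        q, scale = quantize(w, bits)
        err = np.abs(w - q * scale).mean()
        print(f"INT{bits}: {len(np.unique(q))} distinct values, "
              f"mean abs error {err:.5f}")
    # INT8 is nearly lossless here; INT4 has only 16 levels to work with.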

Also of note is the introduction of GDDR6 into some GPUs. The NVIDIA Quadro RTX 8000 will come with 24 GB of GDDR6 memory and a total memory bandwidth of 672 GB/s, which compares favorably to previous-generation GPUs featuring High Bandwidth Memory. Turing supports the recently announced VirtualLink. The video encoder block has been updated to include support for 8K H.265/HEVC encoding.

Ray-tracing combined with various (4m27s video) shortcuts (4m16s video) could be used for good-looking results in real time.

Also at Engadget, Notebookcheck, and The Verge.

See also: What is Ray Tracing and Why Do You Want it in Your GPU?


Original Submission

Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance 23 comments

NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October

NVIDIA's Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch their next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA's GeForce RTX 20 series, which is succeeding the current Pascal-powered GeForce GTX 10 series. Based on NVIDIA's new Turing GPU architecture and built on TSMC's 12nm "FFN" process, NVIDIA has lofty goals, looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA's most important GPU architecture since 2006's Tesla GPU architecture (G80 GPU), and from a features standpoint it's clear that he's not overstating matters.

[...] So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA's RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.

The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren't fully informed on at this time, but serve as dedicated ray tracing processors. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.

NVIDIA is stating that the fastest GeForce RTX part can cast 10 Billion (Giga) rays per second, which compared to the unaccelerated Pascal is a 25x improvement in ray tracing performance.

Nvidia has confirmed that the machine learning capabilities (tensor cores) of the GPU will be used to smooth out problems with ray-tracing. Real-time AI denoising (4m17s) will be used to reduce the number of samples per pixel needed to achieve photorealism.

Previously: Microsoft Announces DirectX 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations

Related: Real-time Ray-tracing at GDC 2014


Original Submission

10 Reasons Linux Gamers Might Want To Pass On The NVIDIA RTX 20 Series 32 comments

Submitted via IRC for takyon

Continuing on from the NVIDIA GeForce RTX 2080 expectations on Linux shared earlier this week, here's a list of ten reasons why Linux gamers might want to pass on these soon-to-launch graphics cards from NVIDIA.

The list covers various reasons you may want to think twice about these graphics cards -- or at least hold off on pre-ordering any of them right away. Not all of them are specific to the Turing GPUs per se; some concern NVIDIA's Linux infrastructure problems or general Linux gaming challenges. But here's the list for those curious. And, yes, a list is coming out soon with reasons Linux users may want to consider the RTX 20 series -- though mostly it makes sense for developers / content creators.

Here is the list:

  1. Lack of open-source driver support
  2. It will be a while before seeing RTX/ray-tracing Linux games
  3. Turing appears to be a fairly incremental upgrade outside of RTX
  4. The GeForce GTX 1080 series already runs very well
  5. Poor Wayland support
  6. The Linux driver support for Turing is unclear
  7. These graphics cards are incredibly expensive
  8. SLI is next to worthless on Linux
  9. VR Linux support is still in rough shape
  10. Pascal prices will almost surely drop

That's the quick list outside of my detailed pre-launch Linux analysis. A similar list of the pros for the RTX 20 series on Linux will be coming out shortly. It will certainly be interesting to see after 20 September how the NVIDIA GeForce RTX 20 series works on Linux.

Source: https://www.phoronix.com/scan.php?page=news_item&px=10-Reasons-Pass-RTX-20-Linux

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance


Original Submission

Nvidia Announces Titan RTX 14 comments

Nvidia has announced its $2,500 Turing-based Titan RTX GPU. It is said to have a single precision performance of 16.3 teraflops and "tensor performance" of 130 teraflops. Double precision performance has been neutered down to 0.51 teraflops, down from 6.9 teraflops for last year's Volta-based Titan V.

The card includes 24 gigabytes of GDDR6 VRAM clocked at 14 Gbps, for a total memory bandwidth of 672 GB/s.
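
As a sanity check on those figures, 672 GB/s is exactly what a 384-bit bus delivers at 14 Gbps per pin; the bus width is our inference from the arithmetic rather than something stated above.

    # GDDR6 bandwidth: per-pin data rate x bus width, divided by 8 bits/byte
    data_rate_gbps = 14         # Gbps per pin, as quoted
    bus_width_bits = 384        # inferred bus width
    print(data_rate_gbps * bus_width_bits / 8)  # 672.0 GB/s, matching the quote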

Drilling a bit deeper, there are really three legs to Titan RTX that set it apart from NVIDIA's other cards, particularly the GeForce RTX 2080 Ti. Raw performance is certainly one of those; we're looking at about 15% better performance in shading, texturing, and compute, and around a 9% bump in memory bandwidth and pixel throughput.

However, arguably the lynchpin to NVIDIA's true desired market of data scientists and other compute users is the tensor cores. Present on all of NVIDIA's Turing cards and the heart and soul of NVIDIA's success in the AI/neural networking field, NVIDIA gave the GeForce cards a singular limitation that is nonetheless very important to the professional market. In their highest-precision FP16 mode, Turing is capable of accumulating at FP32 for greater precision; however, on the GeForce cards this operation is limited to half-speed throughput. This limitation has been removed for the Titan RTX, and as a result it's capable of full-speed FP32 accumulation throughput on its tensor cores.

Given that NVIDIA's tensor cores have nearly a dozen modes, this may seem like an odd distinction to make between the GeForce and the Titan. However for data scientists it's quite important; FP32 accumulate is frequently necessary for neural network training – FP16 accumulate doesn't have enough precision – especially in the big money fields that will shell out for cards like the Titan and the Tesla. So this small change is a big part of the value proposition to data scientists, as NVIDIA does not offer a cheaper card with the chart-topping 130 TFLOPS of tensor performance that Titan RTX can hit.
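
The practical effect of accumulation precision is easy to demonstrate: summing many small FP16 products stalls once the running total grows large enough that each addend falls below half a unit in the last place. A minimal numpy sketch of the numerical effect follows; the tensor-core data path itself is not exposed this way, so this only mimics the arithmetic.

    import numpy as np

    # 10,000 small FP16 products, as a matrix multiply might generate.
    products = np.full(10000, 0.0001, dtype=np.float16)

    # FP16 accumulator: once the sum passes ~0.25, each 0.0001 addend is
    # below half an FP16 unit-in-the-last-place and simply rounds away.
    acc16 = np.float16(0.0)
    for p in products:
        acc16 = np.float16(acc16 + p)

    acc32 = products.astype(np.float32).sum()  # FP32 accumulation keeps the digits

    print(acc16, acc32)  # the FP16 sum stalls far short of the true total of 1.0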

Previously: More Extreme in Every Way: The New Titan Is Here – NVIDIA TITAN Xp
Nvidia Announces Titan V
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Nvidia's Turing GPU Pricing and Performance "Poorly Received"


Original Submission

Facebook Researchers Show Off Machine Learning-Based Upsampling Technique 10 comments

Neural SuperSampling Is a Hardware Agnostic DLSS Alternative by Facebook

A new paper published by Facebook researchers just ahead of SIGGRAPH 2020 introduces neural supersampling, a machine learning-based upsampling approach not too dissimilar from NVIDIA's Deep Learning Super Sampling. However, neural supersampling does not require any proprietary hardware or software to run and its results are quite impressive as you can see in the example images, with researchers comparing them to the quality we've come to expect from DLSS.
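
At its core, a learned upsampler of this kind is a small convolutional network ending in a sub-pixel shuffle. The sketch below is an ESPCN-style toy in PyTorch, chosen here purely for illustration; the paper's actual network also consumes motion vectors and depth from the renderer, which this omits.

    import torch
    import torch.nn as nn

    class TinyUpsampler(nn.Module):
        """A minimal learned 4x upsampler (ESPCN-style sub-pixel convolution)."""
        def __init__(self, scale=4):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(3, 64, 5, padding=2), nn.ReLU(),
                nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 3 * scale ** 2, 3, padding=1),
            )
            self.shuffle = nn.PixelShuffle(scale)  # channels -> spatial pixels

        def forward(self, low_res):
            return self.shuffle(self.body(low_res))

    net = TinyUpsampler()
    frame = torch.rand(1, 3, 270, 480)  # a low-resolution rendered frame
    print(net(frame).shape)             # torch.Size([1, 3, 1080, 1920])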

Video examples on Facebook's blog post.

The researchers use some extremely low-fi upscales to make their point, but you could also imagine scaling from a resolution like 1080p straight to 8K. Upscaling could be combined with eye tracking and foveated rendering to reduce rendering times even further.

Also at UploadVR and VentureBeat.

Journal Reference:
Lei Xiao, Salah Nouri, Matt Chapman, Alexander Fix, Douglas Lanman, Anton Kaplanyan. Neural Supersampling for Real-time Rendering. Facebook Research. https://research.fb.com/publications/neural-supersampling-for-real-time-rendering/

Related: With Google's RAISR, Images Can be Up to 75% Smaller Without Losing Detail
Nvidia's Turing GPU Pricing and Performance "Poorly Received"
HD Emulation Mod Makes "Mode 7" SNES Games Look Like New
Neural Networks Upscale Film From 1896 to 4K, Make It Look Like It Was Shot on a Modern Smartphone
Apple Goes on an Acquisition Spree, Turns Attention to NextVR


Original Submission

  • (Score: 0) by Anonymous Coward on Saturday September 22 2018, @12:09AM (#738413) (5 children)

    People are getting fed up with the shady games of Intel and Nvidia. It also seems both companies may have been prioritizing "shady games" over producing good products. So there is a great opportunity for competition here, which seems to be only AMD at the moment. Who else is there?

    • (Score: 2) by takyon (881) on Saturday September 22 2018, @12:21AM (#738416) (1 child)

      Smartphone games, ARM + Mali/Adreno/PowerVR/etc. GPUs.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Saturday September 22 2018, @01:01AM (#738431)

        I get the rest of the post, but what do you mean by "GPUs" at the end?

    • (Score: 2) by bob_super (1357) on Saturday September 22 2018, @12:35AM (#738422) (2 children)

      When your previous card almost pushed 4K60, and your incremental card is going to push 4K60, and your competitor's cards will soon push 4K60, you gotta think of new ways to compel people to drop the cash that cryptocurrency people aren't anymore, while crossing fingers that VR will miraculously become popular soon.

      • (Score: 0) by Anonymous Coward on Saturday September 22 2018, @01:05AM (#738434) (1 child)

        I think programming the GPU just needs to become more accessible. There are so many times I would want to send a loop to thousands of (not even that much) slower processors instead of a few fast ones.
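
        For example, with a library like numba a plain Python loop body can compile into a CUDA kernel where each iteration becomes one GPU thread; a minimal sketch (numba being an assumed example, not something named above):

            import numpy as np
            from numba import cuda

            @cuda.jit
            def saxpy(out, x, y, a):
                # Each GPU thread handles one iteration of the loop.
                i = cuda.grid(1)
                if i < out.size:
                    out[i] = a * x[i] + y[i]

            n = 1_000_000
            x = np.random.rand(n).astype(np.float32)
            y = np.random.rand(n).astype(np.float32)
            out = np.zeros_like(x)

            threads = 256
            blocks = (n + threads - 1) // threads  # enough blocks to cover n
            saxpy[blocks, threads](out, x, y, np.float32(2.0))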

        • (Score: 0) by Anonymous Coward on Sunday September 23 2018, @04:05AM (#738758)

          I think this is exactly what they don't want you to do. If they can charge you twice the price for something that's only 10 percent faster why should they let you simply purchase two of something that's ten percent slower for the same price and get nearly twice the speed?

  • (Score: 2) by Snotnose (1623) on Saturday September 22 2018, @12:33AM (#738419) (5 children)

    It's gonna take a couple years for games to catch up to these cards (if you aren't a gamer you're one of the 1 percenters who have real work to do). I predict that in 2 years A) The card will cost half as much, if not less; and B) there will be a game out that uses the full capabilities of the card.

    I remember, what, 20 years ago? I built a top-of-the-line gaming box and put Half-Life on it. There was a scene where I was crossing a bridge; looking down, I was dumbfounded by how realistic it was.

    --
    When the dust settled America realized it was saved by a porn star.
    • (Score: 3, Informative) by loonycyborg (6905) on Saturday September 22 2018, @03:06PM (#738551) (4 children)

      GPU power doesn't directly translate to realism. Nowadays realism is limited by the throughput of artists and level designers, not GPU power. And chasing realism is detrimental to gameplay, since overly complex scenes can constrain other aspects of the game's design and simply create more workload for designers, resulting in less game for you. This new model is about as performant as its predecessors, and its only selling point is realtime raytracing. It's a really nifty idea, but there's no guarantee that games will ever make any use of it, or that it will be usable in practice. Nvidia has clearly hit the physical limits of what can usefully be done, yet has to push out at least something to create the illusion of breakneck progress like before.

      • (Score: 2) by Snotnose (1623) on Saturday September 22 2018, @03:33PM (#738561) (3 children)

        I think GTA V balanced gameplay and realism perfectly. Stealing a helicopter and flying over the city is magical.

        Skyrim did a pretty good job too.

        --
        When the dust settled America realized it was saved by a porn star.
        • (Score: 2) by loonycyborg (6905) on Saturday September 22 2018, @04:13PM (#738576) (2 children)

          Of those two I only played Skyrim, and to me its gameplay systems seemed like mostly an unfun chore. I always treat RPG rulesets as a fun puzzle to learn and solve, and Skyrim, like pretty much all other recent high-detail games, simply fails to provide a fun puzzle for me. Sure, you can fly a dragon, but the tactical uses of this as implemented in the game aren't much fun due to the simplistic ruleset. So from my point of view, the balance of gameplay vs. realism is exactly what games have failed to maintain: gameplay keeps deteriorating year after year, as opposed to improving as I had hoped. It seems gameplay is as important to game designers as plot is to porn makers.

          • (Score: 2) by cubancigar11 (330) on Saturday September 22 2018, @04:38PM (#738583) (1 child)

            Skyrim is a poor example to use when talking about gameplay, because Skyrim and Fallout 4 are about immersion. That aspect has been honed to such perfection in Skyrim (and not reproduced in Fallout 4) that Bethesda is still pushing Skyrim to every new console, and the modding scene is still as intense as it used to be.

            This won't fit well into PC gaming, but look at Uncharted 4 for the right balance between realism and gameplay.

            • (Score: 2) by Freeman (732) on Monday September 24 2018, @04:23PM (#739231)

              Really, if they had pushed the whole building-your-own-base-in-various-locations aspect of Fallout 4, I would have been on board from the start. Instead, I was thrown off by somewhat pessimistic views in a preview of the game. Really, the game I got, when I eventually got it, was as good as or better than Fallout 3 or New Vegas. The main issue is that the Fallout 4 story line just wasn't as interesting to me as the previous games'. I also got it on VR, but it definitely suffers from the whole moving-while-you're-not-moving motion sickness. It's great fun to be able to spin around and see the whole of the wastelands, to be able to duck by ducking, to do all the cool VR stuff in the Fallout universe. But the motion sickness really hurts it, especially if you're prone to it. There are games and types of games that just work wonderfully with VR; the immersive FPS genre is kind of tough.

              Now, if you removed the "issue" of motion sickness by having my character actually walk when I walk, that would be something. Though that would pose its own physical challenges, like needing specialized hardware that lets you walk in place, turn, spin, crouch, and jump without actually going anywhere. Like a super treadmill that is powered by you.

              There may be ways to get around some of those motion sickness issues via software, but I think it will always be somewhat disorienting to be moving while not moving.

              --
              Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 3, Insightful) by MichaelDavidCrawford (2339) on Saturday September 22 2018, @12:34AM (#738421)

    ... how profitably will it mine ASIC-Resistant Crypto?

    --
    Yes I Have No Bananas. [gofundme.com]
  • (Score: 0) by Anonymous Coward on Saturday September 22 2018, @12:48AM (#738426)

    if you want cross platform gaming.

  • (Score: 2) by RamiK (1813) on Saturday September 22 2018, @12:53AM (#738427) (5 children)

    https://store.steampowered.com/hwsurvey/videocard/ [steampowered.com]

    Though admittedly, it's priced high enough for people to say "I'd get one for Christmas or when games come out".

    --
    compiling...
    • (Score: 2) by takyon (881) on Saturday September 22 2018, @01:33AM (#738438) (2 children)

      Well it's been out for just days, and how do we know their thing properly detects these GPUs?

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Saturday September 22 2018, @02:11AM (#738445)

        Well, "we" don't know. But I think it's safe to say an analyst that does this for a living could compare sales figures from some outlets to previous cards and reach a decent raw estimate.

        As for their "thing": https://docs.microsoft.com/en-us/windows/desktop/api/winuser/nf-winuser-enumdisplaydevicesa [microsoft.com]

        As for the Steam link, Valve exposes APIs to their db so you can look up popular cards from yesteryear and see how their launches turned out in the first few months.

      • (Score: 2) by RamiK (1813) on Saturday September 22 2018, @05:19PM (#738598)

        Yeah, not sure why it didn't occur to me. But the other post has a point about an analyst being able to look it up by checking with store contacts and such... I suppose we'll know for sure in a couple of months. Either way, AMD should have something out this year, so we'll come back to it by then :D

        --
        compiling...
    • (Score: 1, Informative) by Anonymous Coward on Saturday September 22 2018, @03:06PM (#738550) (1 child)

      The most recent dataset exposed by the Steam HW Survey is August, the RTX 2080 came out this week, it's not going to be shown yet no matter how many people own one.

      • (Score: 2) by RamiK (1813) on Saturday September 22 2018, @05:15PM (#738596)

        Oh right. I guess I'll look it up again next time this discussion comes around.

        --
        compiling...