
SoylentNews is people

posted by martyb on Tuesday September 01 2020, @10:41PM   Printer-friendly
from the amp-up-the-volume-(of-pixels) dept.

Nvidia has announced its latest generation of gaming-oriented GPUs, based on the "Ampere" microarchitecture on a customized Samsung "8nm" process node.

The GeForce RTX 3080 ($700) has 10 GB of GDDR6X VRAM and will be released on September 17. TDP is up significantly, at 320 Watts compared to 215 Watts for the RTX 2080. The GeForce RTX 3070 ($500) has 8 GB of GDDR6 and a TDP of 220 Watts. The GeForce RTX 3090 ($1500) is the top card so far with a whopping 24 GB of GDDR6X VRAM. The GPU is physically much larger than the other two models and it has a 350 Watt TDP.

Nvidia's performance benchmarks should be treated with caution, since the company is often using ray-tracing and/or DLSS upscaling in its comparisons. But the RTX 3070 will outperform the RTX 2080 Ti at less than half the launch price, as it has 35% more CUDA cores at higher clock speeds.

Nvidia also announced some new features such as Nvidia Reflex (4m53s video), Broadcast, Omniverse Machinima, and RTX IO. Nvidia Broadcast includes AI-derived tools intended for live streamers. RTX Voice can filter out background noises, greenscreen effects can be applied without the need for a real greenscreen, and an autoframing feature can keep the streamer centered in frame while they are moving. Nvidia RTX IO appears to be Nvidia's response to the next-generation consoles' use of fast SSDs and dedicated data decompression.

NVIDIA GeForce RTX 30 Series | Official Launch Event (39m29s video)

Previously: Micron Accidentally Confirms GDDR6X Memory, and Nvidia's RTX 3090 GPU


Original Submission

Related Stories

Micron Accidentally Confirms GDDR6X Memory, and Nvidia's RTX 3090 GPU 21 comments

Micron Spills on GDDR6X: PAM4 Signaling For Higher Rates, Coming to NVIDIA's RTX 3090

It would seem that Micron this morning has accidentally spilled the beans on the future of graphics card memory technologies – and outed one of NVIDIA's next-generation RTX video cards in the process. In a technical brief that was posted to their website, dubbed "The Demand for Ultra-Bandwidth Solutions", Micron detailed their portfolio of high-bandwidth memory technologies and the market needs for them. Included in this brief was information on the previously-unannounced GDDR6X memory technology, as well as some information on what seems to be the first card to use it, NVIDIA's GeForce RTX 3090.

[...] At any rate, as this is a market overview rather than a technical deep dive, the details on GDDR6X are slim. The document links to another, still-unpublished document, "Doubling I/O Performance with PAM4: Micron Innovates GDDR6X to Accelerate Graphics Memory", that would presumably contain further details on GDDR6X. Nonetheless, even this high-level overview gives us a basic idea of what Micron has in store for later this year.

The key innovation for GDDR6X appears to be that Micron is moving from using POD135 coding on the memory bus – a binary (two state) coding format – to four state coding in the form of Pulse-Amplitude Modulation 4 (PAM4). In short, Micron would be doubling the number of signal states in the GDDR6X memory bus, allowing it to transmit twice as much data per clock.

[...] According to Micron's brief, they're expecting to get GDDR6X to 21Gbps/pin, at least to start with. This is a far cry from doubling GDDR6's existing 16Gbps/pin rate, but it's also a data rate that would be grounded in the limitations of PAM4 and DRAM. PAM4 itself is easier to achieve than binary coding at the same total data rate, but having to accurately determine four states instead of two is conversely a harder task. So a smaller jump isn't too surprising.
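The doubling-per-symbol logic and the per-pin rates above work out as follows; a minimal Python sketch (the 384-bit bus width is an illustrative assumption for a high-end card, not a figure from Micron's brief):

```python
# Why PAM4 doubles per-pin throughput vs. binary (POD-style) signaling:
# a binary bus distinguishes 2 voltage levels per symbol -> 1 bit/symbol,
# while PAM4 distinguishes 4 levels -> 2 bits/symbol. At the same symbol
# rate, PAM4 therefore carries twice the data.
import math

def bits_per_symbol(levels: int) -> int:
    return int(math.log2(levels))

assert bits_per_symbol(2) == 1   # GDDR6-style binary coding
assert bits_per_symbol(4) == 2   # GDDR6X PAM4

# Aggregate bus bandwidth from the per-pin rates in the brief
# (16 Gbps/pin for GDDR6, 21 Gbps/pin to start for GDDR6X):
def bus_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Total bandwidth in GB/s: per-pin rate times bus width, over 8 bits/byte."""
    return per_pin_gbps * bus_width_bits / 8

print(bus_bandwidth_gbs(16, 384))  # GDDR6 on a 384-bit bus:  768.0 GB/s
print(bus_bandwidth_gbs(21, 384))  # GDDR6X on a 384-bit bus: 1008.0 GB/s
```

Note the jump from 16 to 21 Gbps/pin is a ~31% gain, not the full 2x the signaling change would allow, consistent with the PAM4 noise-margin caveat above.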

The leaked Ampere-based RTX 3090 seems to be Nvidia's attempt to compete with AMD's upcoming RDNA2 ("Big Navi") GPUs without lowering the price of the usual high-end "Titan" GPU (Titan RTX launched at $2,499). Here are some of the latest leaks for the RTX 30 "Ampere" GPU lineup.

Also at Guru3D and Wccftech.

Previously: GDDR5X Standard Finalized by JEDEC
SK Hynix to Begin Shipping GDDR6 Memory in Early 2018
Samsung Announces Mass Production of GDDR6 SDRAM

Related: PCIe 6.0 Announced for 2021: Doubles Bandwidth Yet Again (uses PAM4)


Original Submission

Nvidia Responds to Reports of Crashing Ampere GPUs Made by Partners 7 comments

NVIDIA's Official Response On GeForce RTX 30 Series Issues: SP-CAP vs MLCC Groupings Vary Depending on Design & Not Indicative of Quality

NVIDIA's GeForce RTX 30 series has been caught up in a major controversy ever since the lineup launched. A botched launch for both RTX 3080 & RTX 3090 graphics cards was soon followed by user reports of several cards crashing during gaming. It was soon highlighted that the cause of these issues could be related to the GPU's boosting algorithm, but more recent reports suggest that the issue could have more to do with the hardware design that AIB[*] partners have implemented on their custom products. NVIDIA has now come forward with an official statement regarding the matter.

[...] In the statement, NVIDIA specifically states that their partner cards are based on custom designs and that they work very closely with them during the whole design/test process. NVIDIA does give AIBs reference specs to follow and gives them certain guidelines for designing customized boards. That does include the limits defined for voltages, power, and clock speeds. NVIDIA goes on to state that there's no specific SP-CAP / MLCC grouping that can be defined for all cards since AIB designs vary compared to each other. But NVIDIA also states that the number of SP-CAP / MLCC groupings is also not indicative of quality.

[...] In our previous report, it was pointed out that the GeForce RTX 30 series generally crashes when it hits a certain boost clock above 2.0 GHz. Some users also found that cards with full SP-CAP layouts (Conductive Polymer Tantalum Solid Capacitors) were generating more issues compared to boards that use either a combination of SP-CAPs / MLCCs (Multilayer Ceramic Chip Capacitors) or an all-MLCC design.

[*] AIB: "Add In Board". Cf: Terminology: All graphics cards are AIB.

See also: EVGA Says Nvidia RTX 3080 Cap Issues Caused Crashes, Confirms Stability Issues

Previously: Nvidia Announces RTX 30-Series "Ampere" GPUs


Original Submission

AMD Announces RX 6000 Series "RDNA 2" AKA "Big Navi" GPUs 30 comments

AMD announced its first RDNA 2 (Radeon RX 6000 series) gaming GPUs during a live stream (24m42s) on October 28.

AMD originally planned for RDNA 2 to have 50% more performance per Watt than GPUs using the RDNA 1 microarchitecture. Now, AMD is claiming 54% more performance per Watt for the RX 6800 XT and RX 6800, and 65% more performance per Watt for the RX 6900 XT. Part of the efficiency gain is due to the use of "Infinity Cache", similar to the L3 cache found in Ryzen CPUs. This allowed AMD to use a 256-bit memory bus with 2.17x the effective memory bandwidth of a 384-bit bus, while using slightly less power.
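A back-of-envelope check on the bandwidth claim above, assuming 16 Gbps/pin GDDR6 on both bus widths (an assumption for illustration; the announcement did not specify per-pin rates):

```python
# Rough arithmetic behind AMD's claim that a 256-bit bus plus "Infinity
# Cache" delivers 2.17x the effective bandwidth of a 384-bit bus.
# The 16 Gbps/pin figure is an assumed GDDR6 speed, not from the announcement.
def raw_bandwidth_gbs(per_pin_gbps: float, bus_width_bits: int) -> float:
    """Raw VRAM bandwidth in GB/s (per-pin rate times width, over 8 bits/byte)."""
    return per_pin_gbps * bus_width_bits / 8

bw_256 = raw_bandwidth_gbs(16, 256)   # 512.0 GB/s raw on the narrower bus
bw_384 = raw_bandwidth_gbs(16, 384)   # 768.0 GB/s raw on a 384-bit bus

# Cache hits are served from on-die SRAM instead of VRAM, which is where
# the amplification comes from; taking AMD's 2.17x multiplier at face value:
effective_256 = 2.17 * bw_384         # ~1667 GB/s claimed effective bandwidth
print(bw_256, bw_384, effective_256)
```

In other words, per AMD's framing the narrower bus plus a large last-level cache behaves like a bus more than three times its raw width, while drawing slightly less power.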

The RX 6900 XT ($1000) has performance comparable to Nvidia's RTX 3090, with a total board power (TBP) of 300 Watts. The RX 6800 XT ($650) is comparable to Nvidia's RTX 3080, also with a 300 Watt TBP. The RX 6800 ($580) is around 18% faster than Nvidia's RTX 2080 Ti, with a 250 Watt TBP. All three of the GPUs have 16 GB of GDDR6 VRAM and 128 MB of "Infinity Cache".

The 6800 XT and 6800 will be available starting on November 18, while the 6900 XT will be available on December 8.

Also at Tom's Hardware, Phoronix, Ars Technica, and Guru3D.

Previously: Nvidia Announces RTX 30-Series "Ampere" GPUs
AMD Announces Zen 3 CPUs


Original Submission

Amazon's "New World" Game Beta Reportedly Bricked Nvidia RTX 3090 GPUs 37 comments

Amazon's New World game is bricking GeForce RTX 3090 graphics cards:

It is probably not a good idea to play New World right now. The closed Beta and Alpha builds of this game have reportedly been a reason for the bricking of GeForce RTX 3090 graphics cards, multiple users on the official game's forum have reported.

The issue appears to mainly affect GeForce RTX 3090 graphics cards, which are reportedly overheating and seeing power spikes. The game has an uncapped framerate in the main menus, which is usually associated with buzzing capacitors. Most users, however, have reported that EVGA RTX 3090 cards specifically are the most affected brand. A number of the RTX 3090 cards have been bricked in the process.

[...] Update: Amazon Games released the following statement:

Hundreds of thousands of people played in the New World Closed Beta yesterday, with millions of total hours played. We've received a few reports of players using high-performance graphics cards experiencing hardware failure when playing New World.

New World makes standard DirectX calls as provided by the Windows API. We have seen no indication of widespread issues with 3090s, either in the beta or during our many months of alpha testing.

The New World Closed Beta is safe to play. In order to further reassure players, we will implement a patch today that caps frames per second on our menu screen. We're grateful for the support New World is receiving from players around the world, and will keep listening to their feedback throughout Beta and beyond.
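The menu frame cap Amazon describes can be sketched as a simple frame limiter: sleep out the remainder of each frame's time budget so an idle menu cannot render at an unbounded rate. This is illustrative only, not New World's actual code:

```python
# Minimal frame-cap sketch: if a (cheap) menu frame finishes early, idle
# for the rest of the frame budget instead of immediately re-rendering.
# Without the cap, a trivial menu frame lets the render loop spin as fast
# as the GPU allows, which is the behavior the patch addresses.
import time

def run_menu_loop(render_frame, cap_fps: float = 60, frames: int = 3) -> None:
    frame_budget = 1.0 / cap_fps
    for _ in range(frames):
        start = time.perf_counter()
        render_frame()                          # draw the menu frame
        elapsed = time.perf_counter() - start
        if elapsed < frame_budget:              # finished early:
            time.sleep(frame_budget - elapsed)  # sleep instead of spinning

run_menu_loop(lambda: None, cap_fps=120)  # hypothetical no-op "renderer"
```

Real engines typically do this with vsync or a driver-level limiter rather than a sleep loop, but the effect is the same: bounded GPU load on static screens.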

New World (video game).

See also: Issues with EVGA RTX 3090 FTW3 Ultra
r/newworldgame - Did the New World Beta brick your gpu?

Related: Micron Accidentally Confirms GDDR6X Memory, and Nvidia's RTX 3090 GPU
Nvidia Announces RTX 30-Series "Ampere" GPUs
Linux Foundation and Partners Announce "Open 3D Foundation"


Original Submission #1 · Original Submission #2 · Original Submission #3

This discussion has been archived. No new comments can be posted.
  • (Score: 3, Redundant) by bart9h on Wednesday September 02 2020, @12:00AM (4 children)

    by bart9h (767) on Wednesday September 02 2020, @12:00AM (#1045162)

    I won't even consider buying from you if you don't give a damn about the quality of your drivers for Linux.

    I don't mind being a little behind in performance; I still prefer to run something with good open source drivers.

    • (Score: 2, Interesting) by fakefuck39 on Wednesday September 02 2020, @12:47AM (3 children)

      by fakefuck39 (6620) on Wednesday September 02 2020, @12:47AM (#1045189)

      You're not the target market. The majority of these are sold for HCI farms, used for things like VDI. Some are put in for server farms hooked up to a SAN, used for financial analysis in addition to VDI. They don't know or care that your Linux box exists. And they run on Linux just fine. Just not for your use case of games or wayland. For compute.

      • (Score: 0) by Anonymous Coward on Wednesday September 02 2020, @05:07PM

        by Anonymous Coward on Wednesday September 02 2020, @05:07PM (#1045481)

        Yes, we know. Disgusting fucking whores use Nvidia.

      • (Score: 0) by Anonymous Coward on Wednesday September 02 2020, @05:14PM (1 child)

        by Anonymous Coward on Wednesday September 02 2020, @05:14PM (#1045488)

        Eventually Linux will take over those markets, and we will all remember who Nvidia really is.

        • (Score: 1) by fakefuck39 on Friday September 04 2020, @12:37AM

          by fakefuck39 (6620) on Friday September 04 2020, @12:37AM (#1046133)

          me: "they run on Linux just fine"
          you: "Eventually Linux will take over those markets"

          the stupid is very strong here. very strong.

  • (Score: 0, Redundant) by Anonymous Coward on Wednesday September 02 2020, @12:03AM (1 child)

    by Anonymous Coward on Wednesday September 02 2020, @12:03AM (#1045164)

    Huge, expensive, high performance, doubles as a space heater. Looks like they're worried about AMD again.

    Like the above poster, I won't consider nVidia as long as there are no decent free drivers.

    • (Score: 2) by takyon on Wednesday September 02 2020, @12:06AM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday September 02 2020, @12:06AM (#1045166) Journal

      AMD will at least release something that can compete with the RTX 3080. They will start leaking things this month, make an announcement by October, and launch by November.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 1, Interesting) by Anonymous Coward on Wednesday September 02 2020, @12:05AM (2 children)

    by Anonymous Coward on Wednesday September 02 2020, @12:05AM (#1045165)
    • (Score: 5, Informative) by takyon on Wednesday September 02 2020, @12:14AM (1 child)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday September 02 2020, @12:14AM (#1045174) Journal

      Turing/RTX 1st-gen was an expensive ray-tracing beta test that used marketing to mask a meager performance uplift and obvious price uplift.

      Ampere will actually be good, for people who are willing to buy Nvidia (they are disproportionately hated here). It will also make ray-tracing more viable. AMD will also introduce hardware-accelerated ray-tracing this year in both its RDNA2 "Big Navi" GPUs and the next-gen consoles.

      *USED* RTX 2080 Ti cards were selling for $1,000-$1,300 in recent weeks, and are now selling below $500 [wccftech.com] instantly following this announcement. There should be some funny stories/gloating about that.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by Bot on Wednesday September 02 2020, @07:07AM

        by Bot (3902) on Wednesday September 02 2020, @07:07AM (#1045286) Journal

        >Nvidia (they are disproportionately hated here)
        yep I concur we should hate 'em more.

        --
        Account abandoned.
  • (Score: 4, Funny) by Anonymous Coward on Wednesday September 02 2020, @01:37AM (3 children)

    by Anonymous Coward on Wednesday September 02 2020, @01:37AM (#1045212)

    Because that is what you will be feeding them.

    • (Score: 2) by takyon on Wednesday September 02 2020, @02:10AM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday September 02 2020, @02:10AM (#1045219) Journal

      Note that custom and highly overclocked versions of the RTX 3090 could use more like 400 Watts.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 3, Interesting) by c0lo on Wednesday September 02 2020, @08:35AM (1 child)

      by c0lo (156) Subscriber Badge on Wednesday September 02 2020, @08:35AM (#1045304) Journal

      At 3.3V, 350W means about 110 of them Amperes.
      At 12V, only 30.

      As a heuristic, a MIG welder needs about 1A for each 0.025mm metal thickness.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 2) by EvilSS on Wednesday September 02 2020, @09:11PM

        by EvilSS (1456) Subscriber Badge on Wednesday September 02 2020, @09:11PM (#1045599)
        At 3.5mV, that means 100,000 Amperes. That's gonna need one thick trickle dick of a wire!