Nvidia Unveils GTX 1080 and 1070 "Pascal" GPUs

posted by martyb on Saturday May 07 2016, @04:27PM
from the real-cards-for-virtual-worlds dept.

Nvidia revealed key details about its upcoming "Pascal" consumer GPUs at a May 6th event. These GPUs are built using a 16nm FinFET process from TSMC rather than the 28nm processes that were used for several previous generations of both Nvidia and AMD GPUs.

According to Nvidia, the GeForce GTX 1080 will outperform the GTX 980, GTX 980 Ti, and Titan X cards. The company claims that the GTX 1080 can reach 9 teraflops of single-precision performance, while the GTX 1070 will reach 6.5 teraflops. It also claims that a single GTX 1080 will be faster than two GTX 980s in SLI.
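
As a rough sanity check, peak single-precision throughput is simply CUDA cores × 2 FLOPs per clock × clock rate. The sketch below reproduces the quoted figures; the core counts and boost clocks used here (2560 cores at ~1733 MHz, 1920 cores at ~1683 MHz) are assumptions for illustration, not numbers from Nvidia's announcement.

    # Back-of-the-envelope peak FP32 throughput: CUDA cores * 2 FLOPs per clock * clock rate.
    # The core counts and boost clocks below are assumptions, not announced specifications.
    def peak_fp32_tflops(cuda_cores, boost_mhz):
        return cuda_cores * 2 * boost_mhz * 1e6 / 1e12

    print(round(peak_fp32_tflops(2560, 1733), 1))  # GTX 1080: ~8.9 TFLOPS, rounded up to "9"
    print(round(peak_fp32_tflops(1920, 1683), 1))  # GTX 1070: ~6.5 TFLOPS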

Both the GTX 1080 and 1070 will feature 8 GB of VRAM. Unfortunately, neither card contains High Bandwidth Memory 2.0 like the Tesla P100 does. Instead, the GTX 1080 has GDDR5X memory while the 1070 is sticking with GDDR5.

The GTX 1080 starts at $599 and is available on May 27th. The GTX 1070 starts at $379 on June 10th. Your move, AMD.


Original Submission

Related Stories

GDDR5X Standard Finalized by JEDEC 17 comments

JEDEC has finalized the GDDR5X SGRAM specification:

The new technology is designed to improve bandwidth available to high-performance graphics processing units without fundamentally changing the memory architecture of graphics cards or memory technology itself, similar to other generations of GDDR, although these new specifications are arguably pushing the physical limits of the technology and hardware in its current form. The GDDR5X SGRAM (synchronous graphics random access memory) standard is based on the GDDR5 technology introduced in 2007 and first used in 2008. The GDDR5X standard brings three key improvements to the well-established GDDR5: it increases data-rates by up to a factor of two, it improves energy efficiency of high-end memory, and it defines new capacities of memory chips to enable denser memory configurations of add-in graphics boards or other devices. What is very important for developers of chips and makers of graphics cards is that the GDDR5X should not require drastic changes to designs of graphics cards, and the general feature-set of GDDR5 remains unchanged (and hence why it is not being called GDDR6).

[...] The key improvement of the GDDR5X standard compared to the predecessor is its all-new 16n prefetch architecture, which enables up to 512 bit (64 Bytes) per array read or write access. By contrast, the GDDR5 technology features 8n prefetch architecture and can read or write up to 256 bit (32 Bytes) of data per cycle. Doubled prefetch and increased data transfer rates are expected to double effective memory bandwidth of GDDR5X sub-systems. However, actual performance of graphics cards will depend not just on DRAM architecture and frequencies, but also on memory controllers and applications. Therefore, we will need to test actual hardware to find out actual real-world benefits of the new memory.

What purpose does GDDR5X serve if superior 1st and 2nd generation High Bandwidth Memory (HBM) are around? GDDR5X memory will be cheaper than HBM, and its use is more of an evolutionary than a revolutionary change from existing GDDR5-based hardware.
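
To put the data-rate claims in perspective, peak memory bandwidth works out to (bus width ÷ 8) × per-pin data rate. A minimal sketch follows; the 256-bit bus width and the per-pin rates are assumed values for illustration, not figures from the JEDEC spec.

    # Peak memory bandwidth in GB/s = (bus width in bits / 8) * per-pin data rate in Gbps.
    # The 256-bit bus width and the data rates here are assumptions for illustration only.
    def peak_bandwidth_gbs(bus_width_bits, data_rate_gbps):
        return bus_width_bits / 8 * data_rate_gbps

    print(peak_bandwidth_gbs(256, 8))   # GDDR5  at 8 Gbps  -> 256.0 GB/s
    print(peak_bandwidth_gbs(256, 10))  # GDDR5X at 10 Gbps -> 320.0 GB/s
    print(peak_bandwidth_gbs(256, 16))  # if data rates eventually double, as the spec allows -> 512.0 GB/s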


Original Submission

Nvidia Announces Tesla P100, the First Pascal GPU 17 comments

During a keynote at GTC 2016, Nvidia announced the Tesla P100, a 16nm FinFET Pascal graphics processing unit with 15.3 billion transistors intended for high performance and cloud computing customers. The GPU includes 16 GB of High Bandwidth Memory 2.0 with 720 GB/s of memory bandwidth and a unified memory architecture. It also uses the proprietary NVLink, an interconnect with 160 GB/s of bandwidth, rather than the slower PCI-Express.

Nvidia claims the Tesla P100 will reach 5.3 teraflops of FP64 (double precision) performance, along with 10.6 teraflops of FP32 and 21.2 teraflops of FP16. 3584 of a maximum possible 3840 stream processors are enabled on this version of the GP100 die.

At the keynote, Nvidia also announced a 170 teraflops (FP16) "deep learning supercomputer" or "datacenter in a box" called DGX-1. It contains eight Tesla P100s and will cost $129,000. The first units will be going to research institutions such as the Massachusetts General Hospital.
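
The three per-card numbers are consistent with a single FP32 figure plus GP100's 1:2 FP64 and 2:1 FP16 ratios, and the DGX-1 figure is simply eight of those FP16 rates summed. A quick sketch, assuming a ~1480 MHz boost clock (an assumption, not an announced spec):

    # Tesla P100 peak rates from its 3584 enabled FP32 cores; the ~1480 MHz boost clock is assumed.
    cores, boost_hz = 3584, 1480e6
    fp32 = cores * 2 * boost_hz / 1e12  # ~10.6 TFLOPS
    fp64 = fp32 / 2                     # ~5.3 TFLOPS (GP100 runs FP64 at half the FP32 rate)
    fp16 = fp32 * 2                     # ~21.2 TFLOPS (GP100 runs FP16 at twice the FP32 rate)
    dgx1_fp16 = 8 * fp16                # ~170 TFLOPS across the DGX-1's eight P100s
    print(round(fp32, 1), round(fp64, 1), round(fp16, 1), round(dgx1_fp16))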


Original Submission

Intel and AMD News From Computex 2016 22 comments

A lot of CPU news is coming out of Computex 2016.

Intel has launched its new Broadwell-E "Extreme Edition" CPUs for "enthusiasts". The top-of-the-line model, the i7-6950X, now includes 10 cores instead of 8, but the price has increased massively to around $1,723. Compare this to a ~$999 launch price for the 8-core i7-5960X or 6-core i7-4960X flagships from previous generations.

Intel has also launched some new Skylake-based Xeons with "Iris Pro" graphics.

AMD revealed more details about the Radeon RX 480, a 14nm "Polaris" GPU that will be priced at $199 and released on June 29th. AMD intends to compete for the budget/mainstream gamer segment at a price falling far short of the $379 launch price of a GTX 1070, while delivering around 70-75% of its performance. It also claims that the RX 480 will perform well enough to allow more gamers to use premium virtual reality headsets like the Oculus Rift or HTC Vive.
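
Taking those claims at face value, the value pitch is easy to quantify. Here is a rough sketch of performance per dollar using the quoted prices and AMD's ~70-75% performance figure; the normalization is ours, for illustration only.

    # Rough performance-per-dollar comparison using the prices and performance claims above.
    # GTX 1070 performance is normalized to 1.0; the 0.70-0.75 range is AMD's own claim.
    gtx1070_perf_per_dollar = 1.0 / 379
    for rx480_perf in (0.70, 0.75):
        ratio = (rx480_perf / 199) / gtx1070_perf_per_dollar
        print(f"RX 480 at {rx480_perf:.0%} of GTX 1070 performance: {ratio:.2f}x the perf per dollar")
    # Roughly 1.3x-1.4x the performance per dollar, if the claims hold up in reviews.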

While 14nm AMD "Zen" desktop chips should be coming later this year, laptop/2-in-1/tablet users will have to settle for the 7th generation Bristol Ridge and Stoney Ridge APUs. They are still 28nm "Excavator" based chips with "modules" instead of cores.


Original Submission

Nvidia Releases the GeForce GTX 1080 Ti: 11.3 TFLOPS of FP32 Performance 5 comments

NVIDIA is releasing the GeForce GTX 1080 Ti, a $699 GPU with performance and specifications similar to that of the NVIDIA Titan X:

Unveiled last week at GDC and launching [March 10th] is the GeForce GTX 1080 Ti. Based on NVIDIA's GP102 GPU – aka Bigger Pascal – the job of GTX 1080 Ti is to serve as a mid-cycle refresh of the GeForce 10 series. Like the GTX 980 Ti and GTX 780 Ti before it, that means taking advantage of improved manufacturing yields and reduced costs to push out a bigger, more powerful GPU to drive this year's flagship video card. And, for NVIDIA and their well-executed dominance of the high-end video card market, it's a chance to run up the score even more. With the GTX 1080 Ti, NVIDIA is aiming for what they're calling their greatest performance jump yet for a modern Ti product – around 35% on average. This would translate into a sizable upgrade for GeForce GTX 980 Ti owners and others for whom GTX 1080 wasn't the card they were looking for.

[...] Going by the numbers then, the GTX 1080 Ti offers just over 11.3 TFLOPS of FP32 performance. This puts the expected shader/texture performance of the card 28% ahead of the current GTX 1080, while the ROP throughput advantage stands at 26%, and the memory bandwidth advantage at a much greater 51.2%. Real-world performance will of course be influenced by a blend of these factors, with memory bandwidth being the real wildcard. Otherwise, relative to the NVIDIA Titan X, the two cards should end up quite close, trading blows now and then.

Speaking of the Titan, on an interesting side note, NVIDIA isn't going to be doing anything to hurt the compute performance of the GTX 1080 Ti to differentiate the card from the Titan, which has proven popular with GPU compute customers. Crucially, this means that the GTX 1080 Ti gets the same 4:1 INT8 performance ratio of the Titan, which is critical to the cards' high neural networking inference performance. As a result the GTX 1080 Ti actually has slightly greater compute performance (on paper) than the Titan. And NVIDIA has been surprisingly candid in admitting that unless compute customers need the last 1GB of VRAM offered by the Titan, they're likely going to buy the GTX 1080 Ti instead.

The card includes 11 GB of Micron's second-generation GDDR5X memory operating at 11 Gbps compared to 12 GB of GDDR5X at 10 Gbps for the Titan X.
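
The 51.2% bandwidth advantage quoted above falls straight out of the memory specs, assuming a 352-bit bus on the GTX 1080 Ti (11 memory channels for the 11 GB) against a 256-bit bus at 10 Gbps on the GTX 1080; the bus widths and the GTX 1080's data rate are assumptions here, while the 11 Gbps figure is from the article.

    # Peak memory bandwidth = (bus width in bits / 8) * per-pin data rate in Gbps.
    # The 352-bit and 256-bit bus widths and the GTX 1080's 10 Gbps rate are assumed.
    gtx_1080    = 256 / 8 * 10  # 320.0 GB/s
    gtx_1080_ti = 352 / 8 * 11  # 484.0 GB/s
    print(f"{gtx_1080_ti / gtx_1080 - 1:.1%} more bandwidth")  # ~51.2%, matching the quote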

Previously: GDDR5X Standard Finalized by JEDEC
Nvidia Announces Tesla P100, the First Pascal GPU
Nvidia Unveils GTX 1080 and 1070 "Pascal" GPUs


Original Submission

AMD's Vega 64: Matches the GTX 1080 but Not in Power Consumption 17 comments

AMD's new Vega 64 GPU offers comparable performance at a similar price to Nvidia's GTX 1080, which was released over a year ago. But it does so while consuming a lot more power under load (over 100 Watts more). Vega 56, however, runs faster than the GTX 1070 at a slightly lower price:

So how does AMD fare? The answer to that is ultimately going to hinge on your option on power efficiency. But before we get too far, let's start with the Radeon RX Vega 64, AMD's flagship card. Previously we've been told that it would trade blows with NVIDIA's GeForce GTX 1080, and indeed it does just that. At 3840x2160, the Vega 64 is on average neck-and-neck with the GeForce GTX 1080 in gaming performance, with the two cards routinely trading the lead, and AMD holding it more often. Of course the "anything but identical" principle applies here, as while the cards are equal on average, they can sometimes be quite far apart on individual games.

Unfortunately for AMD, their GTX 1080-like performance doesn't come cheap from a power perspective. The Vega 64 has a board power rating of 295W, and it lives up to that rating. Relative to the GeForce GTX 1080, we've seen power measurements at the wall anywhere between 110W and 150W higher than the GeForce GTX 1080, all for the same performance. Thankfully for AMD, buyers are focused on price and performance first and foremost (and in that order), so if all you're looking for is a fast AMD card at a reasonable price, the Vega 64 delivers where it needs to: it is a solid AMD counterpart to the GeForce GTX 1080. However if you care about the power consumption and the heat generated by your GPU, the Vega 64 is in a very rough spot.

On the other hand, the Radeon RX Vega 56 looks better for AMD, so it's easy to see why in recent days they have shifted their promotional efforts to the cheaper member of the RX Vega family. Though a step down from the RX Vega 64, the Vega 56 delivers around 90% of Vega 64's performance for 80% of the price. Furthermore, when compared head-to-head with the GeForce GTX 1070, its closest competition, the Vega 56 enjoys a small but none the less significant 8% performance advantage over its NVIDIA counterpart. Whereas the Vega 64 could only draw to a tie, the Vega 56 can win in its market segment.

[...] The one wildcard here with the RX Vega 56 is going to be where retail prices actually end up. AMD's $399 MSRP is rather aggressive, especially when GTX 1070 cards are retailing for closer to $449 due to cryptocurrency miner demand. If they can sustain that price, then Vega 56 is going to be real hot stuff, besting GTX 1070 in price and performance. Otherwise at GTX 1070-like prices it still has the performance advantage, but not the initiative on pricing. At any rate, this is a question we can't answer today; the Vega 56 won't be launching for another two weeks.

Both the Vega 64 and Vega 56 include 8 GB of HBM2 memory.

Also at Tom's Hardware.

Previously: AMD Unveils the Radeon Vega Frontier Edition
AMD Launches the Radeon Vega Frontier Edition
AMD Radeon RX Vega 64 and 56 Announced


Original Submission

  • (Score: 2) by mrcoolbp on Saturday May 07 2016, @05:25PM

    by mrcoolbp (68) <mrcoolbp@soylentnews.org> on Saturday May 07 2016, @05:25PM (#342940) Homepage

    I'm not super well-versed on GPU hardware, but I know these things are designed with VR in mind. They make advances in rendering 2 screens at once (one for each eye), for example. Currently, hitting 90fps (per eye) in VR means compromising on visual fidelity elsewhere; this push will help bring VR experiences (texture resolution, detail, etc.) more in line with what gamers expect these days.
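
    To put numbers on the 90fps target: the renderer gets roughly 11 ms per frame at 90 Hz, and two eye views have to fit inside that budget. A tiny illustration (the even per-eye split is a simplification):

        # Frame-time budgets at common refresh rates; in VR, two eye views share each frame's budget.
        for hz in (60, 90):
            budget_ms = 1000 / hz
            print(f"{hz} Hz -> {budget_ms:.1f} ms per frame, ~{budget_ms / 2:.1f} ms per eye if rendered back to back")
        # 90 Hz -> 11.1 ms per frame, which is why fidelity gets cut elsewhere to hold the frame rate.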

    Anyone planning on building a VR PC this year will certainly want to look into Pascal, though many are saying to wait for a "1080ti" version.

    --
    (Score:1^½, Radical)
    • (Score: 2) by takyon on Saturday May 07 2016, @05:45PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday May 07 2016, @05:45PM (#342943) Journal

      Hmm, can the GTX 1080 Ti even exceed the performance of the Tesla P100? That does 10.6 teraflops of FP32, and GTX 1080 is already at 9 teraflops. There doesn't seem to be much room for improvement unless I'm getting something terribly wrong.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Saturday May 07 2016, @06:25PM

        by Anonymous Coward on Saturday May 07 2016, @06:25PM (#342954)

        At this moment they are both 'paper specs'. Wait a couple of months until the reviews are out, then make a decision.

        I am looking forward to what both can do in the mobile ranges. It has been about 5 years since my last laptop purchase. So it should be a nice bump in perf.

        • (Score: 2) by takyon on Saturday May 07 2016, @06:30PM

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday May 07 2016, @06:30PM (#342955) Journal

          I am excited for AMD Zen in general and perhaps a graphics performance boost on mobile (I have not seen any talk of the Zen iGPU, just the IPC/cluster improvements). Skylake improved the iGPU, but the best GT4e parts are scarce/expensive.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2) by tibman on Sunday May 08 2016, @06:33AM

            by tibman (134) Subscriber Badge on Sunday May 08 2016, @06:33AM (#343120)

            I am also excited : ) Rejecting "upgrade impulses" and saving up money for a totally new build. Hopefully we'll see some initial Zen reviews in six months?

            --
            SN won't survive on lurkers alone. Write comments.
      • (Score: 5, Informative) by gman003 on Sunday May 08 2016, @12:24AM

        by gman003 (4155) on Sunday May 08 2016, @12:24AM (#343050)

        The GTX 1080 Ti will likely be a near-match for the Tesla P100 for rendering performance, lacking only in FP64 performance and maybe memory capacity (it may be an 8GB or 12GB card instead of the 16GB Tesla).

        (takyon, I'm going to cover a lot of basic stuff just to provide context for others, I expect you probably know most of this)

        Nvidia, like most high-performance processor companies, relies heavily on the practice of binning. They produce only a few lines of actual, distinct chips, and they use the semi-broken chips that inevitably come out of the fabrication process to fill out their lineup. If the chip is laid out with 32 cores, and one to four of them are broken, they'll blow some fuses built into it to completely disable four cores, and sell it as a cheaper 28-core processor.
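
        To make that concrete, here's a toy sketch of how dies with different numbers of broken cores might get sorted into SKUs (the thresholds and tiers are made up for illustration, not Nvidia's actual binning rules):

            # Toy binning model: fuse off broken cores, then sell the die as whichever SKU
            # its remaining working cores qualify for. The tiers here are hypothetical.
            def bin_die(total_cores, defective):
                working = total_cores - defective
                if working == total_cores:
                    return f"flagship SKU ({total_cores} cores enabled)"
                elif working >= 28:
                    return "cheaper SKU (fused down to 28 cores)"
                else:
                    return "salvage bin or scrapped"

            print(bin_die(32, 0))  # flagship SKU (32 cores enabled)
            print(bin_die(32, 3))  # cheaper SKU (fused down to 28 cores)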

        Generally, they produce only a handful of different chips. Their model numbers for the actual GPU chips consist of two letters and three numbers. First is always a G, then comes a letter indicating the architecture (F for Fermi, K for Kepler, M for Maxwell, and now P for Pascal). The first digit is a broad series number, sort of a sub-architecture, starting with 1 and to my knowledge never exceeding 2. Then come two digits for which model it is in their lineup - usually the 00 is the first and initially biggest one, with a 04 and a 06 being successively smaller, and most of the time followed by a super-big 10 model once the fabrication process can handle it.
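
        That naming scheme is regular enough to decode mechanically; a quick sketch (the table only covers the architecture letters mentioned above):

            import re

            # Decode Nvidia die names of the form G + architecture letter + series digit +
            # two-digit model number, per the description above (e.g. GP100, GP104, GM200).
            ARCHITECTURES = {"F": "Fermi", "K": "Kepler", "M": "Maxwell", "P": "Pascal"}

            def decode_die(name):
                match = re.fullmatch(r"G([FKMP])(\d)(\d{2})", name)
                if not match:
                    raise ValueError(f"unrecognized die name: {name}")
                arch, series, model = match.groups()
                return {"architecture": ARCHITECTURES[arch], "series": int(series), "model": int(model)}

            print(decode_die("GP100"))  # {'architecture': 'Pascal', 'series': 1, 'model': 0}
            print(decode_die("GP104"))  # {'architecture': 'Pascal', 'series': 1, 'model': 4}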

        The Tesla P100 uses the GP100 chip, with 56 of the 60 "cores" (streaming multiprocessors, or SMs, in Nvidia parlance - they aren't exactly the same beast as a CPU core but it's the same general idea) enabled. That's pretty much expected - this is the first 16nm GPU, and it's being made at the same size as the biggest of the old 28nm chips. It's simply impossible for them to be producing perfect 60-core chips right off the bat.

        The 1080 and 1070 both use the GP104 chip; we don't yet know how many cores it's being fabricated with. We do know that the 1080 has forty of them enabled. Nor do we know how many are enabled on the 1070, although I'd expect 30-36, based on previous generations.

        Also based on previous generations, I can make some guesses about two other cards in the expected lineup. The 1080 Ti will almost certainly use the same GP100 chip, although with several cores disabled, possibly the same 56-core as the Tesla P100, possibly fewer. There will likely also be a new Titan card, also based on the GP100 chip, which may be the fully-enabled sixty-core version. However, the 1080 Ti will absolutely be far, far cheaper than the Tesla - the P100 is in "if you have to ask, you can't afford it" pricing right now, while the 1080 Ti will be somewhere between $600 and $1000.

        Oh, and the Tesla P100 isn't even usable in a standard system. They made their own mezzanine socket with their own communication protocol, so it's only usable on custom-built server motherboards. The 1080 Ti, as a consumer card, will be PCIe. That alone is enough of a differentiator.

  • (Score: 2) by Dr Spin on Saturday May 07 2016, @06:10PM

    by Dr Spin (5239) on Saturday May 07 2016, @06:10PM (#342953)

    Is there even the remotest possibility that the Linux drivers will actually work?

    I have spent the last 5 years actively avoiding NVidia products (or binning them).

    Both the open source and closed drivers appear to be worthless trash, and I am not going to use Windows just to please some hapless hardware manufacturer when I can buy from someone else. In simple terms: if they are so clever, why can't they write drivers that work?

    Disclaimer: I am quite happy with the command line, and a 4MB graphics card would probably suit me. Except that sometimes I want to do a bit of CAD or edit a video.

    --
    Warning: Opening your mouth may invalidate your brain!
    • (Score: 1, Insightful) by Anonymous Coward on Saturday May 07 2016, @06:42PM

      by Anonymous Coward on Saturday May 07 2016, @06:42PM (#342959)

      sometimes I want to do a bit of CAD or edit a video.

      I just want a workstation with a fast GPU to run Resolve. [blackmagicdesign.com] Apple don't make upgradable workstations that'll take cards like these, and Blackmagic don't make Resolve available for Linux unless you're buying the ($$$) turnkey package. I simply do not want to run Microsoft Windows. Since Microsoft made Windows 10 completely unsuitable for workstation or business use, Microsoft clearly don't want anyone running Windows either.

    • (Score: 2) by Gravis on Saturday May 07 2016, @06:56PM

      by Gravis (4596) on Saturday May 07 2016, @06:56PM (#342962)

      Sounds like you have been using old ass drivers. Anyway, given the low level nature of Vulkan, I wouldn't be surprised if an open source graphics card that was compatible with everything (e.g. OpenGL) actually popped up in the next few years. It might be FPGA based, but it's a start.

      • (Score: 2) by bitstream on Saturday May 07 2016, @09:40PM

        by bitstream (6144) on Saturday May 07 2016, @09:40PM (#343010) Journal

        If one does go FPGA, why not just feed it OpenGL directly? ;-)
        Bypass all slow buses etc..
        (and get 3D remote display capability)

        • (Score: 2) by Gravis on Wednesday May 11 2016, @03:37PM

          by Gravis (4596) on Wednesday May 11 2016, @03:37PM (#344653)

          If one does go FPGA, why not just feed it OpenGL directly? ;-)

          because that would require a shitload of logic and would be limited to a single version of OpenGL.

    • (Score: 3, Insightful) by Anonymous Coward on Saturday May 07 2016, @08:18PM

      by Anonymous Coward on Saturday May 07 2016, @08:18PM (#342988)

      The only thing worse than Nvidia's drivers is AMD's. You'll be lucky to get 3 years of (shitty) support, too.

      Nvidia has plenty of flaws....but come on now.

      • (Score: 2) by LoRdTAW on Thursday June 02 2016, @10:33PM

        by LoRdTAW (3755) on Thursday June 02 2016, @10:33PM (#354255) Journal

        When will this tired old myth propagated by Nvidia fanbois ever die? And insightful? Please. Their drivers have been pretty stable for years. I never had an issue with my 6850, which is around 5 years old now and has gone through multiple driver updates. Maybe in the early 2000s, the old Ati days. But seriously, knock it off.

    • (Score: 2) by bitstream on Saturday May 07 2016, @09:18PM

      by bitstream (6144) on Saturday May 07 2016, @09:18PM (#343003) Journal

      Maybe this [wikipedia.org] can inspire you?

      As another poster mentioned. FPGA is the key if you want to break out of the blob dependency. Though, do remember that the FPGA configuration software is closed and proprietary (freeware).

    • (Score: 0) by Anonymous Coward on Tuesday May 10 2016, @01:46PM

      by Anonymous Coward on Tuesday May 10 2016, @01:46PM (#344210)

      My clients and I don't buy anything with an Nvidia chip in it, in response to their hostility towards Linux, but nouveau works well enough with cards that are shown to be well supported in the nouveau feature matrix. AMD cards work well with the open source radeon driver, and that's what we buy.

  • (Score: 1) by Elledan on Sunday May 08 2016, @10:12AM

    by Elledan (4807) on Sunday May 08 2016, @10:12AM (#343147)

    The GTX 980 Ti (heavily OCed), which I bought less than half a year ago, will hold me over just fine for the next few years. I feel not even the slightest tinge of annoyance and bitterness at the inevitability of obsolescence of all cool gadgets I buy.