posted by takyon on Friday June 19 2015, @12:24AM

AMD has launched its 300 series GPUs. The new GPUs are considered "refreshes" of the "Hawaii" architecture, although there are some improvements. For example, the Radeon R7 360 has 2 GB of VRAM instead of the 1 GB of the Radeon R7 260, as well as a slightly higher memory clock. The Radeon R9 390X and R9 390 raise clock speeds and double the VRAM to 8 GB, up from the 4 GB of the 290X and 290, but will launch at higher prices than the older GPUs currently sell for. Is the VRAM boost worth it?

While one could write a small tome on the matter of memory capacity, especially in light of the fact that the Fury series only has 4GB of memory, ultimately the fact that the 390 series has 8GB now is due to a couple of factors. The first of which is the fact that 4GB Hawaii cards require 2Gb GDDR5 chips (16×2Gb), a capacity that is slowly going away in favor of the 4Gb chips used on the PlayStation 4 and many of the 2015 video cards. The other reason is that it allows AMD to exploit NVIDIA's traditional stinginess with VRAM; just as with the 290 series versus the GTX 780/770, this means AMD once again has a memory capacity advantage, which helps to shore up the value of their cards versus what NVIDIA offers at the same price.

Meanwhile with the above in mind, based on comments from AMD product managers, it sounds like the use of 4Gb chips also plays a part in the [20%] memory clockspeed increases we're seeing on the 390 series. Later generation chips don't just get bigger, but they get faster and operate at lower voltages as well, and from what we've seen it looks like AMD is taking advantage of all of these factors.
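The capacity math behind the excerpt is straightforward. As a quick illustrative Python sketch (the 512-bit bus and per-chip densities are the figures quoted above; the helper function is ours, not AMD's):

```python
# A 512-bit Hawaii memory bus is fed by 16 GDDR5 chips, each with a
# 32-bit interface.
CHIPS = 512 // 32  # 16 chips

def total_vram_gb(chip_density_gbit: int, chips: int = CHIPS) -> float:
    """Total VRAM in GB for a given per-chip density in gigabits."""
    return chip_density_gbit * chips / 8  # 8 bits per byte

print(total_vram_gb(2))  # 2Gb chips -> 4.0 GB (290X/290)
print(total_vram_gb(4))  # 4Gb chips -> 8.0 GB (390X/390)
```

So moving from 2Gb to 4Gb chips doubles capacity without changing the chip count or bus width, which is exactly the 4 GB to 8 GB jump described above.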

More interesting will be the Radeon R9 Fury X and Radeon R9 Fury, which use the new "Fiji" architecture. These will be AMD's first GPUs to ship with 4 GB of High Bandwidth Memory (HBM). Fury X is a water-cooled card that will launch on June 24th for $649. Fury is an air-cooled version with fewer stream processors and texture units (a product of lower yields) than the Fury X. It will launch on July 14th at $549. AMD claims that the new Fiji GPUs have 1.5 times the performance per watt of the R9 290X, partially due to the decrease in power needed by stacks of HBM vs. GDDR5 memory.

Later this summer, AMD will launch a 6" Fiji card with HBM called "Nano". AMD will launch a "Dual" card sometime in the fall, presumably the equivalent of two Fury X GPUs.

All of the GPUs mentioned above are still made on a 28nm process.

At the launch event, AMD featured a slide portraying current VR efforts as delivering 2K (1920×1080) per eye at a 90 Hz refresh rate using an ~8.6 TFLOPS AMD GPU. According to AMD, the VR of "tomorrow" will deliver 16K (15360×8640) per eye at a 120-240 Hz refresh rate using a >1,000 TFLOPS GPU:

                       VR TODAY    VR TOMORROW
  Resolution Per Eye   2K          16K
  Refresh Rate         90 Hz       120-240 Hz
  GPU Engine for VR    8 TFLOPS    >1 PFLOPS
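A rough sense of the gap between the two columns, as a quick Python sketch (the resolutions and refresh rates are from AMD's slide; treating "2K" as 1920×1080 and "16K" as 15360×8640 per eye, with two eyes rendered):

```python
# Raw pixel throughput for the two VR scenarios on AMD's slide.
def pixels_per_second(width: int, height: int, hz: int, eyes: int = 2) -> int:
    return width * height * hz * eyes

today = pixels_per_second(1920, 1080, 90)       # ~373 Mpix/s
tomorrow = pixels_per_second(15360, 8640, 240)  # ~63.7 Gpix/s
print(f"ratio: {tomorrow / today:.0f}x")
```

At the top end of the slide's refresh range, "tomorrow" calls for roughly 170 times the pixel throughput of today's headsets, which is broadly in line with the jump from ~8 TFLOPS to >1 PFLOPS of compute.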

Additional Links:

Tom's Hardware: AMD Fury X And Fiji Preview
Tom's Hardware: AMD Radeon R9 390X, R9 380 And R7 370 Tested


Original Submission

Related Stories

AMD Shares More Details on High Bandwidth Memory

Advanced Micro Devices (AMD) has shared more details about the High Bandwidth Memory (HBM) in its upcoming GPUs.

HBM in a nutshell takes the wide & slow paradigm to its fullest. Rather than building an array of high speed chips around an ASIC to deliver 7Gbps+ per pin over a 256/384/512-bit memory bus, HBM at its most basic level involves turning memory clockspeeds way down – to just 1Gbps per pin – but in exchange making the memory bus much wider. How wide? That depends on the implementation and generation of the specification, but the examples AMD has been showcasing so far have involved 4 HBM devices (stacks), each featuring a 1024-bit wide memory bus, combining for a massive 4096-bit memory bus. It may not be clocked high, but when it's that wide, it doesn't need to be.
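The "wide & slow" trade-off in the excerpt can be checked with simple arithmetic. A quick Python sketch (the 4096-bit HBM bus at 1 Gbps is from the excerpt; the 512-bit GDDR5 bus at 5 Gbps per pin is what the R9 290X's quoted 320 GB/s works out to, assumed here for comparison):

```python
# Aggregate bandwidth = bus width x per-pin data rate.
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Memory bandwidth in GB/s (bits converted to bytes)."""
    return bus_width_bits * gbps_per_pin / 8

# GDDR5-style: narrow and fast (512-bit bus at 5 Gbps per pin)
print(bandwidth_gbs(512, 5.0))       # 320.0 GB/s
# HBM-style: wide and slow (4 stacks x 1024 bits at 1 Gbps per pin)
print(bandwidth_gbs(4 * 1024, 1.0))  # 512.0 GB/s
```

Despite running each pin at a fifth of the speed, the eight-times-wider bus comes out 60% ahead in total bandwidth.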

AMD will be the only manufacturer using the first generation of HBM, and will be joined by NVIDIA in using the second generation in 2016. HBM2 will double memory bandwidth over HBM1. The benefits of HBM include increased total bandwidth (from 320 GB/s for the R9 290X to 512 GB/s in AMD's "theoretical" 4-stack example) and reduced power consumption. HBM1 roughly triples memory bandwidth per watt compared to GDDR5, but because total bandwidth also increases, the memory in AMD's example draws a little less than half the power (14.6 W, down from 30 W on the R9 290X) rather than a third. HBM stacks will also occupy only 5-10% of the area that GDDR5 would need to provide the same amount of memory. That could potentially halve the size of the GPU package:

By AMD's own estimate, a single HBM-equipped GPU package would be less than 70mm × 70mm (4900mm2), versus 110mm × 90mm (9900mm2) for R9 290X.
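The power figures quoted above bear out the "tripled bandwidth per watt" claim, as a quick Python sketch (all numbers are the ones cited in the summary):

```python
# Bandwidth per watt for the two memory configurations cited above.
r9_290x = 320 / 30.0      # GDDR5: ~10.7 GB/s per watt
hbm_example = 512 / 14.6  # HBM:   ~35.1 GB/s per watt

print(f"{hbm_example / r9_290x:.1f}x")  # ~3.3x bandwidth per watt
```

Because the HBM example delivers 1.6 times the bandwidth, a 3.3x efficiency gain shows up as roughly halved power rather than power cut to a third.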

HBM will likely be featured in high-performance computing GPUs as well as accelerated processing units (APUs). HotHardware reckons that Radeon 300-series GPUs featuring HBM will be released in June.

Interview With Raja Koduri, Head of the Radeon Technologies Group at AMD

In a VentureBeat interview with Raja Koduri, head of the Radeon Technologies Group at AMD, the company continues to advocate for virtual reality running at "16K resolution" at up to 240 Hz:

When Advanced Micro Devices created its own stand-alone graphics division, Radeon Technologies Group, and crafted a new brand, Polaris, for its upcoming graphics architecture, it was an admission of sorts. AMD championed the combination of processors and graphics into a single chip, dubbed the accelerated processing unit (APU). But the pendulum swung a little too far in that direction, away from stand-alone graphics. And now it's Raja Koduri's job to compensate for that.

I interviewed Koduri at the 2016 International CES, the big tech trade show in Las Vegas last week. He acknowledged that AMD intends to put graphics back in the center. And he said that 2016 will be a very big year for the company as it introduces its advanced FinFET manufacturing technology, which will result in much better performance per watt — or graphics that won't melt your computer. Koduri believes this technology will help AMD beat rivals such as Nvidia. AMD's new graphics chips will hit during the middle of 2016, Koduri said.

Beyond 2016, Koduri believes that graphics is going to get more and more amazing. Virtual reality is debuting, but we won't be completely satisfied with the imagery until we get 3D graphics that can support 16K screens, or at least 16 times more pixels on a screen that[sic] we have available on most TVs today. Koduri wants to pump those pixels at you at a rate of 240 hertz, or changing the pixels at a rate of 240 times per second. Only then will you really experience true immersion that you won't be able to tell apart from the real world. He calls it "mirror-like" graphics. That's pretty far out thinking.

AMD's "Polaris" GPUs will be released sometime during the summer of 2016. Along with AMD's "Zen" CPUs and APUs, Polaris GPUs will be built using a 14nm FinFET process, skipping the 20nm node.


Original Submission

  • (Score: 2) by Snotnose on Friday June 19 2015, @12:39AM

    by Snotnose (1623) on Friday June 19 2015, @12:39AM (#198051)

    6 years ago my PC power supply died at the same time I was getting divorced, result was I bought a PS3 and have been gaming on it ever since.

    Now that PS3 is end of life'd I need to decide if I want to buy a PS4/Xbox or build a PC. Suppose I decide to build. Assuming I want to keep the price competitive with the PS4/Xbox, and I'll use my TV for a monitor, what would be a good website telling me what components to use to build the PC?

    --
    In this month in 1958 Project Snot was started. This has upset many people and is widely considered a bad idea.
  • (Score: 3, Interesting) by MichaelDavidCrawford on Friday June 19 2015, @08:01AM

    I know a couple of Atari's original game designers. Dave Johnson told me the 2600 had a single-pixel "frame buffer" that you drew into by changing its value in sync with the tv tube's electron beam. Anything time consuming had to be done during the vertical blanking interval.

    One had a choice of 2kB or 4kB cartridges, but the 4kB would result in one being paid half the royalties that would come from a 2kB implementation.

    Dave told me that the game designers would sometimes go insane as a result of their effort to shrink their code size. Imagine your game works fine with 2049 bytes of code but you can't find a way to shave that last byte off without breaking something.

    --
    Yes I Have No Bananas. [gofundme.com]
  • (Score: 1) by rleigh on Friday June 19 2015, @10:25AM

    by rleigh (4887) on Friday June 19 2015, @10:25AM (#198180) Homepage

    I'd like to upgrade my graphics card; my last two have been a Radeon HD 4850 and currently an HD 6800. Both worked well, but I really don't like the trend of needing such obscene power draw (275W!) to paint pixels. It's the biggest power usage in the system by a large margin, and is a real waste of electricity and can make the room quite unpleasantly hot in the summer. The mobile graphics solutions are much lower power, but if you want something decent there doesn't seem to be a good compromise for desktop usage.