
SoylentNews is people

posted by martyb on Tuesday January 15 2019, @12:59AM   Printer-friendly
from the more-and-faster-and-cheaper...how'd-they-do-that? dept.

At AMD's CES 2019 keynote, CEO Lisa Su revealed the Radeon VII, a $700 GPU built on TSMC's "7nm" process. The GPU should have around the same performance and price as Nvidia's already-released RTX 2080. While it does not have any dedicated ray-tracing capabilities, it includes 16 GB of High Bandwidth Memory.

Nvidia's CEO has trashed his competitor's new GPU, calling it "underwhelming" and "lousy". Meanwhile, Nvidia has announced that it will support Adaptive Sync, the standardized version of AMD's FreeSync dynamic refresh rate and anti-screen tearing technology. Lisa Su also says that AMD is working on supporting ray tracing in future GPUs, but that the ecosystem is not ready yet.

Su also showed off a third-generation Ryzen CPU at the CES keynote, but did not announce a release date or lineup details. Like the second generation of Epyc server CPUs, the new Ryzen CPUs will be primarily built on TSMC's "7nm" process, but will include a "14nm" GlobalFoundries I/O part that includes the memory controllers and PCIe lanes. The CPUs will support PCIe 4.0.
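The jump to PCIe 4.0 roughly doubles per-lane bandwidth. As a rough sketch (figures taken from the published PCIe specifications, not from AMD), the usable throughput of an x16 link can be estimated like this:

```python
# Back-of-the-envelope PCIe bandwidth comparison. Per-lane figures are
# approximate spec values after encoding overhead -- assumptions for
# illustration, not AMD-supplied numbers.

def pcie_bandwidth_gbps(gen: int, lanes: int = 16) -> float:
    """Approximate usable bandwidth in GB/s for a PCIe link.

    Per-lane throughput after encoding overhead:
      gen 3: ~0.985 GB/s, gen 4: ~1.969 GB/s
    """
    per_lane = {3: 0.985, 4: 1.969}
    return per_lane[gen] * lanes

gen3 = pcie_bandwidth_gbps(3)   # ~15.8 GB/s for x16
gen4 = pcie_bandwidth_gbps(4)   # ~31.5 GB/s for x16
print(f"PCIe 3.0 x16: {gen3:.1f} GB/s, PCIe 4.0 x16: {gen4:.1f} GB/s")
```

The doubling matters most for devices that saturate the bus, such as GPUs and NVMe storage.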

The Ryzen 3000-series ("Matisse") should provide a roughly 15% single-threaded performance increase while significantly lowering power consumption. It has also been speculated that the chips could include up to 16 cores, or 8 cores with a separate graphics chiplet. AMD has denied that there will be a variant with integrated graphics, but Lisa Su has left the door open for 12- or 16-core versions of Ryzen, saying that "There is some extra room on that package, and I think you might expect we'll have more than eight cores". Here's "that package".

Also at The Verge.

Previously: Watch AMD's CES 2019 Keynote Live: 9am PT/12pm ET/5pm UK


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 1, Insightful) by Anonymous Coward on Tuesday January 15 2019, @01:07AM (2 children)

    by Anonymous Coward on Tuesday January 15 2019, @01:07AM (#786731)

    This is not a gaming card, it is a compute card at (high-end) gaming prices. A major bottleneck of computing on a GPU is moving data from system memory to GPU memory; the number of cores and clock speed really aren't a big deal in comparison. Is there a comparable card with 16 GB of GPU memory?

    On Amazon [amazon.com], the only comparable cards I see are in the thousands-of-dollars range.
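    The transfer-bottleneck point above is easy to put numbers on. A hedged sketch, assuming roughly 15.75 GB/s of usable PCIe 3.0 x16 bandwidth (an approximation, not a benchmark):

```python
# Rough illustration of why on-card memory capacity matters: time to
# move a working set over the PCIe bus. The bus figure (~15.75 GB/s for
# PCIe 3.0 x16 after overhead) is an assumption for illustration.

def transfer_seconds(gigabytes: float, bus_gbps: float = 15.75) -> float:
    """Time in seconds to copy `gigabytes` across a bus of `bus_gbps` GB/s."""
    return gigabytes / bus_gbps

# Uploading a full 16 GB working set into GPU memory once takes ~1 s.
# Re-streaming it every frame at 60 fps would need ~960 GB/s of bus,
# far beyond PCIe -- hence the value of keeping data resident on-card.
print(f"one-shot upload of 16 GB: {transfer_seconds(16):.2f} s")
```

    With 16 GB resident on the card, a compute workload pays that one-second cost once instead of every pass.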

  • (Score: 1, Insightful) by Anonymous Coward on Tuesday January 15 2019, @03:20AM (1 child)

    by Anonymous Coward on Tuesday January 15 2019, @03:20AM (#786771)

    I already looked. The RTX 2060 is the closest 'cheap' card Nvidia has, at $350 for 6 GB of RAM and performance competitive with the Vega 56 (it is comparable to the RX 570/580 on single precision, but close to the Vega 56 on FP16/FP64).

    As far as GPU memory goes, AMD is hands down the winner compared to Nvidia, with both HBM and larger capacities than Nvidia's GDDR6 devices. However, between its lack of CUDA (or even a hands-off CUDA-to-OpenCL translation layer), it loses for a lot of applications, as well as wherever driver polish matters (their drivers are STILL not reliable on release a year later, just as when GCN 1.0 launched). Combined with their DRMed video BIOS, further lock-down, and reduced documentation, AMD is only compelling on cost, and then only for the subset of workloads you can get away with running on it.

    I don't imagine Intel is going to fare well against either of them when they finally release in a few years, but there is an opening in the market if Qualcomm or Broadcom chose to produce a discrete version of their own video hardware, both of which already have open source drivers and at least partial open source firmware thanks to reverse engineering. Since neither appears to require signed firmware blobs, both would provide a more libre alternative for open source systems, and once the money was there they should have no trouble funding development of HBM-based higher-performance models. In the meantime, models with higher clocks and dedicated memory/bus access should be competitive with low-end GPU hardware, which has stagnated for at least the past five years, leaving openings until the other market players drop their cards back down to reasonable price points.