
posted by mrpg on Saturday February 09 2019, @04:42AM
from the I-want-one-on-my-cellphone dept.

AMD, Nvidia Have Launched the Least-Appealing GPU Upgrades in History

Yesterday, AMD launched the Radeon VII, the first consumer GPU built on a 7nm process. The card is intended to compete with Nvidia's RTX family of Turing-class GPUs, and it does, broadly matching the RTX 2080. It also matches the RTX 2080 on price, at $700. Because this card began life as a professional GPU intended for scientific computing and AI/ML workloads, it's unlikely that we'll see lower-end variants. That section of AMD's product stack will be filled by 7nm Navi, which arrives later this year.

Navi will be AMD's first new 7nm GPU architecture and will offer a chance to hit 'reset' on what has been, to date, the least compelling suite of GPU launches AMD and Nvidia have ever collectively kicked out the door. Nvidia has relentlessly moved its stack pricing higher while holding performance per dollar mostly constant. With the RTX 2060 and GTX 1070 Ti fairly evenly matched across a wide suite of games, the question of whether the RTX 2060 is better priced largely hinges on whether you stick to formal launch pricing for both cards or check historical data for actual price shifts.

Such comparisons are increasingly incidental, given that Pascal GPU prices are rising and cards are getting harder to find, but they aren't meaningless for people who either bought a Pascal GPU already or are willing to consider a used card. If you're an Nvidia fan already sitting on top of a high-end Pascal card, Turing doesn't offer you a great deal of performance improvement.

AMD has not covered itself in glory, either. The Radeon VII is, at least, unreservedly faster than the Vega 64, with no equivalent last-generation GPU in AMD's stack to match it. But it also duplicates the Vega 64's power and noise profile, limiting its overall appeal, and it matches the RTX 2080's bad price. A 1.75x increase in price for a 1.32x increase in 4K performance isn't a great ratio even by the standards of ultra-high-end GPUs, where performance typically comes with a price penalty.
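
For readers who want to check that ratio, a quick Python sketch of the performance-per-dollar math follows. The ~$400 Vega 64 street price is an assumption for illustration; the 1.32x 4K performance figure is the article's:

    # Performance per dollar: Radeon VII vs. Vega 64 (prices are assumptions).
    vega64_price, radeon7_price = 400.0, 700.0
    price_ratio = radeon7_price / vega64_price     # ~1.75x
    perf_ratio = 1.32                              # 4K performance, per the article
    print(f"{perf_ratio / price_ratio:.2f}x")      # ~0.75x the performance per dollar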

Rumors and leaks have suggested that Nvidia will release a Turing-based GPU called the GTX 1660 Ti (which has also been referred to as "1160"), with a lower price but missing the dedicated ray-tracing cores of the RTX 2000-series. AMD is expected to release "7nm" Navi GPUs sometime during 2019.

Radeon VII launch coverage also at AnandTech and Tom's Hardware.

Related: AMD Returns to the Datacenter, Set to Launch "7nm" Radeon Instinct GPUs for Machine Learning in 2018
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
AMD Announces "7nm" Vega GPUs for the Enterprise Market
Nvidia Announces RTX 2060 GPU
AMD Announces Radeon VII GPU, Teases Third-Generation Ryzen CPU
AMD Responds to Radeon VII Short Supply Rumors


Original Submission

Related Stories

AMD Returns to the Datacenter, Set to Launch "7nm" Radeon Instinct GPUs for Machine Learning in 2018 6 comments

AMD 7nm Vega Radeon Instinct GPU AI Accelerators Enter Lab Testing

AMD's current generation Vega graphics architecture – which powers its Radeon RX Vega family of graphics cards – is based on a 14nm manufacturing process, but the chip company is already moving along with next generation process technology. During the company's conference call with analysts following its Q1 2018 earnings report (which it knocked out of the park, by the way), AMD CEO Dr. Lisa Su made some comments regarding its upcoming 7nm GPUs.

"I'm also happy to report that our next-generation 7-nanometer Radeon Instinct product, optimized for machine learning workloads, is running in our labs," said Dr. Su. "We remain on track to provide samples to customers later this year."

If you recall, Radeon Instinct is AMD's product line for machine intelligence and deep learning accelerators. The current lineup features a mixture of Polaris- and Vega-based GPUs and could be considered a competitor for NVIDIA's Tesla family of products. [...] According to commentary from AMD at this year's CES, 7nm Vega products for mobile along with the 7nm Radeon Instinct accelerators will ship during the latter half of 2018.

From The Next Platform: "The Slow But Sure Return Of AMD In The Datacenter".

Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations 8 comments

NVIDIA Reveals Next-Gen Turing GPU Architecture: NVIDIA Doubles-Down on Ray Tracing, GDDR6, & More

The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren't fully informed on at this time, but serve as dedicated ray tracing processors. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.
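
As an aside, the ray-triangle intersection test that these RT cores accelerate can be sketched in a few lines of Python. This is the standard Möller–Trumbore algorithm, shown purely to illustrate the math; NVIDIA has not disclosed how the RT cores actually implement it:

    # Moller-Trumbore ray-triangle intersection: a software illustration of
    # the test that RT cores perform in fixed-function hardware.
    import numpy as np

    def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-8):
        """Return distance t along the ray to the triangle, or None on a miss."""
        edge1, edge2 = v1 - v0, v2 - v0
        pvec = np.cross(direction, edge2)
        det = np.dot(edge1, pvec)
        if abs(det) < eps:                     # ray parallel to triangle plane
            return None
        inv_det = 1.0 / det
        tvec = origin - v0
        u = np.dot(tvec, pvec) * inv_det       # first barycentric coordinate
        if u < 0.0 or u > 1.0:
            return None
        qvec = np.cross(tvec, edge1)
        v = np.dot(direction, qvec) * inv_det  # second barycentric coordinate
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(edge2, qvec) * inv_det
        return t if t > eps else None

    # A ray fired down the z-axis at a triangle in the z=1 plane hits at t=1:
    print(ray_triangle_intersect(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                                 np.array([-1.0, -1.0, 1.0]),
                                 np.array([1.0, -1.0, 1.0]),
                                 np.array([0.0, 1.0, 1.0])))   # -> 1.0

In practice the BVH is walked first to cull most of the scene, so each ray only runs this test against a handful of candidate triangles rather than every triangle in the scene.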

NVIDIA is stating that the fastest Turing parts can cast 10 Billion (Giga) rays per second, which compared to the unaccelerated Pascal is a 25x improvement in ray tracing performance.

The Turing architecture also carries over the tensor cores from Volta, and indeed these have even been enhanced over Volta. The tensor cores are an important aspect of multiple NVIDIA initiatives. Along with speeding up ray tracing itself, NVIDIA's other tool in their bag of tricks is to reduce the amount of rays required in a scene by using AI denoising to clean up an image, which is something the tensor cores excel at. Of course that's not the only feature tensor cores are for – NVIDIA's entire AI/neural networking empire is all but built on them – so while not a primary focus for the SIGGRAPH crowd, this also confirms that NVIDIA's most powerful neural networking hardware will be coming to a wider range of GPUs.

New to Turing is support for a wider range of precisions, and as such the potential for significant speedups in workloads that don't require high precisions. On top of Volta's FP16 precision mode, Turing's tensor cores also support INT8 and even INT4 precisions. These are 2x and 4x faster than FP16 respectively, and while NVIDIA's presentation doesn't dive too deep here, I would imagine they're doing something similar to the data packing they use for low-precision operations on the CUDA cores. And without going too deep ourselves here, while reducing the precision of a neural network has diminishing returns – by INT4 we're down to a total of just 16(!) values – there are certain models that really can get away with this very low level of precision. And as a result the lower precision modes, while not always useful, will undoubtedly make some users quite happy at the throughput, especially in inferencing tasks.
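
To make the "16 values" point concrete, here is a toy Python sketch of symmetric INT4 quantization with two 4-bit values packed per byte. It is a generic example of the technique, not NVIDIA's actual scheme, which hasn't been detailed:

    # Toy INT4 quantization: map float weights onto the 16 signed values
    # [-8, 7], then pack two 4-bit values into each byte.
    import numpy as np

    def quantize_int4(weights):
        scale = np.max(np.abs(weights)) / 7.0              # symmetric scale
        q = np.clip(np.round(weights / scale), -8, 7).astype(np.int8)
        return q, scale

    def pack_int4_pairs(q):
        nibbles = (q & 0x0F).astype(np.uint8)              # two's-complement nibbles
        return nibbles[0::2] | (nibbles[1::2] << 4)        # low nibble first

    w = np.array([0.9, -0.4, 0.05, -1.0])
    q, scale = quantize_int4(w)
    print(q)                    # [ 6 -3  0 -7]: only 16 levels to work with
    print(pack_int4_pairs(q))   # four weights squeezed into two bytes
    print(q * scale)            # dequantized: [ 0.857 -0.429  0.    -1.   ]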

Also of note is the introduction of GDDR6 into some GPUs. The NVIDIA Quadro RTX 8000 will come with 24 GB of GDDR6 memory and a total memory bandwidth of 672 GB/s, which compares favorably to previous-generation GPUs featuring High Bandwidth Memory. Turing supports the recently announced VirtualLink. The video encoder block has been updated to include support for 8K H.265/HEVC encoding.
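
That 672 GB/s figure falls out of bus width times per-pin data rate. The sketch below assumes the Quadro RTX 8000 pairs a 384-bit bus with 14 Gbps GDDR6, the configuration reported for fully enabled Turing cards:

    # Peak memory bandwidth = bus width (bits) x per-pin data rate (Gb/s) / 8
    bus_width_bits = 384      # assumed 384-bit GDDR6 bus
    data_rate_gbps = 14       # assumed 14 Gb/s per pin
    print(bus_width_bits * data_rate_gbps / 8)   # -> 672.0 GB/s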

Ray-tracing combined with various (4m27s video) shortcuts (4m16s video) could be used for good-looking results in real time.

Also at Engadget, Notebookcheck, and The Verge.

See also: What is Ray Tracing and Why Do You Want it in Your GPU?


Original Submission

Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance 23 comments

NVIDIA Announces the GeForce RTX 20 Series: RTX 2080 Ti & 2080 on Sept. 20th, RTX 2070 in October

NVIDIA's Gamescom 2018 keynote just wrapped up, and as many have been expecting since it was announced last month, NVIDIA is getting ready to launch their next generation of GeForce hardware. Announced at the event and going on sale starting September 20th is NVIDIA's GeForce RTX 20 series, which is succeeding the current Pascal-powered GeForce GTX 10 series. Based on NVIDIA's new Turing GPU architecture and built on TSMC's 12nm "FFN" process, NVIDIA has lofty goals, looking to drive an entire paradigm shift in how games are rendered and how PC video cards are evaluated. CEO Jensen Huang has called Turing NVIDIA's most important GPU architecture since 2006's Tesla GPU architecture (G80 GPU), and from a features standpoint it's clear that he's not overstating matters.

[...] So what does Turing bring to the table? The marquee feature across the board is hybrid rendering, which combines ray tracing with traditional rasterization to exploit the strengths of both technologies. This announcement is essentially a continuation of NVIDIA's RTX announcement from earlier this year, so if you thought that announcement was a little sparse, well then here is the rest of the story.

The big change here is that NVIDIA is going to be including even more ray tracing hardware with Turing in order to offer faster and more efficient hardware ray tracing acceleration. New to the Turing architecture is what NVIDIA is calling an RT core, the underpinnings of which we aren't fully informed on at this time, but serve as dedicated ray tracing processors. These processor blocks accelerate both ray-triangle intersection checks and bounding volume hierarchy (BVH) manipulation, the latter being a very popular data structure for storing objects for ray tracing.

NVIDIA is stating that the fastest GeForce RTX part can cast 10 Billion (Giga) rays per second, which compared to the unaccelerated Pascal is a 25x improvement in ray tracing performance.

Nvidia has confirmed that the machine learning capabilities (tensor cores) of the GPU will be used to smooth out problems with ray-tracing. Real-time AI denoising (4m17s) will be used to reduce the number of samples per pixel needed to achieve photorealism.

Previously: Microsoft Announces Directx 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations

Related: Real-time Ray-tracing at GDC 2014


Original Submission

AMD Announces "7nm" Vega GPUs for the Enterprise Market 3 comments

AMD Announces Radeon Instinct MI60 & MI50 Accelerators: Powered By 7nm Vega

As part of this morning's Next Horizon event, AMD formally announced the first two accelerator cards based on the company's previously revealed 7nm Vega GPU. Dubbed the Radeon Instinct MI60 and Radeon Instinct MI50, the two cards are aimed squarely at the enterprise accelerator market, with AMD looking to significantly improve their performance competitiveness in everything from HPC to machine learning.

Both cards are based on AMD's 7nm GPU which, although we've known about it at a high level for some time now, we're only finally getting some more details on. The GPU is based on a refined version of AMD's existing Vega architecture, essentially adding compute-focused features to the chip that are necessary for the accelerator market. Interestingly, in terms of functional blocks, 7nm Vega is actually rather close to the existing 14nm "Vega 10" GPU: both feature 64 CUs and HBM2. The difference comes down to these extra accelerator features, and the die size itself.

With respect to accelerator features, 7nm Vega and the resulting MI60 & MI50 cards differentiate themselves from the previous Vega 10-powered MI25 in a few key areas. 7nm Vega brings support for half-rate double precision – up from 1/16th rate – and AMD is supporting new low-precision data types as well. These INT8 and INT4 instructions are especially useful for machine learning inferencing, where high precision isn't necessary, and AMD is able to get up to 4x the performance of an FP16/INT16 data type when using the smallest INT4 data type. However, it's not clear from AMD's presentation how flexible these new data types are – and with what instructions they can be used – which will be important for understanding the full capabilities of the new GPU. All told, AMD is claiming a peak throughput of 7.4 TFLOPS FP64, 14.7 TFLOPS FP32, and 118 TOPS for INT4.
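
Those peak figures are consistent with the usual peak-throughput formula: lanes x 2 ops per fused multiply-add x clock. A quick Python check, where the ~1.8 GHz peak clock is an assumption and the 64-CU configuration is from the article:

    # Back-of-the-envelope check of AMD's quoted MI60 peak numbers.
    stream_processors = 64 * 64       # 64 CUs x 64 lanes each
    clock_ghz = 1.8                   # assumed peak engine clock

    fp32_tflops = stream_processors * 2 * clock_ghz / 1000   # FMA = 2 ops
    print(f"FP32: {fp32_tflops:.1f} TFLOPS")      # ~14.7
    print(f"FP64: {fp32_tflops / 2:.1f} TFLOPS")  # half rate, ~7.4
    print(f"INT4: {fp32_tflops * 8:.0f} TOPS")    # 8x the FP32 rate, ~118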

Previously: AMD Returns to the Datacenter, Set to Launch "7nm" Radeon Instinct GPUs for Machine Learning in 2018

Related: AMD Previews Zen 2 Epyc CPUs with up to 64 Cores, New "Chiplet" Design


Original Submission

Nvidia Announces RTX 2060 GPU 10 comments

The NVIDIA GeForce RTX 2060 6GB Founders Edition Review: Not Quite Mainstream

In the closing months of 2018, NVIDIA finally released the long-awaited successor to the Pascal-based GeForce GTX 10 series: the GeForce RTX 20 series of video cards. Built on their new Turing architecture, these GPUs were the biggest update to NVIDIA's GPU architecture in at least half a decade, leaving almost no part of NVIDIA's architecture untouched.

So far we've looked at the GeForce RTX 2080 Ti, RTX 2080, and RTX 2070 – and along with the highlights of Turing, we've seen that the GeForce RTX 20 series is designed on a hardware and software level to enable realtime raytracing and other new specialized features for games. While the RTX 2070 is traditionally the value-oriented enthusiast offering, NVIDIA's higher price tags this time around meant that even this part was $500 and not especially value-oriented. Instead, it would seem that the role of the enthusiast value offering is going to fall to the next member in line of the GeForce RTX 20 family. And that part is coming next week.

Launching next Tuesday, January 15th is the 4th member of the GeForce RTX family: the GeForce RTX 2060 (6GB). Based on a cut-down version of the same TU106 GPU that's in the RTX 2070, this new part shaves off some of the RTX 2070's performance, but also a good deal of its price tag in the process.

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance


Original Submission

AMD Announces Radeon VII GPU, Teases Third-Generation Ryzen CPU 15 comments

At AMD's CES 2019 keynote, CEO Lisa Su revealed the Radeon VII, a $700 GPU built on TSMC's "7nm" process. The GPU should have around the same performance and price as Nvidia's already-released RTX 2080. While it does not have any dedicated ray-tracing capabilities, it includes 16 GB of High Bandwidth Memory.

Nvidia's CEO has trashed his competitor's new GPU, calling it "underwhelming" and "lousy". Meanwhile, Nvidia has announced that it will support Adaptive Sync, the standardized version of AMD's FreeSync dynamic refresh rate and anti-screen tearing technology. Lisa Su also says that AMD is working on supporting ray tracing in future GPUs, but that the ecosystem is not ready yet.

Su also showed off a third-generation Ryzen CPU at the CES keynote, but did not announce a release date or lineup details. Like the second generation of Epyc server CPUs, the new Ryzen CPUs will be primarily built on TSMC's "7nm" process, but will include a "14nm" GlobalFoundries I/O part that includes the memory controllers and PCIe lanes. The CPUs will support PCIe 4.0.

The Ryzen 3000-series ("Matisse") should provide a roughly 15% single-threaded performance increase while significantly lowering power consumption. It has been speculated that the chips could include up to 16 cores, or 8 cores with a separate graphics chiplet. AMD has denied that there will be a variant with integrated graphics, but Lisa Su has left the door open for 12- or 16-core versions of Ryzen, saying that "There is some extra room on that package, and I think you might expect we'll have more than eight cores". Here's "that package".

Also at The Verge.

Previously: Watch AMD's CES 2019 Keynote Live: 9am PT/12pm ET/5pm UK


Original Submission

AMD Responds to Radeon VII Short Supply Rumors 4 comments

AMD Responds to Radeon VII Short Supply Rumours

A few days ago we reported on rumours which alleged that AMD's Radeon VII graphics card would be in short supply, with a report claiming that AMD had "less than 5,000" units to sell.

The report also stated that AMD would lose money on every graphics card sold, likely due to the device's workstation/datacenter origins and its use of 16GB of costly HBM2 memory.

This morning, AMD released an official response to these rumours, claiming that the company expects to meet demand from gamers while declining to release detailed production numbers. On top of that, AMD confirmed that the company's AIB partners would be selling Radeon VII graphics cards alongside their retail presence on AMD.com, which means that AMD has produced their new graphics card in large enough quantities for AIBs to receive a sizable stock allocation.

Will AMD's Lisa Su Step Up as Intel's Next CEO?

Intel's next CEO is a hot topic in the tech sector. Rumors suggest that the company plans to announce its new CEO before its fourth quarter of 2018 earnings release on January 24. Intel's only rival in the PC and server CPU market is Advanced Micro Devices (AMD). Speculation of an Intel–AMD merger keeps popping up, but it's unwarranted. The merger can never be a reality, as it would remove competition from the CPU market.

At CES 2019 (the Consumer Electronics Show), AMD overshadowed Intel with its 7nm (nanometer) product announcements. AMD's presentation once again sparked speculation of an Intel–AMD merger. An article in EE Times cited Jon Peddie Research vice president Kathleen Maher's views on this speculation.

She dismissed the speculation that Intel might acquire AMD, stating that AMD has nothing Intel wants except a CEO. Her comments were reiterated by Tirias Research principal analyst Kevin Krewell, who told EE Times that Intel "could try to hire Lisa Su, but that would be hard as well."

Previously: AMD Announces Radeon VII GPU, Teases Third-Generation Ryzen CPU
Intel Core i9-9990XE: Up to 5.0 GHz, Auction Only; AMD Radeon VII: Less Than 5,000 Available


Original Submission #1
Original Submission #2

Crytek Demos Real-Time Raytracing for AMD and Non-RTX Nvidia GPUs 5 comments

Crytek Demos Noir, a CRYENGINE Based Real-Time Raytracing Demo on AMD Radeon RX Vega 56 – Can Run on Most Mainstream, Contemporary AMD and NVIDIA GPUs

Crytek has showcased a new real-time raytracing demo which is said to run on most mainstream, contemporary GPUs from NVIDIA and AMD. The minds behind one of the most visually impressive FPS franchises, Crysis, have released their new "Noir" demo, which was run on an AMD Radeon RX Vega graphics card, showing that raytracing is possible even without an NVIDIA RTX graphics card.

[...] Crytek states that the experimental ray tracing feature based on CRYENGINE's Total Illumination used to create the demo is both API and hardware agnostic, enabling ray tracing to run on most mainstream, contemporary AMD and NVIDIA GPUs. However, the future integration of this new CRYENGINE technology will be optimized to benefit from performance enhancements delivered by the latest generation of graphics cards and supported APIs like Vulkan and DX12.

Related: Real-time Ray-tracing at GDC 2014
Microsoft Announces Directx 12 Raytracing API
Nvidia Announces Turing Architecture With Focus on Ray-Tracing and Lower-Precision Operations
Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Q2VKPT: An Open Source Game Demo with Real-Time Path Tracing
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing
Nvidia Ditches the Ray-Tracing Cores with Lower-Priced GTX 1660 Ti


Original Submission

Nvidia Enables Support for DirectX Raytracing on Non-RTX GPUs, Results Lackluster 11 comments

NVIDIA Releases DirectX Raytracing Driver for GTX Cards; Posts Trio of DXR Demos

Last month at GDC 2019, NVIDIA revealed that they would finally be enabling public support for DirectX Raytracing on non-RTX cards. Long baked into the DXR specification itself – which is designed [to] encourage ray tracing hardware development while also allowing it to be implemented via traditional compute shaders – the addition of DXR support in cards without hardware support for it is a small but important step in the deployment of the API and its underlying technology. At the time of their announcement, NVIDIA announced that this driver would be released in April, and now this morning, NVIDIA is releasing the new driver.

As we covered in last month's initial announcement of the driver, this has been something of a long time coming for NVIDIA. The initial development of DXR and the first DXR demos (including the Star Wars Reflections demo) were all handled on cards without hardware RT acceleration; in particular NVIDIA Volta-based video cards. Microsoft used their own fallback layer for a time, but for the public release it was going to be up to GPU manufacturers to provide support, including their own fallback layer. So we have been expecting the release of this driver in some form for quite some time.

Of course, the elephant in the room in enabling DXR on cards without RT hardware is what it will do for performance – or perhaps the lack thereof.

Also at Wccftech.

See also: NVIDIA shows how much ray-tracing sucks on older GPUs

[For] stuff that really adds realism, like advanced shadows, global illumination and ambient occlusion, the RTX 2080 Ti outperforms the 1080 Ti by up to a factor of six.

To cite some specific examples, Port Royal will run on the RTX 2080 Ti at 53.3 fps at 2,560 x 1,440 with advanced reflections and shadows, along with DLSS anti-aliasing, turned on. The GTX 1080, on the other hand, will run at just 9.2 fps with those features enabled and won't give you any DLSS at all. That effectively makes the feature useless on those cards for that game. With basic reflections on Battlefield V, on the other hand, you'll see 30 fps on the 1080 Ti compared to 68.3 on the 2080 Ti.

Previously:


Original Submission

Nvidia Refreshes RTX 2000-Series GPUs With "Super" Branding 9 comments

The GeForce RTX 2070 Super & RTX 2060 Super Review: Smaller Numbers, Bigger Performance

NVIDIA is launching a mid-generation kicker for their mid-to-high-end video card lineup in the form of their GeForce RTX 20 series Super cards. Based on the same family of Turing GPUs as the original GeForce RTX 20 series cards, these new Super cards – all suffixed Super, appropriately enough – come with new configurations and new clockspeeds. They are, essentially, NVIDIA's 2019 card family for the $399+ video card market.

When they are released on July 9th, the GeForce RTX 20 series Super cards are going to be sharing store shelves with the rest of the GeForce RTX 20 series cards. Some cards like the RTX 2080 and RTX 2070 are set to go away, while other cards like the RTX 2080 Ti and RTX 2060 will remain on the market as-is. In practice, it's probably best to think of the new cards as NVIDIA executing either a price cut or a spec bump – depending on if you see the glass as half-empty or half-full – all without meaningfully changing their price tiers.

In terms of performance, the RTX 2060 and RTX 2070 Super cards aren't going to bring anything new to the table. In fact, if we're being blunt, the RTX 2070 Super is basically a slightly slower RTX 2080, and the RTX 2060 Super may as well be the RTX 2070. So instead, what has changed is the price that these performance levels are available at, and ultimately the performance-per-dollar ratios in parts of NVIDIA's lineup. The performance of NVIDIA's former $699 and $499 cards will now be available for $499 and $399, respectively. This leaves the vanilla RTX 2060 to hold the line at $349, and the upcoming RTX 2080 Super to fill the $699 spot – which means if you're in the $400-$700 market for video cards, your options are about to get noticeably faster.

Also at Tom's Hardware, The Verge, and Ars Technica.

Previously: Nvidia Announces RTX 2080 Ti, 2080, and 2070 GPUs, Claims 25x Increase in Ray-Tracing Performance
Nvidia Announces RTX 2060 GPU
AMD and Nvidia's Latest GPUs Are Expensive and Unappealing

Related: AMD and Intel at Computex 2019: First Ryzen 3000-Series CPUs and Navi GPU Announced
AMD Details Three Navi GPUs and First Mainstream 16-Core CPU


Original Submission

  • (Score: 1, Interesting) by Anonymous Coward on Saturday February 09 2019, @05:07AM (3 children)

    by Anonymous Coward on Saturday February 09 2019, @05:07AM (#798712)

    Let's all love Lain!

    • (Score: 0) by Anonymous Coward on Saturday February 09 2019, @05:27AM (2 children)

      by Anonymous Coward on Saturday February 09 2019, @05:27AM (#798717)

      Hey! Listen!

      • (Score: 1, Funny) by Anonymous Coward on Saturday February 09 2019, @06:32AM (1 child)

        by Anonymous Coward on Saturday February 09 2019, @06:32AM (#798734)

        Sorry, can't. Running Systemd and Pulseaudio. What?

        • (Score: 0) by Anonymous Coward on Saturday February 09 2019, @03:07PM

          by Anonymous Coward on Saturday February 09 2019, @03:07PM (#798831)

          You'll never gonna lennart.

  • (Score: 1, Insightful) by Anonymous Coward on Saturday February 09 2019, @05:22AM (4 children)

    by Anonymous Coward on Saturday February 09 2019, @05:22AM (#798716)

    We should not be surprised as the manufacturing at 7nm must be far more complex and costly than 10nm or 14nm. The price reflects that. In time, as usual, it will come down. But for now the early adopters pay for the bulk of the R&D. Same pattern with any technology.

    • (Score: 0) by Anonymous Coward on Saturday February 09 2019, @08:00AM (3 children)

      by Anonymous Coward on Saturday February 09 2019, @08:00AM (#798751)

Complete and utter bs.
They just want more money for a similar product.

      • (Score: 3, Touché) by NateMich on Saturday February 09 2019, @08:30AM (2 children)

        by NateMich (6662) on Saturday February 09 2019, @08:30AM (#798757)

        Actually, that's complete and utter truth.
        But I'm sure you're right about them wanting more money.

        • (Score: 2, Insightful) by Anonymous Coward on Saturday February 09 2019, @03:04PM (1 child)

          by Anonymous Coward on Saturday February 09 2019, @03:04PM (#798828)

          There is no truth in capitalism, only lies and high prices.

          • (Score: 3, Insightful) by acid andy on Sunday February 10 2019, @12:40AM

            by acid andy (1683) on Sunday February 10 2019, @12:40AM (#798963) Homepage Journal

            only lies and low wages.

            FTFY.

            --
            If a cat has kittens, does a rat have rittens, a bat bittens and a mat mittens?
  • (Score: 5, Interesting) by black6host on Saturday February 09 2019, @05:34AM (2 children)

    by black6host (3827) on Saturday February 09 2019, @05:34AM (#798719) Journal

    I've been chasing the video card manufacturers since the days of Hercules cards. Nowadays, I'm way behind the curve. A lowly GTX 960. And, it plays everything I want so fuck all that spending! I'll let the rich folks pay the crazy prices and wait for, as always happens, the tides to turn.

    • (Score: 2) by richtopia on Saturday February 09 2019, @06:54PM (1 child)

      by richtopia (3160) on Saturday February 09 2019, @06:54PM (#798890) Homepage Journal

I am surprised that the GPU manufacturers aren't pursuing the budget market more. Both companies have new GPU architectures and access to new manufacturing lines, so why are there no sub-$200 cards younger than two years old? AMD has demonstrated that Vega can work in APU configurations with far fewer than 56 compute units, but there are no dedicated GPU solutions fitting that market segment.

  • (Score: 3, Interesting) by bzipitidoo on Saturday February 09 2019, @05:47AM (3 children)

    by bzipitidoo (4388) on Saturday February 09 2019, @05:47AM (#798721) Journal

If die sizes still mean anything and haven't been totally distorted by marketing, how is it that CPUs have only just moved from 14 nm to 10 nm or maybe 12 nm, while GPUs are already at 7 nm? What did I miss?

    • (Score: 3, Informative) by takyon on Saturday February 09 2019, @06:40AM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday February 09 2019, @06:40AM (#798735) Journal

      The only "7nm" process in use right now is TSMC's. And it's been used to make Apple's SoCs and some other products:

      https://en.wikichip.org/wiki/apple/ax/a12 [wikichip.org]

AMD decided to release a GPU on it, seemingly available only in very limited quantities [tomshardware.com], before launching "7nm" CPUs later this year (probably in May; no earlier than March and no later than July).

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Saturday February 09 2019, @08:33AM (1 child)

      by Anonymous Coward on Saturday February 09 2019, @08:33AM (#798758)

      What did I miss?

      The last couple of years of semiconductor news, apparently.

      • (Score: 1, Interesting) by Anonymous Coward on Saturday February 09 2019, @03:13PM

        by Anonymous Coward on Saturday February 09 2019, @03:13PM (#798834)

        Also memory usually moves first to most new techs as it is 'easy and regular'. Most of a GPU is memory.

  • (Score: 2, Interesting) by Anonymous Coward on Saturday February 09 2019, @12:42PM (4 children)

    by Anonymous Coward on Saturday February 09 2019, @12:42PM (#798790)

    As long as I can play Senran Kagura: Peach Beach Splash, plus all those lewd games by Illusion, I'm probably good with my graphics card (Radeon RX 470). These fancy lithographies do nothing to make the tits look any better, so why would I buy them? Where's the part on the spec sheet where they brag about how cleanly they can render a variety of nipple puffinesses?

    • (Score: 0) by Anonymous Coward on Saturday February 09 2019, @03:06PM

      by Anonymous Coward on Saturday February 09 2019, @03:06PM (#798830)

VR porn?

    • (Score: 2) by takyon on Saturday February 09 2019, @05:42PM

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Saturday February 09 2019, @05:42PM (#798871) Journal

      Given the amount of Japanese media (novels, manga, anime) about post-Singularity VR titties and VRMMOs, you will probably be pushed into getting a better GPU at some point.

But if you are planning on skipping several generations, the absolute low-end of GPUs might become more interesting to you than the high end. Imagine something in the price range of the AMD RX 460, Nvidia GTX 1050, or even the GT 1030, except a few lithography nodes down the line. You could imagine cards like that outperforming the RX 470 at a lower cost, with lower power consumption and more than 4 GB VRAM, on something like the TSMC "5nm" node.

If we end up giving Moore the finger and getting orders of magnitude more GPU performance, then we can talk about absurd scene complexity, 8-16K resolution, 240 Hz, and real-time raytracing... all in a standalone VR headset that doesn't warm your face.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 0) by Anonymous Coward on Sunday February 10 2019, @01:27PM

      by Anonymous Coward on Sunday February 10 2019, @01:27PM (#799093)

      Just think about this for a moment. One day, hentai titles like that may be indistinguishable from real life porn. Just imagine. It's still going to be generated actors, just deepfaked well enough to look real.

      Can't wait.

    • (Score: 0) by Anonymous Coward on Sunday February 10 2019, @01:44PM

      by Anonymous Coward on Sunday February 10 2019, @01:44PM (#799097)

      I think I am in love [peachbeachsplash.com].

  • (Score: 0) by Anonymous Coward on Saturday February 09 2019, @05:07PM (1 child)

    by Anonymous Coward on Saturday February 09 2019, @05:07PM (#798863)

VR + PhysX, droll!
there once existed a dedicated pci(e?) slot card just to do the physX stuff ...
ofc all the nintendo consoles are so appealing because everything runs smooth at framerates ...
the PC is a frankenstein and has gazillion component-combo possibilities ...
sheesh .. i don't know what to say. so the next generation of GPU isn't as ...uhm...errr... game changing as anticipated? oh well.
anyways, personally i would prefer more physics than more graphic-realism in the next game iteration.

  • (Score: 1, Insightful) by Anonymous Coward on Sunday February 10 2019, @04:32AM (1 child)

    by Anonymous Coward on Sunday February 10 2019, @04:32AM (#799005)

    "A 1.75x increase in price for a 1.32x increase in 4K performance"

You assume that the relationship between price and performance is linear. That's almost never true. The price difference between a car with a max speed of 200 MPH vs one with a max speed of 100 MPH is proportionately larger than the price difference between a car with a max speed of 100 MPH vs one with a max speed of 50 MPH. What about a car with a max speed of 400 MPH vs one with a max speed of 200 MPH? Eventually you reach a limit where the price difference between a car with a max speed of X vs one with a max speed of X+1 approaches infinity.

    • (Score: 0) by Anonymous Coward on Sunday February 10 2019, @02:38PM

      by Anonymous Coward on Sunday February 10 2019, @02:38PM (#799115)

      They didn't claim that there is a perfectly linear relationship between price and performance at the high end.

      "A 1.75x increase in price for a 1.32x increase in 4K performance isn't a great ratio even by the standards of ultra-high-end GPUs, where performance typically comes with a price penalty."
