Intel Loses 5X More Average Performance Than AMD From Mitigations: Report
Intel has published its own set of benchmark results for the mitigations to the latest round of vulnerabilities, but Phoronix, a publication that focuses on Linux-related news and reviews, has conducted its own testing and found a significant impact. Phoronix's recent testing of all mitigations in Linux found the fixes reduce Intel's performance by 16% (on average) with Hyper-Threading enabled, while AMD suffers only a 3% average loss. Phoronix derived these percentages from the geometric mean of test results from its entire test suite.
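For reference, the geometric mean is the n-th root of the product of n values; it is the standard way to average relative benchmark scores. A minimal Python sketch (the scores below are made-up placeholders, not Phoronix's data):

    import math

    def geometric_mean(values):
        # n-th root of the product of n values, computed via logs
        # to avoid overflow/underflow across a long test suite.
        return math.exp(sum(math.log(v) for v in values) / len(values))

    # Hypothetical relative scores (mitigated / unmitigated) per test.
    relative_scores = [0.80, 0.91, 0.85, 0.79, 0.88]
    mean = geometric_mean(relative_scores)
    print(f"Average retained performance: {mean:.1%}")  # ~84.5%
    print(f"Average performance loss: {1 - mean:.1%}")  # ~15.5%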
From a performance perspective, the overhead of the mitigations narrows the gap between Intel's and AMD's processors. Intel's chips can suffer even more with Hyper-Threading (HT) disabled, a measure that some companies (such as Apple and Google) say is the only way to make Intel processors completely safe from the latest vulnerabilities. In some of Phoronix's tests, disabling HT reduced performance by almost 50%. The difference was not that large in many cases, but the gap did widen in almost every test by at least a few points.
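On Linux, SMT (Intel's Hyper-Threading) can be checked and toggled at runtime through sysfs. A minimal sketch, assuming a kernel new enough (4.20+) to expose the control file and root privileges for the write:

    from pathlib import Path

    SMT_CONTROL = Path("/sys/devices/system/cpu/smt/control")

    def smt_status():
        # Reads the current state: "on", "off", "forceoff", or "notsupported".
        return SMT_CONTROL.read_text().strip()

    def disable_smt():
        # Turns SMT off for the running kernel (requires root).
        SMT_CONTROL.write_text("off")

    print(f"SMT is currently: {smt_status()}")
    # disable_smt()  # uncomment to actually turn Hyper-Threading off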
To be clear, this is not just testing with mitigations for MDS (also known as Fallout, ZombieLoad, and RIDL), but also patches for previous exploits like Spectre and Meltdown. Because of this, AMD has also lost some performance with mitigations enabled (AMD is vulnerable to some Spectre variants), but only 3%.
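The kernel reports which of these mitigations are active under /sys/devices/system/cpu/vulnerabilities/. A short Python sketch that dumps them (the exact set of files varies by kernel version):

    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    # One file per known vulnerability, e.g. meltdown, spectre_v1,
    # spectre_v2, mds; each contains the mitigation status string.
    for entry in sorted(VULN_DIR.iterdir()):
        print(f"{entry.name}: {entry.read_text().strip()}")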
Have you disabled hyperthreading?
(Score: 3, Informative) by takyon on Monday May 20 2019, @01:56PM (5 children)
This is where we are heading:
https://www.darpa.mil/attachments/3DSoCProposersDay20170915.pdf [darpa.mil]
In the interim, 2.5D/3D DRAM stacking will result in a performance and efficiency increase:
https://www.tomshardware.com/news/amd-3d-memory-stacking-dram,38838.html [tomshardware.com]
You can think of it as L4 cache if you want.
As for risking shelf life, AC doesn't need to throw out their i7-3930K system. That is a 6-core chip from 2011, and upcoming CPUs will blow it out of the water in certain workloads:
https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-3930K+%40+3.20GHz&id=902 [cpubenchmark.net]
https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadripper+1950X&id=3058 [cpubenchmark.net]
https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadripper+2950X&id=3316 [cpubenchmark.net]
An upcoming 16-core Ryzen 9 will probably beat Threadripper 1950X and maybe 2950X. It will also beat i7-3930K, even on single threaded performance.
(Score: 4, Interesting) by RamiK on Monday May 20 2019, @04:39PM (4 children)
This is like reading '70s supercar ads... Look, I'm not arguing the Porsche isn't faster than an SUV. I'm arguing they're both stuck in the same traffic; the Porsche just costs more. Those stacked-RAM whatnots are very impressive feats of engineering, and no one is arguing they won't come in handy in servers and possibly even autonomous cars. But the tech scaled down to consumer electronics means little, since RAM speed just doesn't matter all that much beyond a certain point. The extra capacity will help, like how Wayland allocates a buffer for every window, as opposed to X, which uses only one buffer, resulting in fewer context-switching locks... And if you could really dump tons of RAM cheaply, you could start thinking about old-school fat pointers and capability-based designs again... But waiting a couple of years for an x86 with stacked DRAM on the die that will likely suffer from a reduced life span? No thanks.
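For illustration, a fat pointer just bundles an address with the bounds of the region it may touch, so every dereference can be checked. A toy Python sketch of the idea (purely illustrative; real capability systems like CHERI enforce this in hardware):

    class FatPointer:
        # A pointer that carries its own bounds: backing buffer,
        # base offset, and the length of the accessible region.
        def __init__(self, buffer, base, length):
            self.buffer = buffer
            self.base = base
            self.length = length

        def load(self, offset):
            # Bounds-checked read: out-of-range access traps
            # instead of silently reading adjacent memory.
            if not 0 <= offset < self.length:
                raise MemoryError(f"capability violation at offset {offset}")
            return self.buffer[self.base + offset]

    memory = bytearray(b"hello, world")
    ptr = FatPointer(memory, base=7, length=5)   # grants access to "world" only
    print(bytes(ptr.load(i) for i in range(5)))  # b'world'
    ptr.load(6)                                  # raises MemoryError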
Look, the 2017 DARPA paper you linked talks about 90nm logic on 200mm wafers, so it's obviously appealing cost-wise for mobile as well as PC SoCs. But it's not going to be some huge revolutionary thing. It's going to be another 5% yearly incremental bump, and one that's going to come at many costs. Not saying it won't happen. Just saying it's nothing worth waiting for.
(Score: 3, Interesting) by takyon on Monday May 20 2019, @05:50PM (3 children)
Are you sure putting DRAM on/near cores is going to increase its failure rate significantly? Did Intel's eDRAM have this problem?
Waiting at least a couple of years makes sense because you and AC already agree that AC has a decent CPU, but AC is thinking of upgrading. I'm just forecasting what will be available in a reasonable time frame. AC could wait 0 years, 2 months and get a 16-core AMD chip that would more than double performance in some cases. Add another 2 years and you could see 1-2 better versions, including the stacked-DRAM thing, as well as the original 16-core Ryzens going on sale. Maybe $300 instead of $500.
The DARPA paper shows performance increases of up to around 20x (2,000%) along with dramatic energy efficiency increases. Less for the 90nm vs 7nm comparison, but still handily outperforming 7nm. This is no 5% incremental bump. It's not coming in the next 2 years either, but don't let it be said that there isn't more performance to be squeezed out of CPUs.
(Score: 2) by RamiK on Monday May 20 2019, @08:22PM (2 children)
I'm not sure about anything. I'm saying being an early adopter is never a good deal unless you have specific loads significantly benefiting from it. And personally, as a consumer, I have none.
All I'm reading is the usual wait-and-see if the next Elder Scrolls runs fast enough / if the computer stops booting. Which is fine and reasonable. But waiting just because some supposed breakthrough is right around the corner? Pointless.
Let me tell you what's going to happen when stacked DRAM hits the CPU market: Intel will reduce production costs while bumping performance and power by 5%, segmenting the good stuff off to high-end servers. And they'll get away with it, just like Nvidia got away with it, for the simple reason that AMD knows that if Intel ever gets serious, it will wipe the floor with AMD simply by competing on price and letting other parties license its x86/GPU stuff so it avoids being declared a monopoly.
(Score: 2) by takyon on Monday May 20 2019, @08:42PM (1 child)
Intel has had years to get serious against AMD. Instead, AMD's market share is increasing in all segments, even before the general release of Zen 2:
https://venturebeat.com/2019/04/30/amd-gained-market-share-for-6th-straight-quarter-ceo-says/ [venturebeat.com]
https://www.fool.com/investing/2019/05/18/amds-data-center-dominance-could-send-the-stock-hi.aspx [fool.com]
And we are still on Intel's 14nm++++++++++++++ node.
To be clear, Intel's true competition is TSMC, and to a lesser degree, Samsung. Intel is starting to feel the pain of owning its own fabs and sucking at it. AMD's move to become fabless was mocked back in the day, but now they are profiting from it.
Intel's "14nm" process is so mature and "10nm" yields are so bad that they probably can't respond effectively to AMD's Zen 2. And AMD has usually been the price/performance leader, even when they couldn't match Intel's performance at all. Now AMD has the opportunity to lead on both price and performance.
(Score: 2) by RamiK on Monday May 20 2019, @09:11PM
Again, they don't want to since they need AMD around to avoid being branded as a monopoly. If I had to guess, they'd be fine with AMD taking over 15% of the x86 market so long as it's in the segments bordering on ARM's encroachment.