posted by Fnord666 on Monday May 20 2019, @04:28AM
from the TANSTAAFL dept.

Intel Loses 5X More Average Performance Than AMD From Mitigations: Report

Intel has published its own set of benchmark results for the mitigations to the latest round of vulnerabilities, but Phoronix, a publication that focuses on Linux-related news and reviews, has conducted its own testing and found a significant impact. Phoronix's recent testing of all mitigations in Linux found the fixes reduce Intel's performance by 16% (on average) with Hyper-Threading enabled, while AMD only suffers a 3% average loss. Phoronix derived these percentages from the geometric mean of test results from its entire test suite.
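
For reference, the geometric mean is the n-th root of the product of the individual results; it is the conventional way to average a benchmark suite because a single outlier test cannot dominate the figure the way it would in an arithmetic mean. A minimal Python sketch, using made-up per-test numbers:

```python
import math

def geometric_mean(values):
    # n-th root of the product, computed in log space so a long
    # benchmark suite cannot overflow the running product.
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Hypothetical per-test results: performance with mitigations applied,
# as a fraction of unmitigated performance (1.0 means no slowdown).
relative_scores = [0.95, 0.70, 0.88, 0.81, 0.92]

retained = geometric_mean(relative_scores)
print(f"average retained performance: {retained:.1%}")
print(f"average slowdown:             {1 - retained:.1%}")
```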

From a performance perspective, the overhead of the mitigations narrows the gap between Intel's and AMD's processors. Intel's chips can suffer even more with Hyper-Threading (HT) disabled, a measure that some companies (such as Apple and Google) say is the only way to make Intel processors completely safe from the latest vulnerabilities. In some of Phoronix's testing, disabling HT reduced performance by almost 50%. The difference was not that great in many cases, but the gap did widen in almost every test by at least a few points.
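
On Linux (kernel 4.19 and later), SMT can be toggled at runtime through sysfs rather than the BIOS; a minimal sketch (writing requires root, and the change does not survive a reboot; boot with "nosmt" on the kernel command line to make it permanent):

```python
from pathlib import Path

# Runtime SMT control, exposed by Linux since kernel 4.19.
SMT = Path("/sys/devices/system/cpu/smt")

def smt_state():
    # One of "on", "off", "forceoff", "notsupported", "notimplemented".
    return (SMT / "control").read_text().strip()

def disable_smt():
    # Offlines every sibling thread; requires root.
    (SMT / "control").write_text("off")

if __name__ == "__main__":
    print("SMT control:", smt_state())
    print("SMT active: ", (SMT / "active").read_text().strip())
```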

To be clear, this is not just testing with mitigations for MDS (also known as Fallout, Zombieload, and RIDL), but also patches for previous exploits like Spectre and Meltdown. Because of this, AMD also has lost some performance with mitigations enabled (because AMD is vulnerable to some Spectre variants), but only 3%.
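
A recent kernel also reports exactly which mitigations it is applying, so you can see where your own machine stands; a quick sketch:

```python
from pathlib import Path

# One file per known vulnerability (meltdown, spectre_v1, spectre_v2,
# mds, l1tf, ...); each holds the kernel's verdict for this CPU:
# "Not affected", "Vulnerable", or "Mitigation: <details>".
VULNS = Path("/sys/devices/system/cpu/vulnerabilities")

for entry in sorted(VULNS.iterdir()):
    print(f"{entry.name:20} {entry.read_text().strip()}")
```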

Have you disabled hyperthreading?


Original Submission

 
  • (Score: 5, Interesting) by takyon on Monday May 20 2019, @07:54AM (8 children)

    by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday May 20 2019, @07:54AM (#845444) Journal

    Catch the benchmarks in a couple of months. AMD might match Intel core for core, and will obliterate it on pricing for 6-16 cores.

    Sounds like you could wait another 2+ years for something big to drop, like stacked DRAM on the processors.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 3, Insightful) by RamiK on Monday May 20 2019, @11:07AM (7 children)

    by RamiK (1813) on Monday May 20 2019, @11:07AM (#845469)

    wait another 2+ years for something big to drop, like stacked DRAM on the processors

    You know, I don't know about you, but outside the context of graphics compute and databases, I've never seen a RAM speed bottleneck.

    I guess it will make the system cheaper. And I suppose it should be more power efficient... But considering current compute progress, why should anyone risk the shelf life of a system that is likely to be good enough for well over a decade on a first-gen product?

    Honestly, I just don't get it.
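
    (Whether RAM bandwidth bottlenecks a given workload is easy to check for yourself. An illustrative NumPy sketch, with array sizes picked as rough assumptions to be tuned to your own cache sizes; the cache-resident sum should report several times the throughput of the DRAM-bound one.)

```python
import time
import numpy as np

def throughput_gbps(array, repeats):
    # Time repeated full-array sums; report bytes touched per second.
    start = time.perf_counter()
    for _ in range(repeats):
        array.sum()
    return array.nbytes * repeats / (time.perf_counter() - start) / 1e9

small = np.ones(32_768, dtype=np.float64)            # 256 KiB: stays in cache
large = np.ones(64 * 1024 * 1024, dtype=np.float64)  # 512 MiB: spills to DRAM

print(f"cache-resident: {throughput_gbps(small, 50_000):6.1f} GB/s")
print(f"DRAM-bound:     {throughput_gbps(large, 5):6.1f} GB/s")
```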

    --
    compiling...
    • (Score: -1, Troll) by Anonymous Coward on Monday May 20 2019, @11:15AM

      by Anonymous Coward on Monday May 20 2019, @11:15AM (#845472)

      Honestly, I just don't get it.

      If honestly you fail, try dishonestly next time.
      You know, the "doing the same and expecting different results"...

    • (Score: 3, Informative) by takyon on Monday May 20 2019, @01:56PM (5 children)

      by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday May 20 2019, @01:56PM (#845507) Journal

      This is where we are heading:

      https://www.darpa.mil/attachments/3DSoCProposersDay20170915.pdf [darpa.mil]

      In the interim, 2.5D/3D DRAM stacking will result in a performance and efficiency increase:

      https://www.tomshardware.com/news/amd-3d-memory-stacking-dram,38838.html [tomshardware.com]

      You can think of it as L4 cache if you want.

      As for risking shelf life, AC doesn't need to throw out their i7-3930K system. That is a 6-core chip from 2011, and upcoming CPUs will blow it out of the water in certain workloads:

      https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-3930K+%40+3.20GHz&id=902 [cpubenchmark.net]
      https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadripper+1950X&id=3058 [cpubenchmark.net]
      https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadripper+2950X&id=3316 [cpubenchmark.net]

      An upcoming 16-core Ryzen 9 will probably beat the Threadripper 1950X and maybe the 2950X. It will also beat the i7-3930K, even on single-threaded performance.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 4, Interesting) by RamiK on Monday May 20 2019, @04:39PM (4 children)

        by RamiK (1813) on Monday May 20 2019, @04:39PM (#845554)

        This is like reading '70s supercar ads... Look, I'm not arguing the Porsche isn't faster than an SUV. I'm arguing it's all the same standing in traffic, only the Porsche costs more. Those stacked-RAM whatnots are very impressive feats of engineering, and no one is disputing they'll come in handy in servers and possibly even autonomous cars. But the tech scaled down to consumer electronics means little, since RAM speed just doesn't matter all that much beyond a certain point. The extra capacity will help, as in how Wayland allocates a buffer for every window, as opposed to X, which uses only one buffer, resulting in fewer context-switching locks... And if you could really dump tons of RAM cheaply, you could start thinking about old-school fat pointers and capability-based designs again... But waiting a couple of years for an x86 with stacked DRAM on the die that will likely suffer from a reduced life span? No thanks.

        Look, the 2017 DARPA paper you've linked talks about 90nm logic on 200mm wafers, so it's obviously appealing cost-wise for mobile as well as PC SoCs. But it's not going to be some huge revolutionary thing. It's going to be another 5% yearly incremental bump. And one that's going to come at many costs. Not saying it won't happen. Just saying it's nothing worth waiting for.

        --
        compiling...
        • (Score: 3, Interesting) by takyon on Monday May 20 2019, @05:50PM (3 children)

          by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday May 20 2019, @05:50PM (#845572) Journal

          Are you sure putting DRAM on/near cores is going to increase its failure rate significantly? Did Intel's eDRAM have this problem?

          Waiting at least a couple of years makes sense because you and AC already agree that AC has a decent CPU, but AC is thinking of upgrading. I'm just forecasting what will be available in a reasonable time frame. AC could wait 0 years, 2 months and get a 16-core AMD chip that would more than double performance in some cases. Add another 2 years and you could see 1-2 better versions, including the stacked-DRAM thing, as well as the original 16-core Ryzens going on sale. Maybe $300 instead of $500.

          The DARPA paper shows performance increases of up to around 20x (2,000%) along with dramatic energy efficiency increases. Less for the 90nm vs 7nm comparison, but still handily outperforming 7nm. This is no 5% incremental bump. It's not coming in the next 2 years either, but don't let it be said that there isn't more performance to be squeezed out of CPUs.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2) by RamiK on Monday May 20 2019, @08:22PM (2 children)

            by RamiK (1813) on Monday May 20 2019, @08:22PM (#845620)

            Are you sure putting DRAM on/near cores is going to increase its failure rate significantly? Did Intel's eDRAM have this problem?

            I'm not sure about anything. I'm saying being an early adopter is never a good deal unless you have specific workloads that benefit significantly from it. And personally, as a consumer, I have none.

            I am still waiting for my i7-3930K to become obsolete...

            but AC is thinking of upgrading.

            All I'm reading is the usual wait-and-see: whether the next Elder Scrolls runs fast enough / whether the computer stops booting. Which is fine and reasonable. But waiting just because some supposed breakthrough is right around the corner? Pointless.

            The DARPA paper shows...This is no 5% incremental bump

            Let me tell you what's going to happen when stacked DRAM hits the CPU market: Intel will cut production costs while bumping performance and power efficiency by 5%, and keep the good stuff segmented off for high-end servers. And they'll get away with it, just like nVidia got away with it, for the simple reason that AMD knows that if Intel ever got serious, Intel would wipe the floor with them simply by competing on price and letting other parties license its x86/GPU stuff so as to avoid being declared a monopoly.

            --
            compiling...
            • (Score: 2) by takyon on Monday May 20 2019, @08:42PM (1 child)

              by takyon (881) <takyonNO@SPAMsoylentnews.org> on Monday May 20 2019, @08:42PM (#845633) Journal

              Let me tell you what's going to happen when stacked DRAM hits the CPU market: Intel will cut production costs while bumping performance and power efficiency by 5%, and keep the good stuff segmented off for high-end servers. And they'll get away with it, just like nVidia got away with it, for the simple reason that AMD knows that if Intel ever got serious, Intel would wipe the floor with them simply by competing on price and letting other parties license its x86/GPU stuff so as to avoid being declared a monopoly.

              Intel has had years to get serious against AMD. Instead, AMD's market share is increasing in all segments, even before the general release of Zen 2:

              https://venturebeat.com/2019/04/30/amd-gained-market-share-for-6th-straight-quarter-ceo-says/ [venturebeat.com]
              https://www.fool.com/investing/2019/05/18/amds-data-center-dominance-could-send-the-stock-hi.aspx [fool.com]

              And we are still on Intel's 14nm++++++++++++++ node.

              To be clear, Intel's true competition is TSMC, and to a lesser degree, Samsung. Intel is starting to feel the pain of owning its own fabs and sucking at it. AMD's move to become fabless was mocked back in the day, but now they are profiting from it.

              Intel's "14nm" process is so mature and "10nm" yields are so bad that they probably can't respond effectively to AMD's Zen 2. And AMD has usually been the price/performance leader, even when they couldn't match Intel's performance at all. Now AMD has the opportunity to lead on both price and performance.

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
              • (Score: 2) by RamiK on Monday May 20 2019, @09:11PM

                by RamiK (1813) on Monday May 20 2019, @09:11PM (#845641)

                Intel has had years to get serious against AMD.

                Again, they don't want to since they need AMD around to avoid being branded as a monopoly. If I had to guess, they'd be fine with AMD taking over 15% of the x86 market so long as it's in the segments bordering on ARM's encroachment.

                --
                compiling...