
posted by Fnord666 on Monday May 20 2019, @04:28AM   Printer-friendly
from the TANSTAAFL dept.

Intel Loses 5X More Average Performance Than AMD From Mitigations: Report

Intel has published its own set of benchmark results for the mitigations to the latest round of vulnerabilities, but Phoronix, a publication that focuses on Linux-related news and reviews, has conducted its own testing and found a significant impact. Phoronix's recent testing of all mitigations in Linux found the fixes reduce Intel's performance by 16% (on average) with Hyper-Threading enabled, while AMD only suffers a 3% average loss. Phoronix derived these percentages from the geometric mean of test results from its entire test suite.
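Phoronix's "geometric mean of test results" is simple to reproduce. A minimal sketch (the ratios below are illustrative placeholders, not Phoronix's actual numbers): divide each benchmark's mitigated score by its unmitigated score, then take the geometric mean of those ratios to get the average loss:

```python
from math import prod

def geomean(xs):
    """Geometric mean: the n-th root of the product of n values."""
    return prod(xs) ** (1.0 / len(xs))

# Hypothetical per-benchmark ratios of mitigated / unmitigated performance.
intel_ratios = [0.85, 0.80, 0.92, 0.79, 0.86]  # illustrative values only
amd_ratios = [0.98, 0.96, 0.97, 0.99, 0.95]    # illustrative values only

for name, ratios in (("Intel", intel_ratios), ("AMD", amd_ratios)):
    loss = (1.0 - geomean(ratios)) * 100
    print(f"{name}: {loss:.1f}% average performance lost")
```

The geometric mean is used rather than an arithmetic mean so that no single benchmark's scale dominates the suite-wide average.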

From a performance perspective, the overhead of the mitigations narrows the gap between Intel's and AMD's processors. Intel's chips can suffer even more with Hyper-Threading (HT) disabled, a measure that some companies (such as Apple and Google) say is the only way to make Intel processors completely safe from the latest vulnerabilities. In some of Phoronix's tests, disabling HT reduced performance by almost 50%. The difference was not that great in many cases, but the gap did widen in almost every test by at least a few points.

To be clear, this is not just testing with mitigations for MDS (also known as Fallout, ZombieLoad, and RIDL), but also patches for previous exploits like Spectre and Meltdown. Because of this, AMD has also lost some performance with mitigations enabled (because AMD is vulnerable to some Spectre variants), but only 3%.

Have you disabled hyperthreading?
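For readers wanting to answer that on Linux: recent kernels expose both an SMT switch and per-vulnerability mitigation status under sysfs. A small sketch (Linux-specific; the files simply won't exist elsewhere):

```python
from pathlib import Path

def read_sysfs(path):
    """Return a sysfs file's contents, or a placeholder if it doesn't exist."""
    p = Path(path)
    return p.read_text().strip() if p.is_file() else "(not available)"

# "on", "off", "forceoff", or "notsupported" on kernels with SMT control.
print("SMT control:", read_sysfs("/sys/devices/system/cpu/smt/control"))

# One file per known issue (mds, l1tf, spectre_v2, ...) on kernels >= 4.14.
vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
if vuln_dir.is_dir():
    for f in sorted(vuln_dir.iterdir()):
        print(f"{f.name}: {f.read_text().strip()}")
```

SMT can also be turned off at runtime (as root, no reboot needed) by writing "off" to /sys/devices/system/cpu/smt/control.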


Original Submission

Related Stories

Intel Internal Memo Addresses AMD's Zen Success 16 comments

Intel internal memo highlights competitive challenges AMD poses

A recent post on Intel's employee-only portal titled, "AMD competitive profile: Where we go toe-to-toe, why they are resurgent, which chips of ours beat theirs," has found its way to Reddit and offers a fascinating glimpse into how Intel perceives one of its largest competitors and the challenges it is posing to some of its divisions.

[...] Penned by Walden Kirsch as part of "the latest in a Circuit News series on Intel's major competitors," the piece notes how AMD was the best-performing stock on the S&P 500 last year and enjoyed its second straight year of greater than 20 percent annual revenue growth in 2018. One of the reasons for AMD's resurgence, Kirsch surmises, is its strategic re-focus on high-performance products in the desktop, datacenter and server markets.

Specifically, Kirsch highlighted AMD's use of TSMC's 7nm manufacturing process, victories in public cloud offerings and its next-gen Zen-core products as factors that will "amplify the near-term competitive challenge from AMD."

[...] The company believes its 9th Gen Core processors will beat AMD's Ryzen-based products in lightly threaded productivity benchmarks as well as in gaming benchmarks. With regard to multi-threaded workloads, Intel said AMD's Matisse "is expected to lead."

Soon to be discontinued internal news series.

See also: Platform Storage Face-Off: AMD Upsets Intel
AMD Ryzen 16 Core 5.2GHz CPU Benchmark Leaked, Crushes Intel's i9 9980XE
AMD Ryzen 5 3600 6 Core, 12 Thread CPU Tested on X470 Platform – Single-Core Performance On Par With The Core i9-9900K
AMD Ryzen 7 3800X Benchmarks Leaked, Crushes Intel's i9 9900K in Multi-threaded Performance

Related: Intel's Processors Lose More Performance From Vulnerability Mitigations Than AMD's
AMD and Intel at Computex 2019: First Ryzen 3000-Series CPUs and Navi GPU Announced
HP Boss: Intel Shortages are Steering Our Suited Customers to Buy AMD
AMD Details Three Navi GPUs and First Mainstream 16-Core CPU


Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: -1, Troll) by Anonymous Coward on Monday May 20 2019, @05:33AM

    by Anonymous Coward on Monday May 20 2019, @05:33AM (#845431)

    You know, we could be commenting, on stuff, if not for the censorship and shit. Just saying.

  • (Score: 0) by Anonymous Coward on Monday May 20 2019, @05:47AM (1 child)

    by Anonymous Coward on Monday May 20 2019, @05:47AM (#845432)

    no, i like computing dirty.

    • (Score: 0) by Anonymous Coward on Monday May 20 2019, @11:11AM

      by Anonymous Coward on Monday May 20 2019, @11:11AM (#845471)

      It's called promiscuous, not dirty, you insensitive clod!

  • (Score: 4, Interesting) by Runaway1956 on Monday May 20 2019, @06:17AM (3 children)

    by Runaway1956 (2926) Subscriber Badge on Monday May 20 2019, @06:17AM (#845436) Homepage Journal

    It couldn't happen to a better bunch of people. These vulnerabilities have nothing to do with me abandoning Intel years ago - but they go a long way toward justifying my decision.

    I wonder - - - if Intel concerned itself with actually improving their chips, and stopped trying to "manage" the chips they sold, where might they be today?

    An article from 1999, demonstrating that Intel's priorities were in the wrong place, even then - https://www.schneier.com/essays/archives/1999/01/intels_processor_id.html [schneier.com]

    Intel's Processor ID
    Bruce Schneier
    ZDNet News
    January 26, 1999
    Last month Intel Corp. announced that its new processor chips would come equipped with ID numbers, a unique serial number burned into the chip during manufacture. Intel said that this ID number will help facilitate e-commerce, prevent fraud and promote digital content protection.

    Unfortunately, it doesn't do any of these things.

    To see the problem, consider this analogy: Imagine that every person was issued a unique identification number on a national ID card. A person would have to show this card in order to engage in commerce, get medical care, whatever. Such a system works, provided that the merchant, doctor, or whoever can examine the card and verify that it hasn't been forged. Now imagine that the merchants were not allowed to examine the card. They had to ask the person for his ID number, and then accept whatever number the person responded with. This system is only secure if you trust what the person says.

    The same problem exists with the Intel scheme.

    Note that Schneier skips right over the evil of national identity cards . . .

    --
    Don’t confuse the news with the truth.
    • (Score: 1, Touché) by Anonymous Coward on Monday May 20 2019, @09:53AM

      by Anonymous Coward on Monday May 20 2019, @09:53AM (#845463)

      Note that Schneier skips right over the evil of national identity cards . . .

      Without national identity cards, you'll have more spies moving freely around your nation.

    • (Score: 2) by RS3 on Monday May 20 2019, @05:49PM (1 child)

      by RS3 (6367) on Monday May 20 2019, @05:49PM (#845570)

      To me it's an all-too-common case of corporate greed. It's fairly well known that you'll make more profit by spending $ on advertising rather than on product improvement. Freaking Intel runs major television ads and sponsors major high-society sports and events. Great, but your chips are crap!

      For sure CPUs are better and faster, but I'd rather see the $ invested in testing and QC. There are people who are very good at finding design flaws, weaknesses, etc. Corporate management is always under pressure to maximize profits, and fixing a problem is never ever looked at as being a positive. "You're moving backwards!" or "Do you want to eat?" are some of the responses I've heard and even gotten.

      I don't think I've ever bought an Intel chip outright. I've bought (used, cheap) computers with Intel chips, but never new. I remember buying an NEC V30 speedup replacement for the 8086, and when I built my first 386, I bought the AMD 40 MHz chip, and some Cyrix CPUs when they existed, but never Intel. I always wished TI had gotten into the CPU market. Their DSP chips were always considered awesome.

      --
      Experience enables you to recognize a mistake every time you repeat it.
      • (Score: 2) by Runaway1956 on Monday May 20 2019, @06:10PM

        by Runaway1956 (2926) Subscriber Badge on Monday May 20 2019, @06:10PM (#845577) Homepage Journal

        I agree that TI should have gone into the market. It always seemed a logical step.

        --
        Don’t confuse the news with the truth.
  • (Score: 0) by Anonymous Coward on Monday May 20 2019, @07:29AM (9 children)

    by Anonymous Coward on Monday May 20 2019, @07:29AM (#845440)

    I am still waiting for my i7-3930K to become obsolete, but seeing AMD catching up to Intel, I'll probably switch back next upgrade.

    • (Score: 5, Interesting) by takyon on Monday May 20 2019, @07:54AM (8 children)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday May 20 2019, @07:54AM (#845444) Journal

      Catch the benchmarks in a couple of months. AMD might match Intel core for core, and will obliterate on the pricing for 6-16 cores.

      Sounds like you could wait another 2+ years for something big to drop, like stacked DRAM on the processors.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 3, Insightful) by RamiK on Monday May 20 2019, @11:07AM (7 children)

        by RamiK (1813) on Monday May 20 2019, @11:07AM (#845469)

        wait another 2+ years for something big to drop, like stacked DRAM on the processors

        You know, I don't know about you, but outside the context of graphics compute and databases, I've never seen a RAM speed bottleneck.

        I guess it will make the system cheaper. And I suppose it should be more power efficient... But considering current compute progress, why should anyone risk the shelf life of a system that is likely going to be good enough for well over a decade with a first-gen product?

        Honestly, I just don't get it.

        --
        compiling...
        • (Score: -1, Troll) by Anonymous Coward on Monday May 20 2019, @11:15AM

          by Anonymous Coward on Monday May 20 2019, @11:15AM (#845472)

          Honestly, I just don't get it.

          If honestly you fail, try dishonestly next time.
          You know, the "doing the same and expecting different results"...

        • (Score: 3, Informative) by takyon on Monday May 20 2019, @01:56PM (5 children)

          by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday May 20 2019, @01:56PM (#845507) Journal

          This is where we are heading:

          https://www.darpa.mil/attachments/3DSoCProposersDay20170915.pdf [darpa.mil]

          In the interim, 2.5D/3D DRAM stacking will result in a performance and efficiency increase:

          https://www.tomshardware.com/news/amd-3d-memory-stacking-dram,38838.html [tomshardware.com]

          You can think of it as L4 cache if you want.

          As for risking shelf life, AC doesn't need to throw out their i7-3930K system. That is a 6-core chip from 2011, and upcoming CPUs will blow it out of the water in certain workloads:

          https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i7-3930K+%40+3.20GHz&id=902 [cpubenchmark.net]
          https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadripper+1950X&id=3058 [cpubenchmark.net]
          https://www.cpubenchmark.net/cpu.php?cpu=AMD+Ryzen+Threadripper+2950X&id=3316 [cpubenchmark.net]

          An upcoming 16-core Ryzen 9 will probably beat Threadripper 1950X and maybe 2950X. It will also beat i7-3930K, even on single threaded performance.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 4, Interesting) by RamiK on Monday May 20 2019, @04:39PM (4 children)

            by RamiK (1813) on Monday May 20 2019, @04:39PM (#845554)

            This is like reading 70s super-car ads... Look, I'm not arguing the Porsche isn't faster than an SUV. I'm arguing it's all the same standing in traffic, only the Porsche costs more. Those stacked-RAM whatnots are very impressive feats of engineering, and no one is arguing they won't come in handy in servers and possibly even autonomous cars. But the tech scaled down to consumer electronics means little, since the RAM's speed just doesn't matter all that much beyond a certain point. The extra capacity will help, like in how Wayland allocates a buffer for every window as opposed to X, which uses only the one buffer, resulting in fewer context-switching locks... And if you can really dump tons of RAM cheaply, you could start thinking about old-school fat pointers and capability-based designs again... But waiting a couple of years for an x86 with stacked DRAM on the die that will likely suffer from a reduced life span? No thanks.

            Look, the 2017 DARPA paper you've linked talks about 90nm logic on 200mm wafers, so it's obviously appealing cost-wise for mobile as well as PC SoCs. But it's not going to be some huge revolutionary thing. It's going to be another 5% yearly incremental bump. And one that's going to come at many costs. Not saying it won't happen. Just saying it's nothing worth waiting for.

            --
            compiling...
            • (Score: 3, Interesting) by takyon on Monday May 20 2019, @05:50PM (3 children)

              by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday May 20 2019, @05:50PM (#845572) Journal

              Are you sure putting DRAM on/near cores is going to increase its failure rate significantly? Did Intel's eDRAM have this problem?

              Waiting at least a couple of years makes sense because you and AC already agree that AC has a decent CPU, but AC is thinking of upgrading. I'm just forecasting what will be available in a reasonable time frame. AC could wait 0 years, 2 months and get a 16-core AMD chip that would more than double performance in some cases. Add another 2 years and you could see 1-2 better versions, including the stacked DRAM thing, as well as the original 16-core Ryzens going on sale. Maybe $300 instead of $500.

              The DARPA paper shows performance increases of up to around 20x (2,000%) along with dramatic energy efficiency increases. Less for the 90nm vs 7nm comparison, but still handily outperforming 7nm. This is no 5% incremental bump. It's not coming in the next 2 years either, but don't let it be said that there isn't more performance to be squeezed out of CPUs.

              --
              [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
              • (Score: 2) by RamiK on Monday May 20 2019, @08:22PM (2 children)

                by RamiK (1813) on Monday May 20 2019, @08:22PM (#845620)

                Are you sure putting DRAM on/near cores is going to increase its failure rate significantly? Did Intel's eDRAM have this problem?

                I'm not sure about anything. I'm saying being an early adopter is never a good deal unless you have specific loads significantly benefiting from it. And personally, as a consumer, I have none.

                I am still waiting for my i7-3930K to become obsolete...

                but AC is thinking of upgrading.

                All I'm reading is the usual wait-and-see if the next Elder Scrolls runs fast enough / if the computer stops booting. Which is fine and reasonable. But waiting just because some supposed breakthrough is right around the corner? Pointless.

                The DARPA paper shows...This is no 5% incremental bump

                Let me tell you what's going to happen when stacked DRAM hits the CPU market: Intel will reduce production costs while increasing performance and power by 5%, while segmenting the good stuff to the high-end servers. And they'll get away with it just like nVidia got away with it, for the simple reason that AMD knows that if Intel gets serious, Intel will wipe the floor with them simply by competing over price and letting some other parties license their x86 / GPU stuff so they'd avoid being declared a monopoly.

                --
                compiling...
                • (Score: 2) by takyon on Monday May 20 2019, @08:42PM (1 child)

                  by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Monday May 20 2019, @08:42PM (#845633) Journal

                  Let me tell you what's going to happen when stacked DRAM hits the CPU market: Intel will reduce production costs while increasing performance and power by 5%, while segmenting the good stuff to the high-end servers. And they'll get away with it just like nVidia got away with it, for the simple reason that AMD knows that if Intel gets serious, Intel will wipe the floor with them simply by competing over price and letting some other parties license their x86 / GPU stuff so they'd avoid being declared a monopoly.

                  Intel has had years to get serious against AMD. Instead, AMD's market share is increasing in all segments, even before the general release of Zen 2:

                  https://venturebeat.com/2019/04/30/amd-gained-market-share-for-6th-straight-quarter-ceo-says/ [venturebeat.com]
                  https://www.fool.com/investing/2019/05/18/amds-data-center-dominance-could-send-the-stock-hi.aspx [fool.com]

                  And we are still on Intel's 14nm++++++++++++++ node.

                  To be clear, Intel's true competition is TSMC, and to a lesser degree, Samsung. Intel is starting to feel the pain of owning its own fabs and sucking at it. AMD's move to become fabless was mocked back in the day, but now they are profiting from it.

                  Intel's "14nm" process is so mature and "10nm" yields are so bad that they probably can't respond effectively to AMD's Zen 2. And AMD has usually been the price/performance leader, even when they couldn't match Intel's performance at all. Now AMD has the opportunity to lead on both price and performance.

                  --
                  [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
                  • (Score: 2) by RamiK on Monday May 20 2019, @09:11PM

                    by RamiK (1813) on Monday May 20 2019, @09:11PM (#845641)

                    Intel has had years to get serious against AMD.

                    Again, they don't want to since they need AMD around to avoid being branded as a monopoly. If I had to guess, they'd be fine with AMD taking over 15% of the x86 market so long as it's in the segments bordering on ARM's encroachment.

                    --
                    compiling...
  • (Score: 0) by Anonymous Coward on Monday May 20 2019, @11:18AM

    by Anonymous Coward on Monday May 20 2019, @11:18AM (#845474)

    If I learned something growing old it's this: it doesn't matter how small it became, the question is "Is it still large enough to satisfy"?

  • (Score: 3, Insightful) by shortscreen on Monday May 20 2019, @01:23PM (1 child)

    by shortscreen (2252) on Monday May 20 2019, @01:23PM (#845499) Journal

    Old home computers and DOS-based systems didn't have the concept of multiple users. And I liked it that way. They were called "personal computers" and they usually only had one user so it made sense.

    But then Win NT copied the idea of multiple users from VMS or *nix or wherever, and it was foisted on everyone. Although I never accepted it. I still run as admin on FAT32 so that I NEVER have to enter a password to access my own disk. But "they" said we shouldn't do that because we needed to run with reduced permissions for security. So basically, it is assumed that the user will inevitably run malicious code, and limiting the user's actions is somehow the solution to this, as if the user's files being trashed or leaked is less bad than the same thing happening to standard OS or application files, which can simply be reinstalled from whatever medium. And in the case of Windows, that's before the OS itself started shipping with adware, spyware, and DRM built in, raising the question of what is being "secured" from whom.

    The multi-user model is now breaking down due to bad assumptions about the hardware, but the security strategy for a single-user system is not affected. That strategy being: don't run malicious code.

    • (Score: 4, Insightful) by EEMac on Monday May 20 2019, @01:56PM

      by EEMac (6423) on Monday May 20 2019, @01:56PM (#845508)

      don't run malicious code.

      I wish this was still an option, but tons of web sites break outright if you turn off JavaScript. Ad blockers help but don't completely eliminate the problem.

  • (Score: 5, Interesting) by Subsentient on Monday May 20 2019, @02:08PM (3 children)

    by Subsentient (1111) on Monday May 20 2019, @02:08PM (#845514) Homepage Journal

    Looks like I made a really good choice building a Ryzen desktop this month. AMD is definitely back in the game, even before Spectre/Meltdown. I've overclocked my shitty $90 Ryzen 3 with an iGPU by 400 MHz, because it was unlocked. Let that sink in.

    --
    "It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
    • (Score: 3, Insightful) by HiThere on Monday May 20 2019, @04:33PM (2 children)

      by HiThere (866) on Monday May 20 2019, @04:33PM (#845551) Journal

      Unless you've got a good reason for overclocking, that's usually an unwise move. It increases the error rate and shortens the lifetime of the electronics. It's much better to get a system that doesn't NEED to be overclocked to do what you need.

      That said, it's a good string to have for your bow for when you need it. It's just that if you use it too often, you'll break the bow. (Being unlocked has other good features, however.)

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by Spamalope on Tuesday May 21 2019, @12:36AM

        by Spamalope (5233) on Tuesday May 21 2019, @12:36AM (#845679) Homepage

        If you're worried about reliability, and you're looking for extra value in the middle/low tier rather than besting the top-of-the-line parts, you can get some gains easily.
        Find a core that's being binned into a product segment and is also getting good yields. Overclock by forcing the same settings the top-binned part has. Many to most of the parts will work fine; they just haven't been tested or warrantied for that speed, and you know it's not beyond the specs AMD/Intel thinks are OK for the chip. Then, if that goes well and you want to tune some, you can try undervolting...

      • (Score: 4, Insightful) by takyon on Tuesday May 21 2019, @12:56PM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Tuesday May 21 2019, @12:56PM (#845785) Journal

        $90 Ryzen 3 is practically disposable. A Zen 2 chip could be dropped in without replacing the motherboard, although a separate GPU might be needed.

        Today's overclocking is likely to rely on tools provided by the vendor, and the CPU contains sensors that automatically adjust frequencies to match the cooling situation and/or prevent manual overclocking from doing serious damage.

        Looks like Subsentient has the Ryzen 3 2200G. Overclocking +400 MHz from the base clock is trivial.

        https://www.anandtech.com/show/12542/overclocking-the-amd-ryzen-apus-guide-results/8 [anandtech.com]

        Subsentient can wait a few years, pick up another AM4 socket CPU on sale (Zen 2, maybe Zen 3 if that is also AM4), and get more cores and better performance even before overclocking.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0) by Anonymous Coward on Monday May 20 2019, @08:30PM

    by Anonymous Coward on Monday May 20 2019, @08:30PM (#845627)

    Comments from previous story [soylentnews.org], as it seems tests only care about newer hardware, even if older computers are still useful*:

    "Desktop, Laptop, and Cloud computers may be affected. More technically, we only verified the ZombieLoad attack on Intel processor generations released from 2011 onwards." taken from zombieload page.

    https://software.intel.com/security-software-guidance/insights/deep-dive-intel-analysis-microarchitectural-data-sampling [intel.com]
    https://software.intel.com/security-software-guidance/insights/deep-dive-cpuid-enumeration-and-architectural-msrs#MDS-CPUID [intel.com]
    https://en.wikipedia.org/wiki/Nehalem_(microarchitecture) [wikipedia.org]
    So CPUs from 2008, not 2011. "Luckily" those old CPUs have to clean up smaller buffers (672 bytes) and they should do it "faster".

    *: for values of useful that include running ad blockers but not bloatware; and of course the impact of the security workarounds remains unclear.
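The buffer cleanup the quote alludes to happens via the VERW instruction once updated microcode advertises the MD_CLEAR capability, which Linux surfaces as a CPU feature flag. A quick (Linux-specific) sketch to check for it:

```python
def cpu_flags():
    """Return the CPU feature flags reported in /proc/cpuinfo (Linux/x86)."""
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
    except OSError:
        pass  # not Linux, or /proc not mounted
    return set()

# "md_clear" means the microcode supports flushing the affected buffers.
print("MD_CLEAR supported:", "md_clear" in cpu_flags())
```

On non-Linux systems (or non-x86 CPUs) this simply reports no flags rather than failing.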

  • (Score: 2) by RedGreen on Tuesday May 21 2019, @12:22AM (1 child)

    by RedGreen (888) on Tuesday May 21 2019, @12:22AM (#845677)

    No, I am too cheap to buy a processor with it enabled, so I would think this means I am safe from this junk.

    --
    "I modded down, down, down, and the flames went higher." -- Sven Olsen
    • (Score: 2) by toddestan on Tuesday May 21 2019, @02:17AM

      by toddestan (4982) on Tuesday May 21 2019, @02:17AM (#845699)

      Actually, a lot of Intel's lower-end processors have Hyper-Threading. They then turn it off for many of the mid-range chips, then it's back on for most of the high-end chips.

      And no, I don't get it either.
