
posted by martyb on Wednesday March 14 2018, @12:34PM
from the I'm-going-back-to-using-an-Abacus dept.

Security Researchers Publish Ryzen Flaws, Gave AMD 24 hours Prior Notice

With the advent of Meltdown and Spectre, there is a heightened level of nervousness around potential security flaws in modern high-performance processors, especially those that deal with the core and critical components of company business and international infrastructure. Today, CTS-Labs, a security company based in Israel, has published a whitepaper identifying four classes of potential vulnerabilities in the Ryzen, EPYC, Ryzen Pro, and Ryzen Mobile processor lines. AMD is in the process of responding to the claims, but was only given 24 hours' notice rather than the typical 90 days for standard vulnerability disclosure. No official reason was given for the shortened time.

[...] At this point AMD has not confirmed any of the issues brought forth in the CTS-Labs whitepaper, so we cannot confirm whether the findings are accurate. It has been brought to our attention that some press were pre-briefed on the issue, perhaps before AMD was notified, and that the website CTS-Labs set up for the issue was registered on February 22nd, several weeks ago. Given the level of graphics on the site, it does look like a planned 'announcement' has been in the works for a little while, seemingly with little regard for AMD's response on the issue. This stands in contrast to Meltdown and Spectre, which were shared among the affected companies several months before a planned public disclosure. CTS-Labs has also hired a PR firm to deal with incoming requests for information, which is another interesting angle to the story, as this is normally not the route security companies take. CTS-Labs is a security-focused research firm, but does not disclose its customers or the research leading to this disclosure. CTS-Labs was started in 2017, and this is its first public report.

CTS-Labs' claims revolve around AMD's Secure Processor and Promontory Chipset, and fall into four main categories, which CTS-Labs has named for maximum effect. Each category has sub-sections within.

Severe Security Advisory on AMD Processors from CTS.

Also at Tom's Hardware, Motherboard, BGR, Reuters, and Ars Technica.


Original Submission

 
  • (Score: 3, Informative) by The Mighty Buzzard on Wednesday March 14 2018, @02:24PM (8 children)

    by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Wednesday March 14 2018, @02:24PM (#652380) Homepage Journal

    Maybe I'm wrong, but it seems to me that there are not that many things, especially desktop computer operations, that cannot be done on multiple cores.

    In theory you're not wrong. In practice you are. In theory, there aren't many things that can't be broken down into very simple tasks and done in parallel, asynchronously, or both. In practice, being able to identify them and program that way is not something the majority of programmers are especially good at.

    --
    My rights don't end where your fear begins.
  • (Score: 2) by tonyPick on Wednesday March 14 2018, @02:32PM

    by tonyPick (1237) on Wednesday March 14 2018, @02:32PM (#652385) Homepage Journal
  • (Score: 4, Informative) by DannyB on Wednesday March 14 2018, @02:44PM (6 children)

    by DannyB (5839) Subscriber Badge on Wednesday March 14 2018, @02:44PM (#652391) Journal

    It seems that if systems got more and more cores, the incentive to exploit them would drive programmers to become good at this.

    People already go through horrible contortions to exploit the power of GPUs for non-graphics tasks. Imagine if you could use any language you liked and simply break your program up to use concurrent threads. There are already various ways to exploit this: message-passing frameworks, and Fork/Join operations that are easy to use.

    Here is but one simple practical example of Fork/Join that I used some months back, on a personal side project unrelated to work. I needed to produce a "heat map" type plot on a polar axis (i.e., a center point, with compass bearing and elevation), processing millions or tens of millions of data points. I create a 2D array of "buckets" of average signal strength. Each bucket in the array represents the strength of a pixel (i.e., its color) on the plot. Each bucket has a sum and a count (later also min, max, etc.). Now simply loop over all of the data points. Each data point lands in exactly one bucket, depending on that point's compass bearing from the center and its elevation above the horizon. When a data point is added to a bucket, you simply add its signal strength to the sum and increment the number-of-points counter. Thus, when it comes time to draw the plot, the average is easily computed as total strength divided by number of points.
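    A minimal sketch of those buckets in Java (hypothetical names and sizes, not the actual project code):

        // One bucket per pixel: running sum and count, average computed on demand.
        class Bucket {
            double sum;   // total signal strength accumulated so far
            long count;   // number of data points that landed here

            void add(double strength) { sum += strength; count++; }

            double average() { return count == 0 ? 0.0 : sum / count; }
        }

        // A data point: compass bearing, elevation, signal strength (assumed layout).
        record Point(double bearing, double elevation, double strength) {}

        class HeatMap {
            static final int WIDTH = 360, HEIGHT = 90;  // say, one degree per pixel
            final Bucket[][] buckets = new Bucket[HEIGHT][WIDTH];

            HeatMap() {
                for (Bucket[] row : buckets)
                    for (int x = 0; x < WIDTH; x++) row[x] = new Bucket();
            }

            // Each point lands in exactly one bucket, by bearing and elevation.
            void accumulate(Point p) {
                int x = Math.floorMod((int) p.bearing(), WIDTH);
                int y = Math.max(0, Math.min((int) p.elevation(), HEIGHT - 1));
                buckets[y][x].add(p.strength());
            }
        }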

    Great. But it takes too long to plot. Many seconds, maybe up to a minute, on a fast machine. Especially as the number of data points keeps increasing over time.

    New approach: do it in parallel. Take, say, 30 million points and break them up into "work units" of, say, half a million points each. Put the work units into a queue. Use Java's Executor framework to create enough workers that every core can run one. The framework takes a work unit off the queue and hands it to a worker. Each worker runs a pure function: it takes the work unit (a set of data points) and produces a single result, the 2D array of bucket sums and counts. The results are put into an output queue. Another process reduces the 2D arrays by smashing them together: corresponding buckets from two arrays are accumulated, until ultimately you have one single 2D array that represents the entire result.
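    Sketched with Java's ExecutorService (reusing the HeatMap and Point types above; chunk size and names are made up):

        import java.util.*;
        import java.util.concurrent.*;

        class ParallelPlot {
            // Map: each work unit independently produces its own partial HeatMap.
            // Reduce: partial maps are merged pairwise into a single result.
            static HeatMap plot(List<Point> points) throws Exception {
                int cores = Runtime.getRuntime().availableProcessors();
                ExecutorService pool = Executors.newFixedThreadPool(cores);

                int chunk = 500_000;  // half a million points per work unit
                List<Future<HeatMap>> partials = new ArrayList<>();
                for (int i = 0; i < points.size(); i += chunk) {
                    List<Point> unit = points.subList(i, Math.min(i + chunk, points.size()));
                    partials.add(pool.submit(() -> {  // a pure function of its work unit
                        HeatMap local = new HeatMap();
                        for (Point p : unit) local.accumulate(p);
                        return local;
                    }));
                }

                HeatMap result = new HeatMap();
                for (Future<HeatMap> f : partials) merge(result, f.get());
                pool.shutdown();
                return result;
            }

            // "Smash together" two partial arrays: accumulate corresponding buckets.
            static void merge(HeatMap into, HeatMap from) {
                for (int y = 0; y < HeatMap.HEIGHT; y++)
                    for (int x = 0; x < HeatMap.WIDTH; x++) {
                        into.buckets[y][x].sum   += from.buckets[y][x].sum;
                        into.buckets[y][x].count += from.buckets[y][x].count;
                    }
            }
        }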

    The speedup was dramatic. You could see all of the CPU cores light up. The whole thing was done in a few seconds. Now it was easy to alter plot parameters and almost immediately see the drawn results.

    I used easy-to-use frameworks in Java to do this (plus Swing for the GUI).

    It's just a change in the way of thinking. Programmers need to start thinking like this. It is the future. There is only so fast you can make CPUs go. But we can continue adding more and more transistors. So what will happen? Either we'll keep making single cores have way more transistors, or we'll eventually start making more and more cores. Cheaper and cheaper.

    --
    The lower I set my standards the more accomplishments I have.
    • (Score: 2) by DannyB on Wednesday March 14 2018, @02:46PM

      by DannyB (5839) Subscriber Badge on Wednesday March 14 2018, @02:46PM (#652392) Journal

      Just to add: My new approach was literally a map/reduce. Each work unit was "mapped" by a function. So transform one list into another list. And then reduce pairwise items in the list until a single 2D array remains.
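      The same shape expressed with Java's parallel streams (a hypothetical sketch reusing the HeatMap and merge names from above):

          // Map each work unit to a partial HeatMap, then reduce pairwise.
          HeatMap result = chunks.parallelStream()     // chunks: List<List<Point>>
                  .map(unit -> {
                      HeatMap local = new HeatMap();
                      unit.forEach(local::accumulate);
                      return local;
                  })
                  // Each partial is created inside the reduction and consumed once,
                  // so merging in place is safe here.
                  .reduce((a, b) -> { ParallelPlot.merge(a, b); return a; })
                  .orElseGet(HeatMap::new);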

      --
      The lower I set my standards the more accomplishments I have.
    • (Score: 2) by The Mighty Buzzard on Wednesday March 14 2018, @03:07PM (4 children)

      by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Wednesday March 14 2018, @03:07PM (#652398) Homepage Journal

      You'd think. Apparently it's just not the way most people's brains work. I mean we have sixteen core desktop processors that can run thirty-two threads at once and not much has changed. I really don't understand why though. Even my IRC bot is multi-threaded and asynchronous. Maybe I'm just weird.

      --
      My rights don't end where your fear begins.
      • (Score: 2) by DannyB on Wednesday March 14 2018, @03:45PM (3 children)

        by DannyB (5839) Subscriber Badge on Wednesday March 14 2018, @03:45PM (#652440) Journal

        Ten years ago I did not find it straightforward to think this way.

        I considered that, logically, having to parallelize is the way of the future. Inevitable, IMO.

        So I began trying to think this way. I think it's like learning to code in the first place: you just have to practice. Maybe early in one's learning, the whole idea of thinking this way needs to be introduced, with examples. And it doesn't hurt if more languages have easy-to-use frameworks that make map/reduce operations easy.

        --
        The lower I set my standards the more accomplishments I have.
        • (Score: 2) by The Mighty Buzzard on Wednesday March 14 2018, @04:01PM (1 child)

          by The Mighty Buzzard (18) Subscriber Badge <themightybuzzard@proton.me> on Wednesday March 14 2018, @04:01PM (#652450) Homepage Journal

          Well, not everything's big data even today, so map/reduce proficiency would be of limited usefulness to programmers as a whole. Being able to write your program in such a way as to eliminate threading bottlenecks, though, should be required in every code monkey's mental toolbox.

          --
          My rights don't end where your fear begins.
          • (Score: 2) by DannyB on Wednesday March 14 2018, @04:38PM

            by DannyB (5839) Subscriber Badge on Wednesday March 14 2018, @04:38PM (#652479) Journal

            Map / Reduce is not just for big data.

            It is something any Lisp programmer understands, long before big data.

            I just gave an example where I did map/reduce in a desktop GUI application. (I mentioned "Swing" on Java) And got a dramatic performance improvement.

            I increasingly see applications of the technique without big data.

            (Unless you consider my input data file of tens of millions of data points to be big data.)

            Map/reduce and message-passing frameworks are both ways for an average code monkey to write correct multi-threaded code. Part of this is having higher-order languages that provide suitable abstractions.

            --
            The lower I set my standards the more accomplishments I have.
        • (Score: 3, Insightful) by TheRaven on Wednesday March 14 2018, @05:03PM

          by TheRaven (270) on Wednesday March 14 2018, @05:03PM (#652496) Journal
          I think that the difficult thing is not writing parallel code or writing serial code, but writing code that is mostly serial but has some parallel parts. If you start by making everything that is logically independent into a parallel task and use actor-model or CSP communication, then it's quite easy to express most problems. It's then very difficult to statically determine which bits want to be combined into a single serial task for best performance. The strength of something like Erlang is that it encourages you to think in this way (a message send in Erlang is about as cheap as a function call and creating a new parallel task isn't much more expensive) and then dynamically combines tasks into sequential operations for your processor.
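          A rough Java analogue of that actor shape (a sketch only, with invented names; Erlang processes and message sends are vastly cheaper than a thread plus a queue):

            import java.util.concurrent.*;
            import java.util.function.Consumer;

            // Each "actor" owns a mailbox and handles one message at a time.
            class Actor<M> implements Runnable {
                final BlockingQueue<M> mailbox = new LinkedBlockingQueue<>();
                final Consumer<M> behavior;

                Actor(Consumer<M> behavior) { this.behavior = behavior; }

                void send(M msg) { mailbox.add(msg); }  // roughly Erlang's "!"

                public void run() {
                    try {
                        while (true) behavior.accept(mailbox.take());  // receive loop
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();  // exit on shutdown
                    }
                }
            }

            // Usage: new Thread(actor).start(); actor.send(message);

          In Java each such actor costs a full thread, so you cannot afford one per logically independent task; an Erlang-style runtime instead multiplexes huge numbers of cheap processes onto a few threads, which is what makes this decomposition practical.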
          --
          sudo mod me up