posted by Fnord666 on Wednesday January 31 2018, @05:14PM   Printer-friendly
from the doesn't-raid-fix-this? dept.

Arthur T Knackerbracket has found the following story:

In 2015, Microsoft senior engineer Dan Luu forecast a bountiful harvest of chip bugs in the years ahead.

"We've seen at least two serious bugs in Intel CPUs in the last quarter, and it's almost certain there are more bugs lurking," he wrote. "There was a time when a CPU family might only have one bug per year, with serious bugs happening once every few years, or even once a decade, but we've moved past that."

Growing chip complexity, compounded by hardware virtualization and reduced design-validation effort, Luu argued, meant the incidence of hardware problems could be expected to rise.

This month's Meltdown and Spectre security flaws, which affect chip designs from AMD, Arm, and Intel to varying degrees, support that claim. But there are many other examples.


Original Submission

 
  • (Score: 1) by khallow on Thursday February 01 2018, @02:35AM (4 children)

    by khallow (3766) Subscriber Badge on Thursday February 01 2018, @02:35AM (#631302) Journal

    I mean, can we at least have a reasonable option for a validated processor that works

How would validation catch the Spectre [wikipedia.org] bug? It was derived from subtle observation of memory caching behavior and the timing of cache accesses. You can't validate what you don't know you need to validate. Even if the CPU manufacturers fully fix this one, how will we validate all possible interactions of the CPU's internal components?
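    Spectre itself is far more subtle, but the channel at its core can be sketched with a toy model (every name and number below is invented for illustration, not real exploit code): a shared cache whose hit/miss timing lets an observer infer which address a victim touched, without ever reading the victim's data.

    ```python
    # Toy cache-timing side channel: hits are "fast", misses are "slow".
    class ToyCache:
        HIT_CYCLES = 1
        MISS_CYCLES = 100

        def __init__(self):
            self.lines = set()

        def access(self, addr):
            if addr in self.lines:
                return self.HIT_CYCLES
            self.lines.add(addr)          # the line is cached after a miss
            return self.MISS_CYCLES

    def victim_touches(cache, secret_bit):
        # The victim's memory access pattern depends on a secret bit.
        cache.access(0x1000 if secret_bit else 0x2000)

    def attacker_recovers_bit(cache):
        # The attacker never reads the secret; it only times its own accesses.
        t_a = cache.access(0x1000)
        t_b = cache.access(0x2000)
        return 1 if t_a < t_b else 0

    cache = ToyCache()
    victim_touches(cache, secret_bit=1)
    print(attacker_recovers_bit(cache))   # prints 1, recovered purely from timing
    ```

    The point of the toy: nothing in the architectural state leaks, yet the microarchitectural state (which lines are cached) does, which is exactly the kind of interaction conventional functional validation never exercises.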

  • (Score: 2) by JoeMerchant on Thursday February 01 2018, @04:03AM

    by JoeMerchant (3937) on Thursday February 01 2018, @04:03AM (#631337)

    How would validation catch the Spectre bug?

    In our industry we have a fancy acronym that means: get a bunch of people who know something about the issues, force them to sit in a room and seriously consider them at least long enough to write a report and file it. Lately, there's a lot of handwringing around cybersecurity, and I'm constantly pinged by the junior guys who get worried about X, Y, or Z - and 9 times out of 10 it's nothing, but once in a while they bring up a good point, and some of those good points are things like Spectre - things nobody had considered before. Our development process on a single product goes on for a couple of years, the process calls for these cybersecurity design reviews periodically throughout those years, and over that time people do actually come up with this stuff. So, our reports analyze X, Y, and Z, and either write them off as adequately handled, or shut down the project until they are.

    The real problem is culture - like the Shuttle launch culture that couldn't be stopped for handwringing over ice in the O-rings, or a big corporate culture that doesn't want to pay its own engineers to discover vulnerabilities in the product early enough to fix them before the rest of the world does.

    I just gave a mini-speech today that included: "it needs to be tested; if we don't test it, our customers will."

    Can't validate what you don't know you need to validate.

    No, you can't - but, as world-leading experts in the field, you should be able to figure out most of the things you need to validate before the world figures them out for you. In the case of processors that serve separate users partitioned by hypervisor, the industry could have thought of (and likely did think of) this exploit before the hacker community did. As soon as they thought of it, they should have fed (and likely did not feed) that knowledge back into the design process to work out effective fixes for the next generation of processors.

    --
    🌻🌻 [google.com]
  • (Score: 1) by pTamok on Thursday February 01 2018, @09:46AM (2 children)

    by pTamok (3042) on Thursday February 01 2018, @09:46AM (#631391)

    Techniques for provably secure hardware from the gate-level and up are known. For various reasons they are not applied.

    e.g. 2011: Design and Verification of Information Flow Secure Systems [utexas.edu]

    We show that it is possible to construct hardware-software systems whose implementations are verifiably free from all illegal information flows. This work is motivated by high assurance systems such as aircraft, automobiles, banks, and medical devices where secrets should never leak to unclassified outputs or untrusted programs should never affect critical information. Such systems are so complex that, prior to this work, formal statements about the absence of covert and timing channels could only be made about simplified models of a given system instead of the final system implementation.

    and

    2017: Register transfer level information flow tracking for provably secure hardware design [ieee.org]

    That's just one IEEE paper - if you look at the home-page of one of the authors (Wei Hu [ucsd.edu]), you can see many other papers in pdf format, including the full text of the above IEEE reference [ucsd.edu]. There are plenty of references to earlier work listed in that paper.

    Note that hardware can be messed with below the gate level. Nonetheless, techniques for validating processors have been around for decades; they have 'simply' not been used in the general commercial market because they have been regarded as too time-consuming, expensive, or resource-hungry. Military and aerospace markets have had different priorities. High Assurance, as a discipline, has been around for a very long time.
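    The kernel of the gate-level information flow tracking (GLIFT) approach in those papers can be sketched in a few lines (this is my own toy encoding, assuming the standard 2-input AND example from that literature): each wire carries a value and a taint bit, and shadow logic propagates taint only when a tainted input can actually influence the output.

    ```python
    # GLIFT-style taint logic for a 2-input AND gate.
    # (a, a_t): value and taint bit. If b == 0 and b is untainted, the output
    # is 0 no matter what a is, so a's taint must NOT propagate.
    def and_glift(a, a_t, b, b_t):
        out = a & b
        out_t = (a_t & b_t) | (a_t & b) | (b_t & a)
        return out, out_t

    print(and_glift(1, 1, 0, 0))   # (0, 0): untainted 0 blocks the tainted input
    print(and_glift(1, 1, 1, 0))   # (1, 1): the tainted input reaches the output
    ```

    Composing such shadow gates over a whole netlist is what lets these tools make statements about the final implementation, covert and timing channels included, rather than about a simplified model.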

    • (Score: 1) by khallow on Friday February 02 2018, @05:27PM (1 child)

      by khallow (3766) Subscriber Badge on Friday February 02 2018, @05:27PM (#632063) Journal

      Nonetheless, techniques for validating processors have been around for decades, they have 'simply' not been used in the general commercial market as they have been regarded as too time-consuming, expensive, or resource hungry.

      This. The key issue is sheer impracticality: full validation is likely an NP-complete problem. But there are other issues as well.

      Note that hardware can be messed with below the gate-level.

      Hardware can also be messed with above the gate-level. Gates are merely an approximation.

      Finally, an important way to simplify a CPU and make it more efficient is to share resources of various sorts. But such sharing increases the number and complexity of interactions between components of the CPU.

      This is not impossible, but I think the value of validation is being overplayed in this thread.

      • (Score: 1) by pTamok on Friday February 02 2018, @07:36PM

        by pTamok (3042) on Friday February 02 2018, @07:36PM (#632120)

        Thanks for the reply. I heartily recommend the first reference I gave. Give it a read - it is not overly technical.

        You are likely right that the general problem is probably NP-complete, or at least intractable, if you assume things like unbounded memory and unbounded state tables. However, if you place bounds on such things, the problem becomes tractable.

        I put 'simply' in scare quotes because cost is a driver to the bottom as far as commercial business systems are concerned. If a business can make a short-term gain by ignoring security requirements, it will. You can keep the plates spinning for a while...

        It is not impossible to produce formally-proven systems, merely difficult, and you have to be discerning about your axioms. As long as people choose cheapness over correctness, we will continue to have problems like Meltdown, Spectre, and multifarious side-channel attacks. It probably doesn't matter for most business systems, but aerospace will continue to provide a proving ground for such things, hopefully followed by medical applications (do you want your pacemaker to be hackable?). I hope that at some point in the future, the benefit of formally-proven systems will outweigh the cost-increment over the slapdash approach currently used. I don't think that time will come soon, unfortunately.