
posted by Fnord666 on Wednesday April 04 2018, @08:46AM   Printer-friendly
from the defect-closed-will-not-fix dept.

It seems Intel has had some second thoughts about Spectre 2 microcode fixes:

Intel has issued a new "microcode revision guidance" that confesses it won't address the Meltdown and Spectre design flaws in all of its vulnerable processors – in some cases because it's too tricky to remove the Spectre v2 class of vulnerabilities.

The new guidance (pdf), issued April 2, adds a "stopped" status to Intel's "production status" category in its array of available Meltdown and Spectre security updates. "Stopped" indicates there will be no microcode patch to kill off Meltdown and Spectre.

The guidance explains that a chipset earns "stopped" status because, "after a comprehensive investigation of the microarchitectures and microcode capabilities for these products, Intel has determined to not release microcode updates for these products for one or more reasons."

Those reasons are given as:

  • Micro-architectural characteristics that preclude a practical implementation of features mitigating [Spectre] Variant 2 (CVE-2017-5715)
  • Limited Commercially Available System Software support
  • Based on customer inputs, most of these products are implemented as "closed systems" and therefore are expected to have a lower likelihood of exposure to these vulnerabilities.

Thus, if a chip family falls under one of those categories – such as Intel can't easily fix Spectre v2 in the design, or customers don't think the hardware will be exploited – it gets a "stopped" sticker. To leverage the vulnerabilities, malware needs to be running on a system, so if the computer is totally closed off from the outside world, administrators may feel it's not worth the hassle of applying messy microcode, operating system, or application updates.

"Stopped" CPUs that won't therefore get a fix are in the Bloomfield, Bloomfield Xeon, Clarksfield, Gulftown, Harpertown Xeon C0 and E0, Jasper Forest, Penryn/QC, SoFIA 3GR, Wolfdale, Wolfdale Xeon, Yorkfield, and Yorkfield Xeon families. The list includes various Xeons, Core CPUs, Pentiums, Celerons, and Atoms – just about everything Intel makes.

Most [of] the CPUs listed above are oldies that went on sale between 2007 and 2011, so it is likely few remain in normal use.
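
For administrators wondering where a given machine stands, recent Linux kernels (4.15 and later) report their own view of mitigation status under /sys/devices/system/cpu/vulnerabilities/. Below is a minimal sketch that simply prints those entries; it assumes only a Linux system exposing that sysfs interface:

    #include <stdio.h>

    /* Print the kernel's own assessment of Meltdown/Spectre status.
     * Assumes Linux with the sysfs "vulnerabilities" interface (kernel 4.15+). */
    int main(void)
    {
        const char *entries[] = {
            "/sys/devices/system/cpu/vulnerabilities/meltdown",
            "/sys/devices/system/cpu/vulnerabilities/spectre_v1",
            "/sys/devices/system/cpu/vulnerabilities/spectre_v2",
        };
        char line[256];

        for (int i = 0; i < 3; i++) {
            FILE *f = fopen(entries[i], "r");
            if (!f) {
                printf("%s: not reported by this kernel\n", entries[i]);
                continue;
            }
            if (fgets(line, sizeof line, f))
                printf("%s: %s", entries[i], line);   /* sysfs lines end in '\n' */
            fclose(f);
        }
        return 0;
    }

On a patched host, spectre_v2 typically reports a retpoline and/or IBRS/IBPB mitigation; "Vulnerable" means neither updated microcode nor a software mitigation is in effect.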


Original Submission

 
  • (Score: 5, Interesting) by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @04:24PM (#662539) Journal (2 children)

    to keep adding performance... CPU manufacturers have been adding tricks like longer instruction pipelines and speculative execution. Which... cannot be allowed... if we care about security.
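
    For concreteness, the canonical illustration of why speculative execution is a security problem rather than just a performance trick is the variant 1 "bounds check bypass" pattern from the Spectre paper. (Variant 2, the one Intel is now declining to patch on older parts, mistrains indirect branch prediction instead, but the leaked data escapes the same way: through the cache.) A sketch only, using the paper's array names; it shows the vulnerable shape, not a working attack:

        #include <stdint.h>
        #include <stddef.h>
        #include <stdio.h>

        /* Spectre variant 1 (bounds check bypass) shape, after the example in the
         * Spectre paper. Illustrative only; the attack side (mistraining the branch
         * predictor, flushing array1_size from cache, timing array2) is omitted. */

        uint8_t array1[16];
        size_t  array1_size = 16;
        uint8_t array2[256 * 4096];   /* probe array: one cache line per byte value */

        uint8_t victim_function(size_t x)
        {
            if (x < array1_size) {
                /* If the branch predictor has been trained to expect "in bounds",
                 * the CPU may speculatively perform this load for an out-of-bounds
                 * x, reading a byte it should never see and touching the cache line
                 * of array2 selected by that byte. The architectural result is
                 * discarded, but the cache state is not; timing accesses to array2
                 * afterwards recovers the byte. */
                return array2[array1[x] * 4096];
            }
            return 0;
        }

        int main(void)
        {
            printf("%d\n", victim_function(3));   /* harmless in-bounds call */
            return 0;
        }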

    There's an old joke that goes "Doc, it hurts when I do this!" "Well, don't do that."

    I remember the 70s, 80s, and early 90s when the prevailing thoughts on security were something like this:

    • It was hard to do and made things harder to get working
    • It was applied only in cases where there was a demonstrated need
    • It was assumed to do things like reduce performance and capability
    • That was okay, because "security" and "performance/capability" were two different goals, and you could pick one.

    Basically, OS makers, whether it was the unixes or DOSes or CP/Ms or Windowses, and software programmers of all and sundry kinds, worked to make good, functional software that did the task at hand with a minimum of bother. If the user/admin wanted to encumber that with security measures, so be it, that was understandable if you had such a need.

    If a perfectly good, working, operating system or piece of software could be attacked and crashed, burned, or compromised, no one was really surprised or alarmed by that, in the same way that no one went around saying that "Bob the Builder is a crappy contractor--look! If you put lit matches on his buildings when accelerants are present, the buildings burn right to the ground! Why didn't he allow for that?"

    In other words, if someone attacked and knocked over a perfectly good stack of software, then the software wasn't at fault--the attacker was.

    Gradually, over time, the attackers grew in number and bad attitude, and our views evolved to the current have-our-cake-and-eat-it-too attitude of wanting things not only functional, fast, and easy, but also completely secure, and right now, and oh yes, for a bargain price if you please.

    Maybe we should retreat a little, and get some perspective. The cheating tricks that make processors faster are, in my opinion, a terrific thing even if they come at the cost of security. It's a trade-off, and I would be happy even if I had to pick between "faster" and "bulletproof-secure".

  • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @07:06PM (#662589) (1 child)

    The cheating tricks that make processors faster are, in my opinion, a terrific thing even if they come at the cost of security. It's a trade-off, and I would be happy even if I had to pick between "faster" and "bulletproof-secure".

    It's a valid opinion, but be aware that many who have to pick between those don't just decide for themselves, in contrast with earlier days. Pre-Internet, nobody cared much about remote exploitation, and rightfully so. Pre-cloud, nobody cared much about local data exfiltration. But nowadays, your health data could very well be stored on the same node where someone else's Nodejs code is running, or where noob users are browsing unprotected on a remote desktop. That's an entirely different threat model, and requires an equally different approach to computing.

    In time, I hope that a company whose CIO deliberately chooses the "faster" option may well be held criminally liable for the resulting data leaks. We already have examples of such cases in other industries, so why not here?

    • (Score: 1, Flamebait) by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @07:59PM (#662614) Journal

      if you were CIO of a company and deliberately chose the "faster" option

      Depending on the application, this might or might not be the correct choice. As you mention (but forgot by the closing paragraph? Do you have some extreme A.D.D.?), it's a valid opinion, and connections to the public Internet influence which choice would be correct.

      Just because security is more and more commonly the more important consideration versus performance, that doesn't make it the right choice 100% of the time. Failing to recognize that, and wishing jail time on someone based on that failure, is pretty shortsighted.