
posted by Fnord666 on Wednesday April 04 2018, @08:46AM
from the defect-closed-will-not-fix dept.

It seems Intel has had some second thoughts about Spectre 2 microcode fixes:

Intel has issued a new "microcode revision guidance" that confesses it won't address the Meltdown and Spectre design flaws in all of its vulnerable processors – in some cases because it's too tricky to remove the Spectre v2 class of vulnerabilities.

The new guidance (pdf), issued April 2, adds a "stopped" status to Intel's "production status" category in its array of available Meltdown and Spectre security updates. "Stopped" indicates there will be no microcode patch to kill off Meltdown and Spectre.

The guidance explains that a chipset earns "stopped" status because, "after a comprehensive investigation of the microarchitectures and microcode capabilities for these products, Intel has determined to not release microcode updates for these products for one or more reasons."

Those reasons are given as:

  • Micro-architectural characteristics that preclude a practical implementation of features mitigating [Spectre] Variant 2 (CVE-2017-5715)
  • Limited Commercially Available System Software support
  • Based on customer inputs, most of these products are implemented as "closed systems" and therefore are expected to have a lower likelihood of exposure to these vulnerabilities.

Thus, if a chip family falls under one of those categories – such as Intel can't easily fix Spectre v2 in the design, or customers don't think the hardware will be exploited – it gets a "stopped" sticker. To leverage the vulnerabilities, malware needs to be running on a system, so if the computer is totally closed off from the outside world, administrators may feel it's not worth the hassle of applying messy microcode, operating system, or application updates.

"Stopped" CPUs that won't therefore get a fix are in the Bloomfield, Bloomfield Xeon, Clarksfield, Gulftown, Harpertown Xeon C0 and E0, Jasper Forest, Penryn/QC, SoFIA 3GR, Wolfdale, Wolfdale Xeon, Yorkfield, and Yorkfield Xeon families. The list includes various Xeons, Core CPUs, Pentiums, Celerons, and Atoms – just about everything Intel makes.

Most [of] the CPUs listed above are oldies that went on sale between 2007 and 2011, so it is likely few remain in normal use.
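
For readers wondering how a particular Linux machine ended up, the kernel has reported per-CPU mitigation status under sysfs since version 4.15. Below is a minimal sketch in C that simply prints what the kernel says; the file paths are the standard sysfs interface, while the program itself is only illustrative:

    /* Print the kernel's view of Meltdown/Spectre mitigation status (Linux 4.15+).
       Whether a microcode-based Spectre v2 mitigation shows up here depends on
       the CPU stepping and on the updates Intel actually shipped for it. */
    #include <stdio.h>

    static void show(const char *path)
    {
        char line[256];
        FILE *f = fopen(path, "r");
        if (!f) {
            printf("%s: (not available)\n", path);
            return;
        }
        if (fgets(line, sizeof line, f))
            printf("%s: %s", path, line);   /* line already ends with '\n' */
        fclose(f);
    }

    int main(void)
    {
        show("/sys/devices/system/cpu/vulnerabilities/meltdown");
        show("/sys/devices/system/cpu/vulnerabilities/spectre_v1");
        show("/sys/devices/system/cpu/vulnerabilities/spectre_v2");
        return 0;
    }

On a patched system the spectre_v2 line typically names the mitigation in use (retpoline and/or the microcode-assisted ones); on a "stopped" CPU it may simply keep reporting the machine as vulnerable.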


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Interesting) by bradley13 on Wednesday April 04 2018, @01:39PM (6 children)

    by bradley13 (3053) on Wednesday April 04 2018, @01:39PM (#662469) Homepage Journal

    Moore's law actually says nothing at all about performance - it is about transistor density. For years, smaller transistors automatically meant faster performance. But CPUs have reached their maximum complexity, and there's really no use for more transistors. Transistor density is still going up nicely - but it's now being invested in things like putting multiple cores on the chip, or a GPU, or doing full SoC. None of these have anything to say about individual core performance.

    In an attempt to keep adding performance, Intel (and other CPU manufacturers) have been adding tricks like longer instruction pipelines and speculative execution. These may, on average, improve performance - but it turns out they were cheating: doing things that cannot be allowed, at least not if we care about security.
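
    To make the "cheating" concrete: the speculative work itself gets rolled back, but its footprint in the cache does not, and that footprint can be read out with nothing fancier than a timer. Here is a rough sketch of that timing primitive - illustrative only, since a real Spectre attack also has to mistrain the branch predictor and aim at someone else's secret:

        /* Rough sketch of the cache-timing primitive Spectre-class attacks rely on:
           a cached load takes tens of cycles, a flushed one hundreds, and that
           difference is the channel through which speculative state leaks. */
        #include <stdint.h>
        #include <stdio.h>
        #include <x86intrin.h>          /* _mm_clflush, __rdtscp (x86, GCC/Clang) */

        static uint64_t time_access(volatile uint8_t *p)
        {
            unsigned aux;
            uint64_t start = __rdtscp(&aux);
            (void)*p;                   /* perform the load */
            uint64_t end = __rdtscp(&aux);
            return end - start;
        }

        int main(void)
        {
            static uint8_t probe[4096];

            probe[0] = 1;                           /* warm it: bring into cache */
            uint64_t hot = time_access(&probe[0]);

            _mm_clflush(&probe[0]);                 /* evict it from the cache */
            uint64_t cold = time_access(&probe[0]);

            printf("cached: %llu cycles, flushed: %llu cycles\n",
                   (unsigned long long)hot, (unsigned long long)cold);
            return 0;
        }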

    Future advances in computing hardware will likely be about things other than raw CPU speed. For example, adding more than about 16 cores to a machine currently slows things down, because we don't know how to use them for general computing, yet the operating system has to manage them anyway. If we can make better use of parallelism, we could use simpler cores and put thousands of them on a chip. Or quantum computing, if it lives up to the dreams, may offer an entirely new computational paradigm.
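
    One back-of-the-envelope way to see why piling on cores stops helping is Amdahl's law: if a fraction p of a task parallelizes, the speedup on n cores is 1 / ((1 - p) + p/n), capped at 1/(1 - p) no matter how many cores you throw at it. A toy calculation (the 90% figure is just an assumption for illustration):

        /* Amdahl's law: speedup = 1 / ((1 - p) + p / n).
           This ignores the scheduling and cache-coherence overhead that can make
           extra cores a net loss, so treat it as an optimistic upper bound. */
        #include <stdio.h>

        int main(void)
        {
            const double p = 0.90;               /* assumed parallel fraction */
            const int cores[] = {1, 2, 4, 8, 16, 64, 1024};

            for (int i = 0; i < 7; i++) {
                double speedup = 1.0 / ((1.0 - p) + p / cores[i]);
                printf("%5d cores -> %5.2fx\n", cores[i], speedup);
            }
            return 0;                            /* tops out near 10x for p = 0.90 */
        }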

    Meanwhile, we've reached the point where there's almost no need to replace a computer unless it actually breaks. So we're seeing the marketing engines of all the hardware manufacturers working overtime. The most amusing example in recent history was the ad blitz convincing you that you really need that new iPhone, with Apple getting caught intentionally degrading the performance of older models. Watch for more of the same on all fronts...

    --
    Everyone is somebody else's weirdo.
  • (Score: 3, Insightful) by requerdanos on Wednesday April 04 2018, @04:09PM (2 children)

    by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @04:09PM (#662532) Journal

    CPUs have reached their maximum complexity, and there's really no use for more transistors.

    Though we undoubtedly agree on the underlying facts, I respectfully disagree with your conclusion. CPUs haven't reached their maximum complexity--nowhere close--and we are at a stone-age level of transistor density compared to what's coming. Single cores will, at some point, be mind-boggling orders of magnitude faster than what we currently have.

    I know that the Cyberdyne Systems processor for the Terminator, the Starfleet bio-neural gel packs, and 2001's HAL are still science fiction, but they're all pretty plausible, and yesterday's science fiction in many areas is today's science fact.

    Just because CPU complexity seems to have plateaued with the conditions of current design and manufacturing methods doesn't make the plateau a principle that applies to the future of computing. Making genuine advances that leap ahead of current designs is hard and expensive, whereas making clever recombinations of existing tech is less so; therefore, we do more of the latter than the former (and end up with Spectre and Meltdown). But the genuine advances that leap ahead are coming, barring the end of civilization through some unforeseen means.

    Maybe we'll use optical communications on-die to eliminate heat loss and allow us to go from 3GHz to 30GHz or 300GHz with the same power draw. Maybe unmapped properties of as yet untried materials will be found to have the side effect of moving electrons with a tenth the effort that we now have to undertake. Maybe Fairies will sprinkle magic dust on our Fabs. I don't know what the advance will be. But I know it's coming, and I am not writing off higher transistor density, and I am not writing off greater CPU complexity just because today's best engineers don't know the path to make them effective.

    Though many people have mistakenly thought so all throughout the past, today is not as good as it's ever going to get (even if it's the best it's ever been) whether we are talking about chip designs, or steam engines, or tools, or the wheel, or fire.

    Progress has been, and will continue to be, making things better.

    • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @04:34PM

      by Anonymous Coward on Wednesday April 04 2018, @04:34PM (#662541)

      I know that the Cyberdine systems processor for the Terminator, the Starfleet bio-neural gel packs, and 2001's HAL are still science fiction, but they're all pretty plausible and yesterday's science fiction in many areas is today's science fact.

      To be pedantic, those might be evolving past our current notion of a CPU, in which case the poster above would be correct.

    • (Score: 0) by Anonymous Coward on Thursday April 05 2018, @09:08AM

      by Anonymous Coward on Thursday April 05 2018, @09:08AM (#662836)

      Technologically there's a long way to go.

      We don't have an AI as smart as a crow with a brain the size of a walnut and relatively low power consumption.

      Even some insects may still be smarter than our AIs in many ways: https://www.pbs.org/newshour/science/intelligence-test-shows-bees-can-learn-to-solve-tasks-from-other-bees [pbs.org]

      Most quadcopters and UAVs can't fly as long without refuelling/recharging as most flies. There are other insects which do even better - monarch butterfly (44 hours nonstop flapping, 1000+ hours if gliding allowed: https://www.learner.org/jnorth/tm/monarch/FlightPoweredEnergyQ.html [learner.org] ) or a dragonfly (Pantala flavescens).

  • (Score: 5, Interesting) by requerdanos on Wednesday April 04 2018, @04:24PM (2 children)

    by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @04:24PM (#662539) Journal

    to keep adding performance... CPU manufacturers have been adding tricks like longer instruction pipelines and speculative execution. Which... cannot be allowed... if we care about security.

    There's an old joke that goes "Doc, it hurts when I do this!" "Well, don't do that."

    I remember the 70s, 80s, and early 90s when the prevailing thoughts on security were something like this:

    • It was hard to do and made things harder to get working
    • It was applied only in cases where there was a demonstrated need
    • It was assumed to do things like reduce performance and capability
    • which was okay, because "security" and "performance/capability" were two different goals, and you could pick one.

    Basically, OS makers, whether it was the unixes or DOSes or CP/Ms or Windowses, and software programmers of all and sundry kinds, worked to make good, functional software that did the task at hand with a minimum of bother. If the user/admin wanted to encumber that with security measures, so be it, that was understandable if you had such a need.

    If a perfectly good, working, operating system or piece of software could be attacked and crashed, burned, or compromised, no one was really surprised or alarmed by that, in the same way that no one went around saying that "Bob the Builder is a crappy contractor--look! If you put lit matches on his buildings when accelerants are present, the buildings burn right to the ground! Why didn't he allow for that?"

    In other words, if someone attacked and knocked over a perfectly good stack of software, then the software wasn't at fault--the attacker was.

    Gradually, over time, the attackers grew in number and bad attitude, and our views evolved to the current have-our-cake-and-eat-it-too attitude of wanting things not only functional, fast, and easy, but also completely secure, and right now, and oh yes, for a bargain price if you please.

    Maybe we should retreat a little, and get some perspective. The cheating tricks that make processors faster are, in my opinion, a terrific thing even if they come at the cost of security. It's a trade-off, and I would be happy even if I had to pick between "faster" and "bulletproof-secure".

    • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @07:06PM (1 child)

      by Anonymous Coward on Wednesday April 04 2018, @07:06PM (#662589)

      The cheating tricks that make processors faster are, in my opinion, a terrific thing even if they come at the cost of security. It's a trade-off, and I would be happy even if I had to pick between "faster" and "bulletproof-secure".

      It's a valid opinion, but be aware that many who have to pick between those don't just decide for themselves, in contrast with earlier days. Pre-Internet, nobody cared much about remote exploitation, and rightfully so. Pre-cloud, nobody cared much about local data exfiltration. But nowadays, your health data could very well be stored on the same node where someone else's Nodejs code is running, or where noob users are browsing unprotected on a remote desktop. That's an entirely different threat model, and requires an equally different approach to computing.

      In time, I hope that if you are CIO of a company and deliberately choose the "faster" option, it may well make your company criminally liable for data leaks. We already have examples of such cases in different industries; why not here?

      • (Score: 1, Flamebait) by requerdanos on Wednesday April 04 2018, @07:59PM

        by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @07:59PM (#662614) Journal

        if you were CIO of a company and deliberately chose the "faster" option

        Depending on the application, this might or might not be the correct choice. As you mention (but forgot by the closing paragraph? Do you have some extreme A.D.D.?), it's a valid opinion, and connections to the public Internet influence which choice would be correct.

        Just because security is more and more often the priority over performance, that doesn't make it the right choice 100% of the time. Failing to recognize that and wishing jail time on someone based on the failure is pretty shortsighted.