
posted by takyon on Friday June 15 2018, @02:24AM   Printer-friendly
from the more-where-that-came-from dept.

The Intel® FPU speculation vulnerability has been confirmed. Theo guessed right last week.

Using information disclosed in Theo's talk, Colin Percival developed a proof-of-concept exploit in around 5 hours. This seems to have prompted an early end to an embargo (in which OpenBSD was not involved), and the official announcement of the vulnerability.

Also at The Register, Hot Hardware, and BetaNews.

An update to The Register's article adds:

A security flaw within Intel Core and Xeon processors can be potentially exploited to swipe sensitive data from the chips' math processing units.

Malware or malicious logged-in users can attempt to leverage this design blunder to steal the inputs and results of computations performed in private by other software.

These numbers, held in FPU registers, could potentially be used to discern parts of cryptographic keys being used to secure data in the system. For example, Intel's AES encryption and decryption instructions use FPU registers to hold keys.
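To make that concrete, here is a minimal sketch (ours, not from the article; it assumes an x86-64 CPU with AES-NI and builds with gcc -maes) showing AES round-key material resident in an XMM register, which is exactly the state that lazy FPU context switching can leak:

    /* Sketch only: the round key sits in an XMM ("FPU") register while
     * _mm_aesenc_si128 runs; this register state is what CVE-2018-3665
     * can expose across lazy FPU context switches.
     * Build: gcc -O2 -maes aesni_sketch.c */
    #include <stdint.h>
    #include <stdio.h>
    #include <wmmintrin.h>   /* AES-NI intrinsics */

    int main(void) {
        uint8_t key[16]   = "0123456789abcdef";   /* toy round key */
        uint8_t block[16] = "plaintext block!";

        __m128i k = _mm_loadu_si128((const __m128i *)key);  /* key -> XMM */
        __m128i b = _mm_loadu_si128((const __m128i *)block);

        b = _mm_xor_si128(b, k);      /* AddRoundKey */
        b = _mm_aesenc_si128(b, k);   /* one AES round; key stays in XMM */

        uint8_t out[16];
        _mm_storeu_si128((__m128i *)out, b);
        for (int i = 0; i < 16; i++)
            printf("%02x", out[i]);
        putchar('\n');
        return 0;
    }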

In short, the security hole could be used to extract or guess at secret encryption keys within other programs, in certain circumstances, according to people familiar with the engineering mishap.

Modern versions of Linux – from kernel version 4.9, released in 2016, and later – and modern Windows, including Server 2016, as well as the latest spins of OpenBSD and DragonflyBSD are not affected by this flaw (CVE-2018-3665).

Windows Server 2008 is among the operating systems that will need to be patched, we understand, and fixes for affected Microsoft and non-Microsoft kernels are on their way. The Linux kernel team is back-porting mitigations to pre-4.9 kernels.
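A rough way to check where a Linux box stands against that 4.9 cutoff (a sketch of ours, not an official detection tool; distributions backport fixes, so a version number alone is not conclusive):

    /* Rough heuristic only: kernels >= 4.9 default to eager FPU state
     * switching, which avoids CVE-2018-3665 per the article. Backported
     * distro kernels may be fixed despite an older version string. */
    #include <stdio.h>
    #include <sys/utsname.h>

    int main(void) {
        struct utsname u;
        if (uname(&u) != 0) {
            perror("uname");
            return 1;
        }

        int major = 0, minor = 0;
        sscanf(u.release, "%d.%d", &major, &minor);

        if (major > 4 || (major == 4 && minor >= 9))
            printf("kernel %s: eager FPU switching expected\n", u.release);
        else
            printf("kernel %s: check vendor for an eagerfpu backport\n", u.release);
        return 0;
    }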

Essentially, hold tight, and wait for patches to land for your Intel-powered machines, if they are vulnerable. CVE-2018-3665 isn't the end of the world: malicious software has to be already running on your system to attempt to exploit it, and even then, it can only lift out crumbs at a time.

[...] Red Hat has more technical details here. RHEL 5, 6, and 7, and Enterprise MRG 2 not running kernel-alt are vulnerable. In a statement to The Register, the Linux vendor clarified that this is a potential task-to-task theft of information.


  • (Score: 3, Interesting) by Subsentient on Friday June 15 2018, @03:02AM (4 children)

    by Subsentient (1111) on Friday June 15 2018, @03:02AM (#693311) Homepage Journal

    I'm getting so tired of all these crippling bugs in Intel's chips. Most of these patches have a performance hit. Depressing. IO speed has gone in the toilet for me since I enabled KPTI on vulnerable systems.

    --
    "It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
    • (Score: 1, Interesting) by Anonymous Coward on Friday June 15 2018, @03:51AM (2 children)

      by Anonymous Coward on Friday June 15 2018, @03:51AM (#693321)

      Most of the current issues would not BE issues if the original kernel/application memory address boundaries had been followed, instead of the performance optimization of placing kernel memory in a subset of application memory. However, the overhead of the bounce buffers needed to pass data back and forth between kernel and user space was considered too great, and the choice was made to forgo that protection and place everything in the single memory space that has persisted until today.

      Tanenbaum is being proven correct in his microkernel vs. monolithic kernel debate, and we are finally reaching a point where the compromises in security made for the sake of performance are being reassessed. Both processors and system memory are now fast enough that all but the most intensive workloads will not be significantly impacted by the loss in performance. For applications that still require that level of performance, so long as they are not network-accessible or scriptable, it is safe to run them with the protections disabled on an application-by-application basis, while everything else is properly isolated with kernel/application memory space separation.

      The major exception is web browsers, which are going to take a huge performance hit, and things like WebGL, JavaScript coin miners, and DRM will no longer have the access nor the performance necessary to do their stated tasks.
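      You can put a number on that isolation cost with a toy microbenchmark (my sketch, assuming Linux and gcc; absolute numbers vary by CPU and by whether KPTI is active): time a trivial syscall in a tight loop and watch the per-call cost of crossing the kernel/user boundary.

          /* Toy measurement of kernel/user crossing cost. With KPTI
           * enabled, each entry/exit also pays for the page-table
           * switch discussed above. Build: gcc -O2 sysbench.c */
          #include <stdio.h>
          #include <time.h>
          #include <unistd.h>
          #include <sys/syscall.h>

          int main(void) {
              const long N = 1000000;
              struct timespec t0, t1;

              clock_gettime(CLOCK_MONOTONIC, &t0);
              for (long i = 0; i < N; i++)
                  syscall(SYS_getpid);   /* direct syscall, no caching */
              clock_gettime(CLOCK_MONOTONIC, &t1);

              double ns = (t1.tv_sec - t0.tv_sec) * 1e9
                        + (t1.tv_nsec - t0.tv_nsec);
              printf("%.1f ns per syscall\n", ns / N);
              return 0;
          }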

      • (Score: 0) by Anonymous Coward on Friday June 15 2018, @05:32AM

        by Anonymous Coward on Friday June 15 2018, @05:32AM (#693344)

        Uh, no. Watch the talk. The working assumption they started with, which allowed someone to write an exploit in less than one workday, is that the hardware does not bother with permission checks until it decides which speculative tree to collapse.
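        For reference, the measurement half of these exploits is nothing exotic. A minimal flush+reload timing primitive (a sketch assuming x86-64 with clflush/rdtscp and gcc; this is the side channel, not an exploit by itself) just distinguishes cached from uncached loads:

            /* Flush+reload probe: a cached line loads in tens of cycles,
             * a flushed one in hundreds. Speculatively-touched secrets
             * leave exactly this kind of cache footprint.
             * Build: gcc -O2 probe.c */
            #include <stdio.h>
            #include <stdint.h>
            #include <x86intrin.h>   /* _mm_clflush, _mm_mfence, __rdtscp */

            static uint64_t time_access(volatile uint8_t *p) {
                unsigned aux;
                _mm_mfence();
                uint64_t t0 = __rdtscp(&aux);
                (void)*p;                     /* the load being timed */
                uint64_t t1 = __rdtscp(&aux);
                return t1 - t0;
            }

            int main(void) {
                static uint8_t probe[64];

                probe[0] = 1;   /* warm the cache line */
                printf("cached:  %llu cycles\n",
                       (unsigned long long)time_access(probe));

                _mm_clflush((void *)probe);   /* evict the line */
                _mm_mfence();
                printf("flushed: %llu cycles\n",
                       (unsigned long long)time_access(probe));
                return 0;
            }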

      • (Score: 2) by shortscreen on Friday June 15 2018, @08:29AM

        by shortscreen (2252) on Friday June 15 2018, @08:29AM (#693385) Journal

        BIOS and DOS (including DPMI) system calls were done using interrupts. I'm not sure if DPMI hosts put their own code/data in the program's address space. Maybe your VMs would be safe from illicit memory reads as long as you only run DOS software...

    • (Score: 1, Insightful) by Anonymous Coward on Friday June 15 2018, @08:42AM

      by Anonymous Coward on Friday June 15 2018, @08:42AM (#693390)

      Well, maybe Intel can think about ways of using all those billions of transistors to actually make their CPUs faster without so many security issues, rather than just giving us more and more cores.

      Maybe half the reason why Intel is faster than AMD is that they cut more of these corners?

  • (Score: 2) by frojack on Friday June 15 2018, @03:05AM (6 children)

    by frojack (1554) on Friday June 15 2018, @03:05AM (#693313) Journal

    Your exploit program would have to know exactly when your victim was going to be using the FPU (which for a great many workloads is rather uncommon), and time your attack to strike just after any such use.

    I suppose if you already had detailed knowledge of the target workload, and the freedom to schedule your attack precisely in time, you might make this work a couple of times out of any 10,000 attempts.

    Not something I'm worried about.

    --
    No, you are mistaken. I've always had this sig.
    • (Score: 4, Interesting) by jmorris on Friday June 15 2018, @03:40AM (5 children)

      by jmorris (4844) on Friday June 15 2018, @03:40AM (#693318)

      You forget that the "fpu" registers are now used for a lot more than floating point math.

      This is going to end with either the banning of all speculative execution, or the banning of preemptive multitasking: multitasking limited to what can be accomplished with multi-core/multi-thread processors, combined with a rebirth of cooperative multitasking, i.e. lots of yield calls. Either way, a big change in the way we compute is coming, because it is now becoming clear that any processor feature that even resembles speculative execution can be exploited to leak info. This was a major wrong turn in design, brought on by the relentless quest for more IPC (instructions per clock). So simplify the CPU core again and just let cores proliferate? The only problem is that after a decade of pervasive multi-core processors, deployed all the way down to phones, much software still fails to take advantage of more than one thread of execution.
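      For what it's worth, the yield-heavy cooperative style looks something like this toy sketch (assuming POSIX threads; build with gcc -pthread): each task does a slice of work and hands the CPU over voluntarily instead of waiting to be preempted.

          /* Cooperative-style scheduling in miniature: workers yield
           * explicitly after each slice rather than relying on the
           * kernel's preemption. Build: gcc -O2 -pthread coop.c */
          #include <stdio.h>
          #include <pthread.h>
          #include <sched.h>

          static void *worker(void *arg) {
              const char *name = arg;
              for (int slice = 0; slice < 5; slice++) {
                  printf("%s: slice %d\n", name, slice); /* unit of work */
                  sched_yield();                         /* hand off CPU */
              }
              return NULL;
          }

          int main(void) {
              pthread_t a, b;
              pthread_create(&a, NULL, worker, "task-A");
              pthread_create(&b, NULL, worker, "task-B");
              pthread_join(a, NULL);
              pthread_join(b, NULL);
              return 0;
          }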

      • (Score: 2) by takyon on Friday June 15 2018, @04:14AM (3 children)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday June 15 2018, @04:14AM (#693327) Journal

        Only problem is that after a decade of pervasive multi-core processors deployed all the way down to phones, much software still fails to take advantage of more than one thread of execution.

        Fails to take advantage, or simply doesn't need to?

        It looks like the premier HEDT chips from Intel and AMD are going to 22 and 32 cores respectively (see next story). 6 to 8 cores will occupy the mainstream spot that 4 cores once held.

        Most people are running a lot of software simultaneously (some of it in the background without their knowledge), so there is an obvious use for more cores/threads. And there have always been embarrassingly parallel tasks such as graphics, transcoding, and nowadays machine learning (a focus of smartphones over the last year).

        Software that can use as many threads as possible will benefit greatly. Software that doesn't need more than a single thread can stay that way. Software that can benefit from multithreading will probably get optimized only if it truly needs those resources to be responsive.
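        To illustrate the easy case, here is an embarrassingly parallel job in miniature (a hypothetical sketch, assuming POSIX threads; gcc -O2 -pthread): each thread sums its own chunk of an array, and the only coordination is the final combine.

            /* Embarrassingly parallel sum: disjoint chunks, no locks,
             * one join at the end. Scales with core count.
             * Build: gcc -O2 -pthread psum.c */
            #include <stdio.h>
            #include <pthread.h>

            #define NTHREADS 4
            #define N (1 << 22)

            static double data[N];

            struct chunk { int begin, end; double sum; };

            static void *sum_chunk(void *arg) {
                struct chunk *c = arg;
                double s = 0.0;
                for (int i = c->begin; i < c->end; i++)
                    s += data[i];
                c->sum = s;
                return NULL;
            }

            int main(void) {
                for (int i = 0; i < N; i++)
                    data[i] = 1.0;

                pthread_t tid[NTHREADS];
                struct chunk ch[NTHREADS];
                int per = N / NTHREADS;

                for (int t = 0; t < NTHREADS; t++) {
                    ch[t].begin = t * per;
                    ch[t].end   = (t == NTHREADS - 1) ? N : (t + 1) * per;
                    pthread_create(&tid[t], NULL, sum_chunk, &ch[t]);
                }

                double total = 0.0;
                for (int t = 0; t < NTHREADS; t++) {
                    pthread_join(tid[t], NULL);
                    total += ch[t].sum;
                }
                printf("sum = %.0f (expect %d)\n", total, N);
                return 0;
            }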

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by jmorris on Friday June 15 2018, @04:28AM (1 child)

          by jmorris (4844) on Friday June 15 2018, @04:28AM (#693328)

          Let's get right to the blunt truth. If multi-threaded applications were easy and popular, CPU makers wouldn't depend on the "boost" speed so much, since that is the speed a single core can run at when the others are idled, i.e. for single-threaded use. Most of those "embarrassingly parallel" tasks you cite either already run off the main CPU or soon will. So yes, they are pushing out ever more cores, because what else can they do? The MHz war has been over for a very long time: we crossed the 1GHz barrier at the turn of the century and aren't likely to ever see 10GHz; outside of water-cooled oddities, 5GHz eludes us. Since some software CAN utilize the extra cores, they keep cranking out new product and the gravy train keeps rolling. For now.

          • (Score: 3, Informative) by takyon on Friday June 15 2018, @05:48AM

            by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Friday June 15 2018, @05:48AM (#693349) Journal

            CPU makers wouldn't depend on the "boost" speed so much since that is the speed a single core can run at if the others are idled, i.e. for single threaded use.

            Exclusively single-threaded tasks will always exist, and a lot of home computers are idling (effectively using zero cores). Just as decreasing power consumption while idling makes sense for users, allowing boost/turbo clocks when cores are inactive also makes sense. The CPU maker isn't "depending" on it, and you don't lose anything by having it available.
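            You can watch exactly that on an idle machine (a quick sketch of mine, assuming Linux exposes the cpufreq sysfs files): one core sits at its boost clock while the rest idle near their minimum.

                /* Print each core's current clock via cpufreq sysfs.
                 * Values are reported in kHz; a missing file means we
                 * ran past the last core. */
                #include <stdio.h>

                int main(void) {
                    for (int cpu = 0; ; cpu++) {
                        char path[128];
                        snprintf(path, sizeof path,
                                 "/sys/devices/system/cpu/cpu%d"
                                 "/cpufreq/scaling_cur_freq", cpu);
                        FILE *f = fopen(path, "r");
                        if (!f)
                            break;   /* past the last core */
                        long khz = 0;
                        if (fscanf(f, "%ld", &khz) == 1)
                            printf("cpu%d: %.2f GHz\n", cpu, khz / 1e6);
                        fclose(f);
                    }
                    return 0;
                }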

            If you are running BOINC or something, then you may never see your CPU idling or turbo-ing. And that's OK.

            Obviously, many people don't need 32 cores, or even 4 cores. But it would be nice if they could make use of those cores when they have them. Maybe it is a chicken-and-egg problem that will slowly be solved now that 6+ cores are becoming more widely available.

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by shortscreen on Friday June 15 2018, @08:21AM

          by shortscreen (2252) on Friday June 15 2018, @08:21AM (#693382) Journal

          There is multi-threaded software, and there is single-threaded software that runs fine as is.

          Then there is all of the single-threaded code that performs badly. In theory, I bet most of it could be made faster with more cores. Unfortunately that would require coders with motivation and ability. If they had that... we might be back to not needing to use more cores.

          I guess what I'm saying is, developers mostly suck, and upgrading the hardware to make up for their shortcomings has always resulted in them stepping up their game and producing even worse code.

          (I've been playing a game recently. It's choppy, sluggish, and crashes regularly. Only uses one core. It was written in Python.)

      • (Score: 0) by Anonymous Coward on Friday June 15 2018, @08:51AM

        by Anonymous Coward on Friday June 15 2018, @08:51AM (#693393)

        it is now becoming clear that any processor feature that even resembles speculative execution can be exploited to leak info.

        But is that really the case? Can't you speculatively execute while respecting security restrictions? AMD's CPUs were not as vulnerable as Intel's.

        There would be corner cases where the security restrictions change during the speculative execution, but I think in most cases we can assume that an app can't do stuff like access kernel memory directly.

        They've got billions of transistors to use. They were running out of ideas on how to use them, so they started giving us more cores instead. So maybe they can start coming up with ways to use some of those transistors for this.
