
posted by chromas on Wednesday March 06 2019, @06:40AM
from the Intel-illness dept.

Submitted via IRC for Bytram & AzumaHazuki

SPOILER alert, literally: Intel CPUs afflicted with simple data-spewing spec-exec vulnerability

Speculative execution, the practice of allowing processors to perform future work that may or may not be needed while they await the completion of other computations, is what enabled the Spectre vulnerabilities revealed early last year.
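
To make "work that may or may not be needed" concrete, the canonical Spectre v1 gadget is a useful illustration (a minimal sketch; the names array1, array2, and victim are illustrative, not from the SPOILER paper):

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 4096];

    void victim(size_t x, size_t array1_size)
    {
        /* While array1_size is still on its way from memory, the CPU may
         * speculate that the bounds check passes and execute both loads
         * below. The work is rolled back on a mispredict, but the line of
         * array2 that was touched stays in the cache: a measurable side
         * effect that leaks the value of `secret`. */
        if (x < array1_size) {
            uint8_t secret = array1[x];                 /* possibly out of bounds */
            volatile uint8_t y = array2[secret * 4096]; /* secret-indexed load */
            (void)y;
        }
    }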

In a research paper distributed this month through pre-print service ArXiv, "SPOILER: Speculative Load Hazards Boost Rowhammer and Cache Attacks," computer scientists at Worcester Polytechnic Institute in the US, and the University of Lübeck in Germany, describe a new way to abuse the performance boost.

The researchers [...] have found that "a weakness in the address speculation of Intel's proprietary implementation of the memory subsystem" reveals memory layout data, making other attacks like Rowhammer much easier to carry out.

The researchers also examined Arm and AMD processor cores, but found they did not exhibit similar behavior.

"We have discovered a novel microarchitectural leakage which reveals critical information about physical page mappings to user space processes," the researchers explain.

"The leakage can be exploited by a limited set of instructions, which is visible in all Intel generations starting from the 1st generation of Intel Core processors, independent of the OS and also works from within virtual machines and sandboxed environments."

The issue is separate from the Spectre vulnerabilities, and is not addressed by existing mitigations. It can be exploited from user space without elevated privileges.

[...] "The root cause of the issue is that the memory operations execute speculatively and the processor resolves the dependency when the full physical address bits are available," said Moghimi. "Physical address bits are security sensitive information and if they are available to user space, it elevates the user to perform other micro architectural attacks."

[...] SPOILER, the researchers say, will make existing Rowhammer and cache attacks easier, and make JavaScript-enabled attacks more feasible – instead of taking weeks, Rowhammer could take just seconds. Moghimi said the paper describes a JavaScript-based cache prime+probe technique that can be triggered with a click to leak private data and cryptographic keys not protected from cache timing attacks.
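
The prime+probe technique itself is the classic two-step. The paper's version runs in JavaScript; here is a minimal native sketch of the idea, assuming `evset` already holds an eviction set for one cache set (finding such sets is exactly what SPOILER's physical-address leak makes cheap):

    #include <stdint.h>
    #include <x86intrin.h>   /* __rdtscp */

    #define EVSET_LEN 16     /* addresses that all map to one cache set */

    /* Prime: fill the target cache set with the attacker's own lines. */
    static void prime(uint8_t *const evset[EVSET_LEN])
    {
        for (int i = 0; i < EVSET_LEN; i++)
            *(volatile uint8_t *)evset[i];
    }

    /* Probe: re-walk the set and time it. If the victim touched the set
     * in between, some attacker lines were evicted and the walk is
     * measurably slower. */
    static uint64_t probe(uint8_t *const evset[EVSET_LEN])
    {
        unsigned aux;
        uint64_t t0 = __rdtscp(&aux);
        for (int i = 0; i < EVSET_LEN; i++)
            *(volatile uint8_t *)evset[i];
        return __rdtscp(&aux) - t0;
    }

An attacker alternates prime() and probe() around suspected victim activity; the latency difference is the signal.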

Mitigations may prove hard to come by. "There is no software mitigation that can completely erase this problem," the researchers say. Chip architecture fixes may work, they add, but at the cost of performance.


Original Submission #1 | Original Submission #2

 
  • (Score: 0) by Anonymous Coward on Wednesday March 06 2019, @07:25AM (#810621) (9 children)

    And it will not hurt performance but increase it: don't run JavaScript.

  • (Score: 2) by bradley13 (3053) on Wednesday March 06 2019, @07:33AM (#810622) (7 children)

    "Don't run JavaScript."

    That's not really the point. Sure, maybe JavaScript can be used as an attack vector, but so can a zillion other things. Speculative execution was always a dumb idea. Sure, it gains a couple of percent of performance on certain tasks, but in return for a massive increase in complexity. Avoiding that complexity would have reduced die sizes and costs, likely increased reliability, and it would have avoided these kinds of security issues.

    Intel will be playing games to avoid the consequences, but they have doomed their processors. No one with serious security concerns (read: most governments) will stay with Intel for the long term.

    --
    Everyone is somebody else's weirdo.
    • (Score: 1, Insightful) by Anonymous Coward on Wednesday March 06 2019, @08:54AM (#810639)

      Actually, disabling JavaScript is a pretty good solution. This attack is a threat if the attacker can run arbitrary code on your CPU. That happens on rented virtual machines (heh, the cloud), in the web browser (via JavaScript), or if you are foolish enough to run apps on your phone. In most other cases you already trust the supplier of the code...

    • (Score: 1) by shrewdsheep (5215) on Wednesday March 06 2019, @10:48AM (#810656) (4 children)

      I have always wondered how much is gained by speculative execution, but I have never seen any studies about it except the manufacturers' own numbers (they must exist; maybe someone can point them out). IIRC, the Power architecture used to have (still has?) a single bit in the instruction that indicates the more likely branch direction, as determined by the compiler. This leads to speculative execution of a single branch only, with a possible roll-back. Most loops are covered very efficiently by this mechanism; only if-statements are probably difficult for the compiler to handle without profiling. Speculative execution always seemed to me to be a hardware hack to make software faster, when the same benefits could have been achieved through better compilers/language designs.
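
      (As an aside, that static-hint idea survives in today's compilers: GCC and Clang expose __builtin_expect, which lets the programmer or a profiler mark the likely direction. On modern x86 it mostly steers code layout rather than setting a hint bit. A minimal sketch:)

          /* Static branch hints in the spirit of that PowerPC bit. */
          #define likely(x)   __builtin_expect(!!(x), 1)
          #define unlikely(x) __builtin_expect(!!(x), 0)

          int sum_positive(const int *a, int n)
          {
              int s = 0;
              for (int i = 0; i < n; i++)
                  if (likely(a[i] > 0))  /* expected taken: kept on the fall-through path */
                      s += a[i];
              return s;
          }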

      • (Score: 2) by c0lo (156) on Wednesday March 06 2019, @12:14PM (#810671) (3 children)

        Speculative execution always seemed to me to be a hardware hack to make software faster when the same benefits could have been achieved through better compilers/language designs.

        Ummmmhhh... no matter how good the compiler/language design is, it can take a few hundred CPU cycles to get the data you need from RAM into the CPU caches.
        So you either do nothing until the data arrives, or you take a 50% chance, go ahead on some branch, and hope the decision you took is the correct one.

        --
        https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
        • (Score: 2, Interesting) by shrewdsheep (5215) on Wednesday March 06 2019, @12:59PM (#810685) (2 children)

          In practice, the chance would be much lower in most cases. For a loop, the roll-back would be required in 1/N cases, where N is the iteration count. If you add profiler information, the roll-back chance can also be reduced for if-statements. The branch bit can even be patched into the binary, so online optimization is possible too. I won't deny there are cases where the mis-prediction rate is closer to 50%, but intuitively that seems unlikely to affect well designed/profiled algorithms. Branch prediction just seems to be an inferior version of proper profiling. Then again, I would gladly be convinced otherwise.

          • (Score: 3, Insightful) by bzipitidoo (4388) on Wednesday March 06 2019, @02:51PM (#810706)

            The speculation occurs on both branches; the CPU has the parallelism to try both ways. Eventually, whichever branch turns out to be correct is kept, and the computations for the wrong one are thrown away. Little or no branch prediction is needed in that setup.

            Trying both sides of a branch certainly uses more energy, but speed is king: more important than a little energy or slightly stronger security. The fact is, unlike the infamous FDIV bug, not everyone is howling at chip designers to fix Spectre. They evidently feel that security from Spectre is less important than performance. For one thing, it's hard to see any way that Spectre could cause ordinary code to produce incorrect results. Stumbling into an innocent problem whose root cause is Spectre is extremely unlikely, much rarer than FDIV. Code has to be carefully designed to exploit these flaws, and that design involves actions that would make no sense for an honest program: stuff like repeating the same actions and calculations over and over.
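
            (Whether or not the hardware eagerly runs both paths, the software analogue is branchless code: compute both sides and select one, so there is nothing to predict or roll back. A small sketch; compilers often turn this pattern into a conditional move:)

                /* Both sides computed, one selected; no branch at all. */
                int max_branchless(int a, int b)
                {
                    int take_b = -(a < b);             /* all ones if a < b, else 0 */
                    return (b & take_b) | (a & ~take_b);
                }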

          • (Score: 3, Interesting) by c0lo (156) on Wednesday March 06 2019, @10:51PM (#810904)

            Well, the exact value of the chance has little impact on the reason speculative execution exists in the first place: the wait time to get data from RAM into the CPU. And that is a problem no language/compiler can address on its own.

            (The above being in response to "the same benefits could have been achieved through better compilers/language designs.")

            --
            https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 3, Interesting) by jmorris (4844) on Wednesday March 06 2019, @07:20PM (#810828)

      Speculative execution is what allowed Instructions Per Cycle to climb above one; on the best current tech, IPC is now around four. And, as another commenter has already noted, speculative execution allows the CPU clock to greatly exceed the speed of the memory subsystem; without it, the CPU would simply stall very quickly. So no, eliminating it is not a matter of a few percent loss in performance. If they simply issued a microcode update tomorrow that disabled it entirely, that 4 GHz machine would suddenly perform comparably to something running well under 1 GHz. The entire modern computer design would be called into question.

      And yet nothing less is going to solve this problem. We are approaching a breaking point; something is going to have to yield.

  • (Score: 1, Interesting) by Anonymous Coward on Wednesday March 06 2019, @08:52AM (#810637)

    There is also one more protection: don't write crappy software, so that it can run on slower hardware.
    Is it hard to abandon resource-hungry frameworks?
    Or to optimize an algorithm after it's developed?
    Or to use debug-only defines, so that a release is actually a release and not a console "cat-on-a-keyboard" text generator?
    Or maybe to test software on an average, or even slightly below-average, machine rather than on the two-week-old computer a rich boss bought the devs?
    Finally, not being afraid of assembly optimization. It boosts performance in low-level C code beautifully (e.g. in an OS), and it pays off. The problem is that we are not teaching programmers, but library users. People use programming languages without knowing how the computer works. In an "Applied Computer Science" course at quite a good European university, there is only one semester-long assembler course... and it's x86 assembler :(. The "Elementary CS" course starts with C++. When students take a "Control systems" course, they get intentionally weak devices, with 1 KB of RAM for example. It's fun watching them try to cram whole C++ objects in there, because using registers, a state-machine approach, or global storage... was forbidden in the courses where they were using high-end computers.
    Finally, not being afraid of assembly optimization. This boosts performance in low-level C code beautifully (e.g. in OS) and it repays. The problem is that we are not teaching programmers, but libraries users. People use programming languages without knowledge how the computer works. In "Applied Computer Science" course in quite good European university, there is only one semester-long course of assembler... and it's x86 assembler :(. The "Elementary CS" course starts with C++. When students have "Control systems" course, they get intentionally weak devices, with 1K of ram for example. It's fun watching them trying to cram a whole C++ objects there because using registers, state machine approach or global storage... has been forbidden in courses when they were using a high-end computers.