
posted by janrinok on Friday October 21 2016, @04:58AM   Printer-friendly
from the that-wasn't-the-plan dept.

Researchers have devised a novel way to take information leaked by recent Intel processors and use it to bypass Address Space Layout Randomization (ASLR). A story at Ars Technica reports on the research that allows attackers to bypass ASLR:

Researchers have devised a technique that bypasses a key security protection built into just about every operating system. If left unfixed, this could make malware attacks much more potent.

[...] Abu-Ghazaleh and two colleagues from the State University of New York at Binghamton demonstrated the technique on a computer running a recent version of Linux on top of a Haswell processor from Intel. By exploiting a flaw in the part of the CPU known as the branch predictor, a small application developed by the researchers was able to identify the memory locations where specific chunks of code spawned by other software would be loaded. In computer security parlance, the branch predictor contains a "side channel" that discloses the memory locations.

[...] A table in the predictor called the "branch target buffer" stores certain locations known as branch addresses. Modern CPUs rely on the branch predictor to speed up operations by anticipating the addresses where soon-to-be-executed instructions are located. They speculate whether a branch is taken or not and, if taken, what address it goes to. The buffers store addresses from previous branches to facilitate the prediction. The new technique exploits collisions in the branch target buffer table to figure out the addresses where specific code chunks are located.
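To make the collision idea concrete, here is a toy Python model of a direct-mapped branch target buffer. The 12-bit index width and the direct-mapped layout are simplifying assumptions for illustration only, not Haswell's actual predictor organization; on real hardware the attacker detects a collision indirectly, through a mispredict or timing difference rather than a function call.

```python
# Toy model of a branch target buffer (BTB) to illustrate the
# collision-based address recovery described above. Assumptions:
# a direct-mapped table indexed purely by low-order address bits.

BTB_INDEX_BITS = 12  # hypothetical index width

def btb_index(branch_addr: int) -> int:
    """A BTB is typically indexed by low-order branch-address bits."""
    return branch_addr & ((1 << BTB_INDEX_BITS) - 1)

def collides(attacker_addr: int, victim_addr: int) -> bool:
    """Branches at different virtual addresses share a BTB entry when
    their index bits match; the attacker observes the resulting
    misprediction as a timing difference."""
    return btb_index(attacker_addr) == btb_index(victim_addr)

def recover_low_bits(victim_addr: int) -> int:
    """Sweep attacker branch placements until a collision is detected.
    The colliding index reveals the victim branch's low address bits,
    shrinking the space ASLR forces an attacker to guess over."""
    for candidate in range(1 << BTB_INDEX_BITS):
        if collides(candidate, victim_addr):
            return candidate
    raise RuntimeError("no collision found")
```

Because ASLR randomizes only a limited number of address bits, leaking the index bits this way can be enough to pinpoint where a code chunk was loaded.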

[...] On Tuesday, the researchers presented the bypass at the IEEE/ACM International Symposium on Microarchitecture in Taipei, Taiwan. Their accompanying paper, titled "Jump Over ASLR: Attacking the Branch Predictor to Bypass ASLR [PDF]," proposes several hardware and software approaches for mitigating attacks.

It seems to me that any technique that conditionally provides improved execution speed can potentially become subject to a side-channel attack. If so, is the ultimate solution one where each instruction is restricted to running no faster than its worst case? Or one where every instruction takes a fixed number of clock ticks? What about higher-level software routines whose running time depends on their inputs? Is there a general solution to this class of side-channel leakage, or are we stuck with a perpetual game of cat-and-mouse?
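The higher-level version of this problem is well known from cryptography: a comparison that returns early leaks, through its running time, how much of a secret a guess matched. A minimal Python sketch of the leaky pattern and the standard constant-time fix (function names are illustrative):

```python
def leaky_compare(a: bytes, b: bytes) -> bool:
    # Returns at the first mismatching byte, so running time depends on
    # how long the matching prefix is: a classic timing side channel.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_compare(a: bytes, b: bytes) -> bool:
    # Accumulate all byte differences and decide once at the end, so the
    # loop does the same work no matter where the inputs differ.
    if len(a) != len(b):
        return False
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y
    return diff == 0
```

Python's standard library exposes the same idea as hmac.compare_digest. Note that the fix here is not "make everything worst-case slow" but "make the timing independent of the secret", which hints that a general solution may target secret-dependent timing specifically rather than all conditional speedups.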

Also at: https://www.helpnetsecurity.com/2016/10/19/bypass-aslr-flaw-intel-chip/


Original Submission #1 | Original Submission #2

 
This discussion has been archived. No new comments can be posted.
  • (Score: 3, Insightful) by FatPhil on Friday October 21 2016, @06:33AM

    by FatPhil (863) <{pc-soylent} {at} {asdf.fi}> on Friday October 21 2016, @06:33AM (#417131) Homepage
    It's saying "you're gonna cause buffer overruns in the shitty code that's running here, so I'm going to make that attack less likely to do what you want it to do (and more likely to do something random, yay)".

The concept of actually not writing shitty code somehow seems less important. People need to understand that they need tests that don't just verify that code works, but also that it doesn't fail. How long have we had Valgrind? Coverity will even perform some pretty decent sanity checking too. There is no excuse for most of the exploits that are seen in code nowadays.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
  • (Score: 1, Insightful) by Anonymous Coward on Friday October 21 2016, @09:12AM

    by Anonymous Coward on Friday October 21 2016, @09:12AM (#417166)

    But finding bugs costs money! Better let the customers find them for you, while you use your resources to develop the next unnecessary^Wgreat feature!

  • (Score: 2) by gidds on Saturday October 22 2016, @05:25PM

    by gidds (589) on Saturday October 22 2016, @05:25PM (#417611)

Isn't that a straw-man argument?  If ASLR were the sole security measure; if people expected it alone to protect against security issues; if it were supposed to be 100% secure — then of course that would be dumb.

    But I don't think anyone has ever claimed that.  It's just one component of many, part of a layered security ('security in depth') model.  After all, it's possible that some attack might theoretically be developed against almost any security measure, but if you add many different layers of security, each protecting against different sorts of threats, then you end up vastly more secure than any single layer can be.

    ASLR seemed, and still seems, a very worthwhile measure: if everything's compiled/linked/etc. the right way then AIUI it's entirely transparent, with no noticeable impact upon the user, performance, or anything else, and yet it makes a large class of attack much much more difficult.

    This story is interesting, because it describes one possible way around it.  But it doesn't seem remotely practical yet, so ASLR is still going to be a good idea for a long time to come.

    To answer the poster's questions: I don't think you need to remove all the benefits of branch prediction or whatever to defeat this, just add a little randomisation from time to time.  And the linked article describes several other approaches.  No need to exaggerate rumours of ASLR's death :-)
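(For illustration, a toy sketch of the randomisation idea, building on a hypothetical direct-mapped BTB model; the salt parameter and index width are assumptions, and the paper's actual hardware proposals differ in detail.)

```python
BTB_INDEX_BITS = 12  # hypothetical index width

def salted_btb_index(branch_addr: int, salt: int) -> int:
    # Mitigation sketch: mix a per-context random salt into the index so
    # attacker and victim branches no longer collide at a predictable
    # entry. The attacker's sweep then learns nothing about victim
    # addresses unless it also knows the victim's salt.
    return (branch_addr ^ salt) & ((1 << BTB_INDEX_BITS) - 1)
```

With salt 0 this degrades to plain low-bit indexing; with a fresh random salt per process, a collision index observed by the attacker no longer corresponds directly to the victim's address bits, without giving up branch prediction itself.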

    --
    [sig redacted]
  • (Score: 0) by Anonymous Coward on Monday October 24 2016, @12:37AM

    by Anonymous Coward on Monday October 24 2016, @12:37AM (#417992)

    I think computers have been around long enough that if everyone was going to stop writing shitty code, it would have happened by now. You need to accept we need things like ASLR to mitigate it.

You can also argue that we shouldn't add safety features to cars because everyone should take more care driving. But just like virtually nobody will drive perfectly all the time, virtually nobody will code perfectly all the time, so we really do need those safety features in our cars and in our OSes.