
posted by janrinok on Friday October 21 2016, @04:58AM
from the that-wasn't-the-plan dept.

A novel approach takes information leaked by recent Intel processors and uses it to bypass Address Space Layout Randomization (ASLR). A story at Ars Technica reports on the research:

Researchers have devised a technique that bypasses a key security protection built into just about every operating system. If left unfixed, this could make malware attacks much more potent.

[...] Abu-Ghazaleh and two colleagues from the State University of New York at Binghamton demonstrated the technique on a computer running a recent version of Linux on top of a Haswell processor from Intel. By exploiting a flaw in the part of the CPU known as the branch predictor, a small application developed by the researchers was able to identify the memory locations where specific chunks of code spawned by other software would be loaded. In computer security parlance, the branch predictor contains a "side channel" that discloses the memory locations.

[...] A table in the predictor called the "branch target buffer" stores certain locations known as branch addresses. Modern CPUs rely on the branch predictor to speed up operations by anticipating the addresses where soon-to-be-executed instructions are located. They speculate whether a branch is taken or not and, if taken, what address it goes to. The buffers store addresses from previous branches to facilitate the prediction. The new technique exploits collisions in the branch target buffer table to figure out the addresses where specific code chunks are located.

[...] On Tuesday, the researchers presented the bypass at the IEEE/ACM International Symposium on Microarchitecture in Taipei, Taiwan. Their accompanying paper, titled "Jump Over ASLR: Attacking the Branch Predictor to Bypass ASLR [PDF]," proposes several hardware and software approaches for mitigating attacks.

It seems to me that any technique that conditionally provides improved execution speed can potentially become subject to a side-channel attack. If so, is the ultimate solution one where each instruction is restricted to running no faster than its worst case? Or one where every instruction takes a fixed number of clock ticks? What about higher-level software routines whose running time depends on their inputs? Is there a general solution to this class of side-channel leakage, or are we stuck in a perpetual game of cat-and-mouse?
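
As a concrete picture of what "fixed time" means at the software level, here is the classic constant-time comparison, a minimal C sketch (the function name and buffers are illustrative, not from the paper): the loop always touches every byte, so its running time does not depend on where the first mismatch occurs.

    #include <stddef.h>

    /* Constant-time comparison: always scans all n bytes, so timing
       does not reveal the position of the first differing byte.
       Returns 0 iff the buffers are equal. */
    static int ct_memcmp(const unsigned char *a, const unsigned char *b, size_t n)
    {
        unsigned char diff = 0;
        for (size_t i = 0; i < n; i++)
            diff |= a[i] ^ b[i];
        return diff != 0;
    }

An ordinary memcmp() is free to return at the first differing byte, and that early exit is exactly the input-dependent timing the questions above worry about.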

Also at: https://www.helpnetsecurity.com/2016/10/19/bypass-aslr-flaw-intel-chip/


Original Submission #1 · Original Submission #2

  • (Score: 0) by Anonymous Coward on Friday October 21 2016, @05:46AM (#417123)

    But known_return_address - current_return_address (the value at [ESP] at the start of a call) already works to beat ASLR for injected code, and once you know the relocation of the executable you can get the addresses of all its imports, and from there the addresses of all the relocated system files.
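
    (A sketch of that arithmetic, with hypothetical numbers: KNOWN_STATIC_OFFSET and the leaked value below are made up for illustration, but the point stands that one leaked code address plus the binary's known static layout recovers the whole module.)

        #include <inttypes.h>
        #include <stdint.h>
        #include <stdio.h>

        /* Hypothetical values: the return site's offset inside the binary,
           as read from a local copy, and a return address leaked at run time. */
        #define KNOWN_STATIC_OFFSET ((uintptr_t)0x1234)

        int main(void)
        {
            uintptr_t leaked_return = (uintptr_t)0x55555555d234; /* assumed leak */
            uintptr_t slide = leaked_return - KNOWN_STATIC_OFFSET;
            /* Every function and import now sits at its static offset + slide. */
            printf("module load slide: %#" PRIxPTR "\n", slide);
            return 0;
        }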

  • (Score: 3, Insightful) by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Friday October 21 2016, @06:33AM (#417131) Homepage

    It's saying "you're gonna cause buffer overruns in the shitty code that's running here, so I'm going to make that attack less likely to do what you want it to do (and more likely to do something random, yay)".

    The concept of actually not writing shitty code somehow seems less important. People need to understand that they need tests that not only verify that code works, but also that it doesn't fail. How long have we had Valgrind? Coverity will even perform some pretty decent sanity checking too. There is no excuse for most of the exploits seen in code nowadays.
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
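
    (A minimal sketch of the failure-test FatPhil means, assuming a stock Valgrind install: the program below passes a naive "does it run" test, yet valgrind ./a.out reports the one-byte write past the end of the block.)

        #include <stdlib.h>

        /* Off-by-one heap overrun: the loop writes buf[16], one byte past
           the 16-byte allocation. A plain run usually exits 0; Valgrind's
           memcheck flags it as an invalid write of size 1. */
        int main(void)
        {
            char *buf = malloc(16);
            if (!buf) return 1;
            for (int i = 0; i <= 16; i++)  /* bug: <= should be < */
                buf[i] = 0;
            free(buf);
            return 0;
        }
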
    • (Score: 1, Insightful) by Anonymous Coward on Friday October 21 2016, @09:12AM (#417166)

      But finding bugs costs money! Better let the customers find them for you, while you use your resources to develop the next unnecessary^Wgreat feature!

    • (Score: 2) by gidds (589) on Saturday October 22 2016, @05:25PM (#417611)

      Isn't that a straw-man argument?  If ASLR was the sole security measure; if people expected it alone to protect against security issues; if it was supposed to be 100% secure — then of course that would be dumb.

      But I don't think anyone has ever claimed that.  It's just one component of many, part of a layered security ('security in depth') model.  After all, it's possible that some attack might theoretically be developed against almost any security measure, but if you add many different layers of security, each protecting against different sorts of threats, then you end up vastly more secure than any single layer can be.

      ASLR seemed, and still seems, a very worthwhile measure: if everything's compiled/linked/etc. the right way then AIUI it's entirely transparent, with no noticeable impact upon the user, performance, or anything else, and yet it makes a large class of attack much much more difficult.
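
      (For instance, with a typical gcc/clang toolchain, the position-independent build is one flag away; this sketch, built with something like "cc -fPIE -pie demo.c", prints a different address for main on each run when ASLR is active. The build command is an assumption about the reader's compiler, not something from the article.)

          #include <stdio.h>

          /* Run this twice: with a PIE binary and ASLR enabled, the kernel
             loads the executable at a different base each time, so the
             printed address changes between runs. */
          int main(void)
          {
              printf("main is loaded at %p\n", (void *)main);
              return 0;
          }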

      This story is interesting, because it describes one possible way around it.  But it doesn't seem remotely practical yet, so ASLR is still going to be a good idea for a long time to come.

      To answer the poster's questions: I don't think you need to remove all the benefits of branch prediction or whatever to defeat this, just add a little randomisation from time to time.  And the linked article describes several other approaches.  No need to exaggerate rumours of ASLR's death :-)

      --
      [sig redacted]
    • (Score: 0) by Anonymous Coward on Monday October 24 2016, @12:37AM (#417992)

      I think computers have been around long enough that if everyone was going to stop writing shitty code, it would have happened by now. You need to accept we need things like ASLR to mitigate it.

      You can also argue that we shouldn't add safety features to cars because everyone should just take more care driving. But just as virtually nobody drives perfectly all the time, virtually nobody codes perfectly all the time, so we really do need those safety features in our cars and in our OSes.

  • (Score: 3, Interesting) by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Friday October 21 2016, @07:16AM (#417142) Homepage

    He's crediting Aciicmez in 2007 for what was done by Bernstein in 2005. The apparent complete lack of any mention of DJB seems bizarre, given how important his work was in the field. Perhaps they've met him and didn't get on! (Aciicmez, however, does correctly reference the prior side-channel attacks by DJB.)
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by TheRaven (270) on Friday October 21 2016, @11:27AM (#417191) Journal

      This is why I gave up reading Ars when Hannibal left. He's also mischaracterising a side channel as a 'flaw'. This is not something specific to Haswell (that's just where the PoC was done); it's an attribute of any branch predictor: you can induce aliasing, and you can use it to probe. It's no different from cache-related side channels: anything done to increase performance in a way that's invisible to the program is likely to expose, via timing information, some of the detail it's hiding.
      --
      sudo mod me up
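
      (The probing primitive TheRaven describes can be sketched for x86 with the timestamp counter; this is only the timing building block, nowhere near the paper's full attack, and the volatile function pointer just keeps the compiler from turning the indirect call into a direct one.)

          #include <stdint.h>
          #include <stdio.h>
          #include <x86intrin.h>  /* __rdtscp */

          static void target(void) { }

          /* Time a single indirect call. A mispredicted branch (e.g. after an
             induced branch-target-buffer collision) costs measurably more
             cycles than a predicted one; one sample is noisy, so a real probe
             repeats and averages. */
          int main(void)
          {
              void (*volatile fp)(void) = target;
              unsigned aux;
              uint64_t t0 = __rdtscp(&aux);
              fp();                         /* the indirect branch being timed */
              uint64_t t1 = __rdtscp(&aux);
              printf("indirect call: ~%llu cycles\n",
                     (unsigned long long)(t1 - t0));
              return 0;
          }
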
  • (Score: 0) by Anonymous Coward on Friday October 21 2016, @11:42AM (#417193)

    I'm gonna predict where the next unchecked and bad code is gonna overrun!

  • (Score: 2) by tibman (134) Subscriber Badge on Friday October 21 2016, @02:49PM (#417268)

    How about making instructions loaded into memory unreadable and unwritable by userland? As in, the supervisor bit has to be set and fed into a hardware AND along with chip-enable/chip-select just for the RAM to become accessible. I recently started dabbling with low-level stuff, and it seems like there are some solutions there. Overwriting data memory would still be possible, but not program memory. Though I guess that becomes meaningless when you're running an interpreter from program memory that executes bytecode from data memory, so C#, PHP, Java, and most "modern" languages would still be smashable. But C/C++ and anything else compiled to native instructions should be safe.

    It just seems that we are trying to fix a hardware problem with software.

    --
    SN won't survive on lurkers alone. Write comments.
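
    (Software already approximates the split tibman wants via page permissions; a minimal POSIX sketch for x86-64 Linux, with the stub byte and page size assumed: the page is writable while code is copied in, then flipped to read+execute, so it is never writable and executable at once.)

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>

        static const unsigned char stub[] = { 0xc3 };  /* x86-64 'ret' */

        int main(void)
        {
            size_t len = 4096;  /* assumed page size */
            unsigned char *page = mmap(NULL, len, PROT_READ | PROT_WRITE,
                                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (page == MAP_FAILED) return 1;
            memcpy(page, stub, sizeof stub);           /* write phase: data only */
            if (mprotect(page, len, PROT_READ | PROT_EXEC))
                return 1;                              /* execute phase: no writes */
            ((void (*)(void))page)();                  /* run the one-byte stub */
            puts("executed from a write-xor-execute page");
            return munmap(page, len);
        }
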
  • (Score: 0) by Anonymous Coward on Friday October 21 2016, @11:49PM (#417467)

    I have had a true virtual address space since before the 8086 came out. The first machine where I truly knew about this was 8-bit with a 16-bit address space, but if you set a system flag in the directory entry, your program was allowed to call a special system function to switch to a 24-bit address space, where the 24th bit meant real (0) or virtual (1). Registers 1 and 2 had a privileged mode to save, load, and point with 24 bits instead of 16. Even with privilege I could find my own real address, let alone another job's. Forty years later, we still have baby-machine logic in what is, compared to the '70s, a supercomputer.