A novel approach takes information leaked by recent Intel processors and uses it to bypass Address Space Layout Randomization (ASLR). A story at Ars Technica reports on research that allows attackers to bypass ASLR:
Researchers have devised a technique that bypasses a key security protection built into just about every operating system. If left unfixed, this could make malware attacks much more potent.
[...] Abu-Ghazaleh and two colleagues from the State University of New York at Binghamton demonstrated the technique on a computer running a recent version of Linux on top of a Haswell processor from Intel. By exploiting a flaw in the part of the CPU known as the branch predictor, a small application developed by the researchers was able to identify the memory locations where specific chunks of code spawned by other software would be loaded. In computer security parlance, the branch predictor contains a "side channel" that discloses the memory locations.
[...] A table in the predictor called the "branch target buffer" stores certain locations known as branch addresses. Modern CPUs rely on the branch predictor to speed up operations by anticipating the addresses where soon-to-be-executed instructions are located. They speculate whether a branch is taken or not and, if taken, what address it goes to. The buffers store addresses from previous branches to facilitate the prediction. The new technique exploits collisions in the branch target buffer table to figure out the addresses where specific code chunks are located.
[...] On Tuesday, the researchers presented the bypass at the IEEE/ACM International Symposium on Microarchitecture in Taipei, Taiwan. Their accompanying paper, titled "Jump Over ASLR: Attacking the Branch Predictor to Bypass ASLR [PDF]," proposes several hardware and software approaches for mitigating attacks.
It seems to me that any technique that conditionally provides improved execution speed can potentially become subject to a side-channel attack. If so, is the ultimate solution one where each instruction is restricted to running no faster than its worst case? Or one where every instruction takes a fixed number of clock ticks? What about higher-level software routines that take different amounts of time depending on their inputs? Is there a general solution to this class of side-channel leakage, or are we stuck in a perpetual game of cat-and-mouse?
Also at: https://www.helpnetsecurity.com/2016/10/19/bypass-aslr-flaw-intel-chip/
(Score: 0) by Anonymous Coward on Friday October 21 2016, @05:46AM
But known_return_address - current_return_address (the value at ESP+0 at the start of a call) already works to beat ASLR for injected code, and once you know the relocation of the executable you can get the addresses of all its imports, and from there the addresses of all relocated system libraries.
(Score: 3, Insightful) by FatPhil on Friday October 21 2016, @06:33AM
The concept of actually not writing shitty code somehow seems less important. People need to understand that they need tests that not only verify that code works, but also that it doesn't fail. How long have we had Valgrind? Coverity will even perform some pretty decent sanity checking too. There is no excuse for most of the exploits seen in code nowadays.
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 1, Insightful) by Anonymous Coward on Friday October 21 2016, @09:12AM
But finding bugs costs money! Better let the customers find them for you, while you use your resources to develop the next unnecessary^Wgreat feature!
(Score: 2) by gidds on Saturday October 22 2016, @05:25PM
Isn't that a straw-man argument? If ASLR were the sole security measure; if people expected it alone to protect against security issues; if it were supposed to be 100% secure — then of course that would be dumb.
But I don't think anyone has ever claimed that. It's just one component of many, part of a layered security ('security in depth') model. After all, it's possible that some attack might theoretically be developed against almost any security measure, but if you add many different layers of security, each protecting against different sorts of threats, then you end up vastly more secure than any single layer can be.
ASLR seemed, and still seems, a very worthwhile measure: if everything's compiled/linked/etc. the right way then AIUI it's entirely transparent, with no noticeable impact upon the user, performance, or anything else, and yet it makes a large class of attack much much more difficult.
This story is interesting, because it describes one possible way around it. But it doesn't seem remotely practical yet, so ASLR is still going to be a good idea for a long time to come.
To answer the poster's questions: I don't think you need to remove all the benefits of branch prediction or whatever to defeat this, just add a little randomisation from time to time. And the linked article describes several other approaches. No need to exaggerate rumours of ASLR's death :-)
[sig redacted]
(Score: 0) by Anonymous Coward on Monday October 24 2016, @12:37AM
I think computers have been around long enough that if everyone was going to stop writing shitty code, it would have happened by now. You need to accept we need things like ASLR to mitigate it.
You can also argue that we shouldn't add safety features to cars because everyone should take more care driving. But just as virtually nobody will drive perfectly all the time, virtually nobody will code perfectly all the time, so we really do need those safety features in our cars and in our OSes.
(Score: 3, Interesting) by FatPhil on Friday October 21 2016, @07:16AM
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 2) by TheRaven on Friday October 21 2016, @11:27AM
sudo mod me up
(Score: 2) by RamiK on Friday October 21 2016, @11:47AM
it's an attribute of any branch predictor
Won't happen on a Mill Machine: https://millcomputing.com/topic/aslr-security/ [millcomputing.com]
compiling...
(Score: 0) by Anonymous Coward on Friday October 21 2016, @11:42AM
I'm gonna predict where the next unchecked and bad code is gonna overrun!
(Score: 2) by tibman on Friday October 21 2016, @02:49PM
How about making instructions loaded into memory unreadable and unwritable by userland? As in, the supervisor bit has to be lit and fed into a hardware AND operation along with chip-enable/chip-select just for the RAM to become accessible. I recently started dabbling with low-level stuff and it seems like there are some solutions there. Overwriting data memory would still be possible, but not program memory. Though I guess that becomes meaningless when you're running an interpreter from program memory that is executing bytecode from data memory. So C#, PHP, Java, and most "modern" languages would still be smashable. But C/C++ and anything that compiles to native instructions should be safe.
It just seems that we are trying to fix a hardware problem with software.
SN won't survive on lurkers alone. Write comments.
(Score: 0) by Anonymous Coward on Friday October 21 2016, @11:49PM
I have had true virtual address space since before the 8086 came out. The first machine where I truly knew about this was 8-bit with a 16-bit address space, but if you set a system flag in the directory entry, your program was allowed to call a special system function to switch to 23-bit real address space; the 24th bit was real=0 and virtual=1. Registers 1 and 2 had a privilege mode to save, load, and point with 24 bits instead of 16. Even with privilege I could find my own real address, let alone another job's. 40 years later, we still have baby-machine logic in what is, compared to the '70s, a supercomputer.