
SoylentNews is people

posted by mrpg on Wednesday August 15 2018, @02:02AM   Printer-friendly
from the [sigh] dept.

Intel's SGX blown wide open by, you guessed it, a speculative execution attack

Another day, another speculative execution-based attack. Data protected by Intel's SGX—data that's meant to be protected even from a malicious or hacked kernel—can be read by an attacker thanks to leaks enabled by speculative execution.

Since publication of the Spectre and Meltdown attacks in January this year, security researchers have been taking a close look at speculative execution and the implications it has for security. All high-speed processors today perform speculative execution: they assume certain things (a register will contain a particular value, a branch will go a particular way) and perform calculations on the basis of those assumptions. It's an important design feature of these chips that's essential to their performance, and it has been for 20 years.

[...] What's in store today? A new Meltdown-inspired attack on Intel's SGX, given the name Foreshadow by the researchers who found it. Two groups of researchers found the vulnerability independently: a team from KU Leuven in Belgium reported it to Intel in early January—just before Meltdown and Spectre went public—and a second team from the University of Michigan, University of Adelaide, and Technion reported it three weeks later.

SGX, standing for Software Guard eXtensions, is a new feature that Intel introduced with its Skylake processors that enables the creation of Trusted Execution Environments (TEEs). TEEs are secure environments where both the code and the data the code works with are protected to ensure their confidentiality (nothing else on the system can spy on them) and integrity (any tampering with the code or data can be detected). SGX is used to create what are called enclaves: secure blocks of memory containing code and data. The contents of an enclave are transparently encrypted every time they're written to RAM and decrypted on being read. The processor governs access to the enclave memory: any attempt to access the enclave's memory from outside the enclave should be blocked.
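The access rule described above can be modeled with a toy sketch. This is purely illustrative: real SGX enforcement happens in hardware, and every name here (ToyEnclave, the XOR "encryption" standing in for the memory encryption engine, the inside_enclave flag) is invented for the example, not part of any Intel API.

```python
import hashlib

class ToyEnclave:
    """Toy model of an SGX-style enclave: contents are stored encrypted,
    integrity-tagged, and only readable 'from inside'. Not real SGX."""

    def __init__(self, key: bytes):
        self._key = key
        self._ram = {}   # simulated encrypted RAM
        self._mac = {}   # integrity tags, for tamper detection

    def _xor(self, data: bytes) -> bytes:
        # Stand-in for the transparent memory encryption (toy XOR, not real crypto).
        stream = self._key * (len(data) // len(self._key) + 1)
        return bytes(b ^ k for b, k in zip(data, stream))

    def write(self, addr: str, plaintext: bytes, *, inside_enclave: bool):
        if not inside_enclave:
            raise PermissionError("access to enclave memory blocked")
        self._ram[addr] = self._xor(plaintext)                            # confidentiality
        self._mac[addr] = hashlib.sha256(self._key + plaintext).digest()  # integrity

    def read(self, addr: str, *, inside_enclave: bool) -> bytes:
        if not inside_enclave:
            raise PermissionError("access to enclave memory blocked")
        plaintext = self._xor(self._ram[addr])
        if hashlib.sha256(self._key + plaintext).digest() != self._mac[addr]:
            raise ValueError("tampering detected")
        return plaintext

enclave = ToyEnclave(key=b"sixteen byte key")
enclave.write("secret", b"attestation key", inside_enclave=True)
assert enclave.read("secret", inside_enclave=True) == b"attestation key"
try:
    enclave.read("secret", inside_enclave=False)  # e.g. a hostile kernel
except PermissionError as e:
    print(e)  # access to enclave memory blocked
```

What Foreshadow breaks is exactly the access check modeled by the inside_enclave test: speculative execution lets data slip out through the level 1 cache before that check takes effect.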

[...] As with many of the other speculative execution issues, a large part of the fix comes in the form of microcode updates, and in this case, the microcode updates have already been released and in the wild for some weeks. With the updated microcode, every time the processor leaves execution of an enclave, it also flushes the level 1 cache. With no data in level 1 cache, there's no scope for the L1TF (L1 Terminal Fault) to take effect. Similarly, with the new microcode, leaving management mode flushes the level 1 cache, protecting SMM data.

Also at Engadget and Wired.


Original Submission


  • (Score: 4, Insightful) by jmorris on Wednesday August 15 2018, @03:09AM (9 children)

    by jmorris (4844) on Wednesday August 15 2018, @03:09AM (#721659)

    Anyone remember other incidents where security updates for the same package became a regular occurrence just to fix exploits? Sendmail? Flash Player? Windows SMB networking? Now it is x86 CPUs, always Intel and sometimes AMD as well, getting the monthly big splashy bug. It is quickly becoming clear that you can have caching or you can have speculative execution, but it is proving all but impossible to have both safely. And without both, retiring more than one instruction per clock is going to become a fond memory. Clock speeds do not appear able to ramp up enough to make up the loss, and rededicating the silicon from speculative execution to more execution units is not a solution, since few software developers can manage to keep the ones we have now busy; hence SpeedBoost.

    Now consider that the massive advantage in raw performance that x86 enjoys over ARM is mostly due to higher IPC, which is largely due to better speculative execution, which in turn comes from having a crapload more transistors per core. The future is about to become hard to predict.

    We might have run into a dead end, but it is going to be very hard to give up the massive performance we have now, especially since we already "spent" the gains on a whole new generation of even shittier software developed by semi-literate monkeys copy/pasting from stackexchange. Old timers remember when we didn't think things could get worse than VB code monkeys flinging virtual poo into production systems... and then it did. Could we even reverse course if we wanted to, at this late date?

    • (Score: 1, Interesting) by Anonymous Coward on Wednesday August 15 2018, @03:19AM

      by Anonymous Coward on Wednesday August 15 2018, @03:19AM (#721662)

      When I saw this pop up I predicted this exact thing. https://github.com/xoreaxeaxeax/sandsifter [github.com]

      Intel and AMD should have been doing this years ago. Most of what I am seeing is the exact same mistakes the software guys have been working around for years: buffer overflows and underruns.

      Basically most of them amount to: run this battery of instructions, and now bytes 1/2/3 in some other buffer outside of this range are different. That should not happen. The registers and buffers are not being cleared/reset correctly on context switches and branch predictions. Going back and auditing all of this is going to be a huge mess, because they have not been doing it *at* *all*.

    • (Score: 2) by MichaelDavidCrawford on Wednesday August 15 2018, @05:13AM (3 children)

      by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Wednesday August 15 2018, @05:13AM (#721685) Homepage Journal

      Multithreaded code may well be Rocket Science, but are we not Rocket Scientists?

      --
      Yes I Have No Bananas. [gofundme.com]
      • (Score: 3, Insightful) by takyon on Wednesday August 15 2018, @05:55AM

        by takyon (881) <takyonNO@SPAMsoylentnews.org> on Wednesday August 15 2018, @05:55AM (#721692) Journal

        Software developers are indeed a danger to society.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Wednesday August 15 2018, @07:37AM (1 child)

        by Anonymous Coward on Wednesday August 15 2018, @07:37AM (#721704)

        Multithreaded code may well be Rocket Science, but are we not Rocket Scientists?

        You should at least know what you are talking about. All these exploits have nothing to do with multithreading.

        • (Score: 0) by Anonymous Coward on Thursday August 16 2018, @12:07AM

          by Anonymous Coward on Thursday August 16 2018, @12:07AM (#721964)

          Indirectly it does: if the mitigation for these attacks is to disable speculative execution, that will mean worse performance. Greater parallelism (using more cores) could make up for that.

    • (Score: 2) by maxwell demon on Wednesday August 15 2018, @09:58AM

      by maxwell demon (1608) on Wednesday August 15 2018, @09:58AM (#721724) Journal

      Seems to be quickly becoming clear you can have caching or you can have speculative execution but it is proving to be all but impossible to have both.

      Maybe the processor should simply allow toggling those optimizations. Then code that is not security critical (like games or scientific calculations) could run at full speed, while security-critical code would take reduced performance in exchange for no processor-induced security issues.

      --
      The Tao of math: The numbers you can count are not the real numbers.
    • (Score: 2) by HiThere on Wednesday August 15 2018, @05:42PM (2 children)

      by HiThere (866) Subscriber Badge on Wednesday August 15 2018, @05:42PM (#721860) Journal

      I disagree that "Clock speeds do not appear possible to ramp up enough to make up the loss and rededicating the silicon from speculative execution to more execution units is not a solution since few software developers can manage to keep the ones we have now busy, hence SpeedBoost.". It requires a different approach to programming, but I can think of one relatively simple approach that would be easy with proper language support. (I'm talking about message passing, where the messages are immutable.) There are other approaches that are a bit more difficult, but with proper language support message passing is relatively trivial. You do need a different selection of algorithms to take maximal advantage of it, however.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
      • (Score: 2) by jmorris on Wednesday August 15 2018, @07:52PM (1 child)

        by jmorris (4844) on Wednesday August 15 2018, @07:52PM (#721904)

        People like you have been saying this for a couple of decades now. If it were as easy as you keep saying, it would have happened by now. Parallel programming is hard and filled with subtle errors and security exploits.

        • (Score: 2) by HiThere on Thursday August 16 2018, @12:45AM

          by HiThere (866) Subscriber Badge on Thursday August 16 2018, @12:45AM (#721977) Journal

          Sure, if you pass mutable data. Not if you only pass immutable data. Unfortunately, C and C++ don't encourage safe parallel programming at all. Go is a lot better, but far from perfect. And too many approaches assume all the nodes will be running identical code.

          I haven't checked out Julia recently, but the last time I looked it didn't support message passing well, and was mainly useful for matrix manipulation. D doesn't allow you to pass "channels" between nodes. Etc. The language support to make it easy is missing, but the basic pieces are reasonably easy. E.g., D has truly immutable values, Go has channels that operate the right way, etc.

          Perhaps the promised Ruby Guilds will do the job, but Ruby is not a fast language. Even if they get the promised tripling of the speed it won't be fast. But reasonably fast languages CAN do the job, and my preference for garbage collected languages doesn't imply that that's necessary for a good message passing language. (OTOH, do note that for this to work you *cannot* cast away immutability, and should not be able to. Otherwise this can lead to a huge amount of copying.)

          --
          Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
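A minimal sketch of the message-passing style discussed above, in Python rather than the languages HiThere names (the worker/inbox/outbox names are invented for the example): threads exchange only immutable tuples over queues, so the data itself never needs locking.

```python
import queue
import threading

def worker(inbox, outbox):
    # Messages are immutable tuples; the worker never shares mutable
    # state with the sender, so no locks guard the payload itself.
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel: shut down
            break
        tag, payload = msg
        outbox.put((tag, payload * payload))

inbox, outbox = queue.Queue(), queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()
for i in range(4):
    inbox.put(("square", i))
inbox.put(None)
t.join()
results = sorted(outbox.get() for _ in range(4))
print(results)  # [('square', 0), ('square', 1), ('square', 4), ('square', 9)]
```

The queue plays the role of a Go channel; the immutability HiThere insists on is by convention here (tuples), which is exactly the gap in language support the comment is pointing at.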
  • (Score: 4, Insightful) by Arik on Wednesday August 15 2018, @05:00AM (1 child)

    by Arik (4543) on Wednesday August 15 2018, @05:00AM (#721681) Journal
    The problem is it's unfree, heck, positively opaque.

    Who knows how many other bugs are waiting to be discovered?
    --
    If laughter is the best medicine, who are the best doctors?
    • (Score: 2) by HiThere on Wednesday August 15 2018, @05:48PM

      by HiThere (866) Subscriber Badge on Wednesday August 15 2018, @05:48PM (#721863) Journal

      Try both. The microcode is buggy, and needs to be fixed, and the microcode is not going to be subject to "many eyes" because it's quite difficult to understand compared to assembler code. Open Source would help, because then a few more people would have an idea of just how trustworthy the code was, and they could help the less informed. And they could also comment about ways the manufacturer was "gaming the system", which, of course, is the main reason they aren't about to reveal the code.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
  • (Score: 1, Insightful) by Anonymous Coward on Wednesday August 15 2018, @08:05AM (6 children)

    by Anonymous Coward on Wednesday August 15 2018, @08:05AM (#721705)

    What got me thinking is how much energy is wasted through this speculative execution. The values do get calculated, which costs energy, but are discarded afterwards. In terms of energy efficiency this seems very poor. On a single system it might not be much (a few milli- or nanowatts, maybe), but summed over all computers worldwide this could run into megawatts of wasted energy.
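For a sense of scale, here is a back-of-envelope sketch of that claim. Every number in it is an invented assumption for illustration, not a measurement:

```python
# All figures below are illustrative assumptions, not measurements.
cpus_worldwide = 2e9        # assumed number of active CPU cores
avg_dynamic_power_w = 10.0  # assumed average dynamic power per core, in watts
squash_fraction = 0.05      # assumed 5% of speculative work is squashed

wasted_w_per_core = avg_dynamic_power_w * squash_fraction
total_wasted_mw = cpus_worldwide * wasted_w_per_core / 1e6
print(f"{total_wasted_mw:.0f} MW")  # 1000 MW under these assumptions
```

Even with these made-up inputs, the point stands: a few percent of per-core power, multiplied by billions of cores, lands in the megawatt-to-gigawatt range.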

    • (Score: 2) by sce7mjm on Wednesday August 15 2018, @10:15AM (2 children)

      by sce7mjm (809) on Wednesday August 15 2018, @10:15AM (#721729)

      A bit like bitcoin?

      • (Score: 0) by Anonymous Coward on Wednesday August 15 2018, @10:24AM

        by Anonymous Coward on Wednesday August 15 2018, @10:24AM (#721730)

        More like a light left burning in a place where nobody is.

      • (Score: 2) by coolgopher on Wednesday August 15 2018, @12:55PM

        by coolgopher (1157) on Wednesday August 15 2018, @12:55PM (#721753)

        Cue speculative blockchain implementation jokes...

    • (Score: 2) by HiThere on Wednesday August 15 2018, @05:51PM

      by HiThere (866) Subscriber Badge on Wednesday August 15 2018, @05:51PM (#721864) Journal

      It can't be *that* bad, because Intel chips have the reputation of running cooler than AMD chips. Or maybe that's the energy they save by not doing security checks properly.

      --
      Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
    • (Score: 0) by Anonymous Coward on Wednesday August 15 2018, @09:01PM (1 child)

      by Anonymous Coward on Wednesday August 15 2018, @09:01PM (#721918)

      It's not so simple. By completing the computation faster, you free the CPU earlier, and the system can low-power idle earlier.

      I saw this at the macro scale under a pretty stable load: tuning machines to a "hotter" configuration (faster multiplier, higher vcore) led to less wall power, because the systems were completing work and idling for larger percentages of the time, and peripherals powering down was a thing back then; this was in the spinning-rust days. It's not hard to construct artificial scenarios at the microcode level where the same occurs.

      If you want a car analogy: gas-wise, it's better to drive 10 km at 20 kph than at 1 kph, because idling for 10 hours is terrible, and running the A/C and the music and so on for 10 hours is not energy-free.
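The parent's race-to-idle point can be put into numbers. The figures below are invented, chosen only to make the arithmetic easy to follow:

```python
# Race-to-idle: the same job run at two speeds inside a fixed one-second
# window; whatever time is left over is spent in low-power idle.
# All figures are illustrative assumptions.
job_ops = 1e9
idle_w = 2.0
window_s = 1.0

def energy_joules(ops_per_s, active_w):
    t_active = job_ops / ops_per_s
    t_idle = window_s - t_active
    return active_w * t_active + idle_w * t_idle

fast = energy_joules(ops_per_s=2e9, active_w=30.0)  # hotter, done in 0.5 s
slow = energy_joules(ops_per_s=1e9, active_w=20.0)  # cooler, busy all second
print(fast, slow)  # 16.0 20.0 : the faster config uses less total energy
```

The "hotter" configuration draws 50% more power while active but still wins on total energy, because it spends half the window at 2 W instead of the whole window at 20 W.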

      • (Score: 0) by Anonymous Coward on Saturday August 18 2018, @11:54AM

        by Anonymous Coward on Saturday August 18 2018, @11:54AM (#723099)

        But an idle computer is also a terrible waste. A lot of resources went into putting the machine together; ideally it should run at 100% for all its life!

        Definitely not simple.
