
posted by martyb on Saturday February 16 2019, @02:08PM
from the so-that-means...-we-are-screwed dept.

Related Stories

Patch for Intel Speculative Execution Vulnerability Could Reduce Performance by 5 to 35% [Update: 2] 103 comments

UPDATE 2: (martyb)

This still-developing story is full of twists and turns. It seems that Intel chips are definitely implicated (AFAICT anything post Pentium Pro). There have been various reports, and denials, that AMD and ARM are also affected. There are actually two vulnerabilities being addressed. Reports are that a local user can access arbitrary kernel memory and that, separately, a process in a VM can access contents of other virtual machines on a host system. These discoveries were embargoed for release until January 9th, but were pre-empted when The Register first leaked news of the issues.

At this time, manufacturers are scrambling to make statements on their products' susceptibility. Expect a slew of releases of urgent security fixes for a variety of OSs, as well as mandatory reboots of VMs on cloud services such as Azure and AWS. Implications are that there is going to be a performance hit on most systems, which may have cascading follow-on effects for performance-dependent activities like DB servers.

To get started, see the very readable article at Ars Technica: What’s behind the Intel design flaw forcing numerous patches?

Google Security Blog: Today's CPU vulnerability: what you need to know.
Google Project Zero: Reading privileged memory with a side-channel, which goes into detail about which problems are being addressed and lists the relevant CVEs.

Qualcomm Joins Others in Confirming its CPUs Suffer From Spectre, and Other Meltdown News 31 comments

Arthur T Knackerbracket has found the following story:

Qualcomm has confirmed its processors have the same security vulnerabilities disclosed this week in Intel, Arm and AMD CPU cores.

The California tech giant picked the favored Friday US West Coast afternoon "news dump" slot to admit at least some of its billions of Arm-compatible Snapdragon system-on-chips and newly released Centriq server-grade processors are subject to the Meltdown and/or Spectre data-theft bugs.

[...] Qualcomm declined to comment further on precisely which of the three CVE-listed vulnerabilities its chips were subject to, or give any details on which of its CPU models may be vulnerable. The paper describing the Spectre data-snooping attacks mentions that Qualcomm's CPUs are affected, while the Meltdown paper doesn't conclude either way.

[...] Apple, which too bases its iOS A-series processors on Arm's instruction set, said earlier this week that its mobile CPUs were vulnerable to Spectre and Meltdown – patches are available or incoming for iOS. The iGiant's Intel-based Macs also need the latest macOS, version 10.13.2 or greater, to kill off Meltdown attacks.

Congress Questions Chipmakers About Meltdown and Spectre 54 comments

A Vox Media website reports that Rep. Jerry McNerney (D-CA) wants answers about the recent computer chip chaos.

Congress is starting to ask hard questions about the fallout from the Meltdown and Spectre vulnerabilities. Today, Rep. Jerry McNerney (D-CA) sent a letter (pdf) requesting a briefing from Intel, AMD, and ARM about the vulnerabilities’ impact on consumers.

[...] The two vulnerabilities are “glaring warning signs that we must take cybersecurity more seriously,” McNerney argues in the letter. “Should the vulnerabilities be exploited, the effects on consumers’ privacy and our nation’s economy and security would be absolutely devastating.”

Privately disclosed to chipmakers in June of 2017, the Meltdown and Spectre bugs became public after a haphazard series of leaks earlier this month. In the aftermath, there have been significant patching problems, including an AMD patch that briefly prevented Windows computers from booting up. Intel in particular has come under fire for inconsistent statements about the impact of the bugs, and currently faces a string of proposed class-action lawsuits relating to the bugs.

Meltdown can be fixed through a relatively straightforward operating-system level patch, but Spectre has proven more difficult, and there have been significant patching problems in the aftermath. The most promising news has been Google’s Retpoline approach, which the company says can protect against the trickiest Spectre variant with little negative performance impact.

The letter calls on the CEOs of Intel, AMD, and ARM to answer (among other things) when they learned about these problems and what they are doing about it.

Original Submission

What Impact Has Meltdown/Spectre Had on YOUR Systems? 47 comments

SoylentNews first reported the vulnerabilities on January 3. Since then, we have had a few stories addressing different reports about these vulnerabilities. Now that it is over two weeks later and we are *still* dealing with reboots, I am curious as to what our community's experience has been.

What steps have you taken, if any, to deal with these reports? Be utterly proactive and install every next thing that comes along? Do a constrained roll out to test a system or two before pushing out to other systems? Wait for the dust to settle before taking any steps?

What providers (system/os/motherboard/chip) have been especially helpful... or non-helpful? How has their response affected your view of that company?

What resources have you been using to check on the status of fixes for your systems? Have you found a site that stands above the others in timeliness and accuracy?

How has this affected your purchasing plans... and your expectations on what you could get for selling your old system? Are you now holding off on purchasing something new?

Original Submission

Intel Admits a Load of its CPUs Have Spectre V2 Flaw That Can't be Fixed 40 comments

It seems Intel has had some second thoughts about Spectre 2 microcode fixes:

Intel has issued a new "microcode revision guidance" that confesses it won't address the Meltdown and Spectre design flaws in all of its vulnerable processors – in some cases because it's too tricky to remove the Spectre v2 class of vulnerabilities.

The new guidance (pdf), issued April 2, adds a "stopped" status to Intel's "production status" category in its array of available Meltdown and Spectre security updates. "Stopped" indicates there will be no microcode patch to kill off Meltdown and Spectre.

The guidance explains that a chipset earns "stopped" status because, "after a comprehensive investigation of the microarchitectures and microcode capabilities for these products, Intel has determined to not release microcode updates for these products for one or more reasons."

Those reasons are given as:

  • Micro-architectural characteristics that preclude a practical implementation of features mitigating [Spectre] Variant 2 (CVE-2017-5715)
  • Limited Commercially Available System Software support
  • Based on customer inputs, most of these products are implemented as "closed systems" and therefore are expected to have a lower likelihood of exposure to these vulnerabilities.

Thus, if a chip family falls under one of those categories – such as Intel can't easily fix Spectre v2 in the design, or customers don't think the hardware will be exploited – it gets a "stopped" sticker. To leverage the vulnerabilities, malware needs to be running on a system, so if the computer is totally closed off from the outside world, administrators may feel it's not worth the hassle applying messy microcode, operating system, or application updates.

The "stopped" CPUs that therefore won't get a fix are in the Bloomfield, Bloomfield Xeon, Clarksfield, Gulftown, Harpertown Xeon C0 and E0, Jasper Forest, Penryn/QC, SoFIA 3GR, Wolfdale, Wolfdale Xeon, Yorkfield, and Yorkfield Xeon families. The list includes various Xeons, Core CPUs, Pentiums, Celerons, and Atoms – just about everything Intel makes.

Most [of] the CPUs listed above are oldies that went on sale between 2007 and 2011, so it is likely few remain in normal use.

Original Submission

Intel FPU Speculation Vulnerability Confirmed 12 comments

The Intel® FPU speculation vulnerability has been confirmed. Theo guessed right last week.

Using information disclosed in Theo's talk, Colin Percival developed a proof-of-concept exploit in around 5 hours. This seems to have prompted an early end to an embargo (in which OpenBSD was not involved), and the official announcement of the vulnerability.

Also at The Register, Hot Hardware, and BetaNews.

An update to the article appearing in The Register adds:

A security flaw within Intel Core and Xeon processors can be potentially exploited to swipe sensitive data from the chips' math processing units.

Malware or malicious logged-in users can attempt to leverage this design blunder to steal the inputs and results of computations performed in private by other software.

These numbers, held in FPU registers, could potentially be used to discern parts of cryptographic keys being used to secure data in the system. For example, Intel's AES encryption and decryption instructions use FPU registers to hold keys.

In short, the security hole could be used to extract or guess at secret encryption keys within other programs, in certain circumstances, according to people familiar with the engineering mishap.

Modern versions of Linux – from kernel version 4.9, released in 2016, and later – and modern Windows, including Server 2016, as well as the latest spins of OpenBSD and DragonflyBSD are not affected by this flaw (CVE-2018-3665).

Windows Server 2008 is among the operating systems that will need to be patched, we understand, and fixes for affected Microsoft and non-Microsoft kernels are on their way. The Linux kernel team is back-porting mitigations to pre-4.9 kernels.

Essentially, hold tight, and wait for patches to land for your Intel-powered machines, if they are vulnerable. CVE-2018-3665 isn't the end of the world: malicious software has to be already running on your system to attempt to exploit it, and even then, it can only lift out crumbs at a time.

[...] Red Hat has more technical details, here. RHEL 5, 6, and 7, and Enterprise MRG 2 not running kernel-alt are vulnerable. In a statement to The Register, the Linux vendor clarified that this is a potential task-to-task theft of information.

Original Submission

New Spectre Variant SpectreRSB Targets Return Stack Buffer 5 comments

Threatpost reports:

[A] new Spectre-class exploit, dubbed SpectreRSB, was detailed by researchers from the University of California at Riverside in a research paper [PDF] on Friday. While the flaw still targets the process of speculative execution, unlike other variants, it manipulates a new part of the process called the return stack buffer.

[...] RSB is a common "predictor structure" in CPUs used to predict return addresses during the speculative execution process. It does so by pushing the return address from a call instruction on an internal hardware stack [...]

Since the disclosure of Spectre in January, various variants have consequently been disclosed by researchers – however, these have all targeted the branch predictor unit or cache within the CPU.

[...] Researchers said they have reported SpectreRSB to Intel, AMD and ARM [...]

The Register (CloudFlare-protected) also has an article about SpectreRSB.

"The microarchitecture of Intel, AMD and VIA CPUs" (PDF) by Agner Fog (cited by Wikipedia) has further explanation of what a return stack buffer is:

A Last-In-First-Out buffer, called the return stack buffer, remembers the return address every time a call instruction is executed, and it uses this for predicting where the corresponding return will go. This mechanism makes sure that return instructions are correctly predicted when the same subroutine is called from several different locations. The P1 has no return stack buffer, but uses the same method for returns as for indirect jumps. Later processors have a return stack buffer. [...]

Original Submission

Intel Discloses a Speculative Execution Attack in Software Guard eXtensions (SGX) 19 comments

Intel's SGX blown wide open by, you guessed it, a speculative execution attack

Another day, another speculative execution-based attack. Data protected by Intel's SGX—data that's meant to be protected even from a malicious or hacked kernel—can be read by an attacker thanks to leaks enabled by speculative execution.

Since publication of the Spectre and Meltdown attacks in January this year, security researchers have been taking a close look at speculative execution and the implications it has for security. All high-speed processors today perform speculative execution: they assume certain things (a register will contain a particular value, a branch will go a particular way) and perform calculations on the basis of those assumptions. It's an important design feature of these chips that's essential to their performance, and it has been for 20 years.

Intel 'Gags' Linux Distros From Revealing Performance Hit From Spectre Patches 37 comments


Open-source champion Bruce Perens has called out Intel for adding a new restriction to its software license agreement along with its latest CPU security patches to prevent developers from publishing software benchmark results.

The new clause appears to be a move by Intel to legally gag developers from revealing performance degradation caused by its mitigations for the Spectre and Foreshadow, or 'L1 Terminal Fault' (L1TF), speculative attack flaws.

"You will not, and will not allow any third party to ... publish or provide any software benchmark or comparison test results," Intel's new agreement states.

[...] Another section of the license blocking redistribution appears to have caused maintainers of Debian to withhold Intel's patch too, as reported by The Register.

[...] Updated 12:15pm ET, August 23 2018: An Intel spokesperson responded: "We are updating the license now to address this and will have a new version available soon. As an active member of the open-source community, we continue to welcome all feedback."

Original Submission

MIT Researchers Claim to Have a Solution for Some Speculative Execution Attacks 9 comments

Researchers Claim to Find New Solution to Spectre, Meltdown

The researchers call their solution Dynamically Allocated Way Guard (DAWG) and revealed it in a recent paper. This name stands in opposition to Intel's Cache Allocation Technology (CAT) and is said to prevent attackers from accessing ostensibly secure information through exploiting flaws in the speculative execution process. Best of all, DAWG is said to require very few resources that CAT isn't already using and can be enabled with operating system changes instead of requiring the in-silicon fixes many thought were needed to address the flaws.

[...] Here's how the researchers summarized their approach with DAWG:

"Unlike existing mechanisms such as CAT, DAWG disallows hits across protection domains. This affects hit paths and cache coherence, and DAWG handles these issues with minimal modification to modern operating systems, while reducing the attack surface of operating systems to a small set of annotated sections where data moves across protection domains, or where domains are resized/reallocated. Only in these handful of routines, DAWG protection is relaxed, and other defensive mechanisms such as speculation fences are applied as needed."

Also at TechCrunch and Engadget.

Original Submission

Spectre, Meltdown Researchers Unveil 7 More Speculative Execution Attacks 12 comments

Back at the start of the year, a set of attacks that leveraged the speculative execution capabilities of modern high-performance processors was revealed. The attacks were named Meltdown and Spectre. Since then, numerous variants of these attacks have been devised. In tandem, a range of mitigation techniques has been created to enable at-risk software, operating systems, and hypervisor platforms to protect against these attacks.

A research team—including many of the original researchers behind Meltdown, Spectre, and the related Foreshadow and BranchScope attacks—has published a new paper disclosing yet more attacks in the Spectre and Meltdown families. The result? Seven new possible attacks. Some are mitigated by known mitigation techniques, but others are not. That means further work is required to safeguard vulnerable systems.

The previous investigations into these attacks have been a little ad hoc in nature: examining particular features of interest to provide, for example, a Spectre attack that can be performed remotely over a network or a Meltdown-esque attack to break into SGX enclaves. The new research is more systematic, looking at the underlying mechanisms behind both Meltdown and Spectre and running through all the different ways the speculative execution can be misdirected.

Original Submission

New Side-Channel Leak: Researchers Attack Operating System Page Caches 10 comments

Some of the computer security boffins who revealed last year's data-leaking speculative-execution holes have identified yet another side-channel attack that can bypass security protections in modern systems.

While side channel attacks like Spectre and Meltdown exploited chip design flaws to glean privileged information, this one is hardware agnostic, involves the Windows and Linux operating system page cache, and can be exploited remotely, within limits.

In a paper provided to The Register in advance of distribution early next week through ArXiv, researchers from Graz University of Technology, Boston University, NetApp, CrowdStrike, and Intel – Daniel Gruss, Erik Kraft, Trishita Tiwari, Michael Schwarz, Ari Trachtenberg, Jason Hennessey, Alex Ionescu, and Anders Fogh – describe a way to monitor how certain processes access memory through the operating system page cache.

"We present a set of local attacks that work entirely without any timers, utilizing operating system calls (mincore on Linux and QueryWorkingSetEx on Windows) to elicit page cache information," wrote the researchers. "We also show that page cache metadata can leak to a remote attacker over a network channel, producing a stealthy covert channel between a malicious local sender process and an external attacker."

Original Submission

  • (Score: 4, Funny) by Runaway1956 on Saturday February 16 2019, @02:12PM (1 child)

    by Runaway1956 (2926) Subscriber Badge on Saturday February 16 2019, @02:12PM (#802033) Homepage Journal

    they are just speculating.

    "I didn't lose to him!" - The Donald referring to Trippin' Joe
    • (Score: 2) by c0lo on Saturday February 16 2019, @10:14PM

      by c0lo (156) Subscriber Badge on Saturday February 16 2019, @10:14PM (#802203) Journal

      "Spectral analysis", in sciency terms.

  • (Score: 2) by opinionated_science on Saturday February 16 2019, @02:14PM (1 child)

    by opinionated_science (4031) on Saturday February 16 2019, @02:14PM (#802034)

    the paper is actually a quite nice breakdown of modern cpu design.

Having a model should, in principle, make it possible to formally design a better CPU.

    *popcorn request*

  • (Score: 4, Insightful) by looorg on Saturday February 16 2019, @02:22PM (5 children)

    by looorg (578) on Saturday February 16 2019, @02:22PM (#802035)

So what they are really saying is "we know it's broken, but we ain't gonna do anything about it". Probably some MBA is going to chime in here and there about how expensive it would be to fix, how it's not really an issue, and how we should all just plonk down a lot of money to buy the new deluxe CPU with extra GHz, cause that's the way God-man intended it to be.

    • (Score: 0) by Anonymous Coward on Saturday February 16 2019, @02:38PM (4 children)

      by Anonymous Coward on Saturday February 16 2019, @02:38PM (#802042)

This isn't about consumers replacing their computers. Any major changes to CPU architecture would be expensive, and the new hardware would need to be sold to enterprises and data centers. Those are the environments where Spectre and Meltdown are the most dangerous.

      Unfortunately, the companies that buy new hardware based on new CPU designs will not have a great market for selling their used equipment. "We replaced this stuff because it is a security risk ... so, you wanna buy it?"

      • (Score: 2) by zocalo on Saturday February 16 2019, @07:48PM (3 children)

        by zocalo (302) on Saturday February 16 2019, @07:48PM (#802151)

        Unfortunately, the companies that buy new hardware based on new CPU designs will not have a great market for selling their used equipment. "We replaced this stuff because it is a security risk ... so, you wanna buy it?"

That's somewhat illogical. You're essentially saying CPU vendors shouldn't try and fix the flaw because of those few users that re-sell hardware rather than re-purposing/scrapping it when its initial reason for purchase has been completed. That only works if they have a cartel; otherwise the one-eyed man is king, and whichever CPU vendor has the least exposure to the flaw is going to have an advantage in selling their hardware. Keeping in mind that sales will be even higher if they can scupper the used market as a security risk as well, do you think Intel, AMD, ARM, et al. care more about making a profit from new hardware, or a second-hand market they get no direct benefit from?

        UNIX? They're not even circumcised! Savages!
        • (Score: 0) by Anonymous Coward on Saturday February 16 2019, @08:40PM (2 children)

          by Anonymous Coward on Saturday February 16 2019, @08:40PM (#802171)

          You're essentially saying CPU vendors shouldn't try and fix the flaw because of those few users that re-sell hardware rather than re-purposing/scrapping it when its initial reason for purchase has been completed.

          Not at all. I'm saying the enterprise and data center markets are going to have to buy the new, more secure hardware and will get stuck with the old hardware (rather than selling it down market, as they currently do). So the new CPUs, which will be very expensive because of the significant costs associated with redesigning a major feature, will hit the large purchasers even harder than normal hardware upgrades do.

          • (Score: 3, Insightful) by zocalo on Saturday February 16 2019, @10:16PM (1 child)

            by zocalo (302) on Saturday February 16 2019, @10:16PM (#802204)
Yes, they will. Tough. That's still no reason why CPU vendors shouldn't try and fix this for good, in the CPU hardware, ASAP. If anything the enterprise/DC customers are the ones who are most exposed to Spectre, especially if they are in the VM hosting business, and therefore they (and any customers) have the most to lose, so it's a simple business decision for them. I can't imagine a clueful customer looking for a secure VPS for something like an online ordering system is going to opt for a hosting provider that doesn't offer the latest in Spectre mitigations over one that does, can you?

Yes, they'll have to buy new hardware at some point (as will anyone else who cares about Spectre) and selling downmarket is going to mean either lower prices and/or finding buyers that don't care about Spectre, e.g. private compute clouds on segregated networks. What they can do though is phase it in gradually; "Latest & Greatest Spectre-proof CPU VPS - $20/mo" vs "Older CPU with Microcode/OS Spectre mitigations VPS - $15/mo". After that, it's just supply and demand, same as with any other phased roll out of the latest hardware.
            UNIX? They're not even circumcised! Savages!
  • (Score: 0) by Anonymous Coward on Saturday February 16 2019, @02:56PM (20 children)

    by Anonymous Coward on Saturday February 16 2019, @02:56PM (#802046)

    Can you make a CPU that runs fast and doesn't have this issue?

    One description of the problem is that the program can get the speculative parts of the CPU to gather protected information and then use it to adjust the CPU state.
    For example, a user program causing the CPU to read a bit in kernel memory and changing the cache state depending on the value.

It is one thing to read beyond your privilege, but quite another to use the result.
    Perhaps results should include a tag of privilege and their use should require a matching tag from the instruction stream?

    That would require more logic, but hopefully not knowing something before you know it.

    • (Score: 3, Insightful) by Arik on Saturday February 16 2019, @03:12PM (11 children)

      by Arik (4543) on Saturday February 16 2019, @03:12PM (#802051) Journal
      "Can you make a CPU that runs fast and doesn't have this issue?"

      As I recall in the late 90s the fastest processors were more simplified, things like the DEC Alpha took a more direct path to speed and it did work. The market didn't reject them because they weren't fast. It was more about the industry addiction to blob compatibility.

      Someone more involved in modern RISC might chime in here.
      - Sig not found. Self destruct initiated. Please clear the area.
      • (Score: 0) by Anonymous Coward on Saturday February 16 2019, @03:56PM

        by Anonymous Coward on Saturday February 16 2019, @03:56PM (#802062)

        I would assume that Alpha would have been exploitable in this manner.

        But as you point out, it was simpler, so perhaps easier to fix?

      • (Score: 3, Insightful) by RS3 on Saturday February 16 2019, @05:02PM (5 children)

        by RS3 (6367) on Saturday February 16 2019, @05:02PM (#802082)

        Absolutely agree.

        > It was more about the industry addiction to blob compatibility.

        My angle: driven by short-term mass profit.

        RISC CPUs are also vulnerable, although slightly less so. We're currently seeing a rise in RISC, much of it ARM and ARM-Cortex processors- Chromebook, phones, more RISC-based laptops being announced. I think if RISC, like Alpha, had taken off 20 years ago I suspect we'd be no better off, because the vulnerabilities affect RISC too, and profit-driven CPU development would have ignored the pitfalls.

        To me it's the same old story. I often cite the space shuttle Challenger disaster where the engineers pleaded to cancel the launch, but greedy managers overruled them. Not sure how that political structure evolved where the people who truly _know_ what's going on do not have final decision power. I suspect many engineers knew about the Spectre and Meltdown problems but were hushed. I'd love to see the results of a future investigation. My cynical side perceives that the general public is becoming sick of and numb to all of the vulnerabilities, data leaks, etc., and it's not "viral" anymore and they just want to hear about the next hot topic.

        Still trying to understand the details. Articles are too long, too deep, or too vague. My hunch at this point is that the cache controllers do not honor the CPU's memory protection boundaries. If that's the case, I doubt that even CPU microcode can fix it, but a future hardware design is needed that incorporates the cache controller fully into the memory control system.

        • (Score: 2) by Arik on Saturday February 16 2019, @08:15PM (4 children)

          by Arik (4543) on Saturday February 16 2019, @08:15PM (#802163) Journal
          "RISC CPUs are also vulnerable, although slightly less so."

          Which RISC CPUs use speculative execution?

          I don't remember either the Alpha or the PPC using it. Rather thought it was introduced specifically to make the superscalar x86 architecture work.
          - Sig not found. Self destruct initiated. Please clear the area.
          • (Score: 2) by RS3 on Saturday February 16 2019, @08:51PM (2 children)

            by RS3 (6367) on Saturday February 16 2019, @08:51PM (#802175)

            Oh gosh, Arik, thanks for asking, but I'm not sure why this happens so much online: I never said RISC CPUs use speculative execution. I was only parroting what I read in many online articles about vulnerabilities, and they all say that RISC is also vulnerable.

That said, after a quick search on terms like "RISC" "ARM" "vulnerable" you can find many articles. Many will state that ARM is vulnerable to Spectre but not Meltdown. Many refer to ARM's "speculative execution". ARM is generally considered RISC. I'm not sure how to define RISC vs. CISC, and it may be that speculative execution is okay to be included in a pedantically defined RISC processor. Here's some good reading on the subject- especially the paragraphs containing "RISC" and the AMD 29000.

            • (Score: 2) by Arik on Saturday February 16 2019, @09:04PM (1 child)

              by Arik (4543) on Saturday February 16 2019, @09:04PM (#802179) Journal
Thanks for the reply. AC already provided an interesting link taking it back further. ARM is generally considered RISC and I knew some ARM architectures did it, but few if any implementations are "pure" so I thought it was a reasonable question.
              - Sig not found. Self destruct initiated. Please clear the area.
              • (Score: 3, Interesting) by RS3 on Tuesday February 19 2019, @07:40AM

                by RS3 (6367) on Tuesday February 19 2019, @07:40AM (#803400)

                Sorry- verbal skills are my weakest suit. I try to be as clear as possible and people always find a way to misunderstand. Your question was absolutely okay- I was just trying to clarify what I wrote. I keep having a problem here (mostly here, and it just happened 2 more times) where people extrapolate from something I write, but then pin that extrapolation back on me, in a kind of accusatory way, and demand I defend something I never wrote, and is false and I disagree with. You weren't being accusatory at all; I'm just frustrated that I can't seem to write clearly the first time around.

                What I meant to write was: there are many vulnerabilities, not just speculative execution, so a CPU which does not do speculative execution can still be vulnerable.

                And repeating myself from earlier, it seems the problem is that the cache controller does not know memory protection boundaries, and if that's true, that's a horrible error. I'm still searching for a clarification on that possibility.

          • (Score: 2, Interesting) by Curlsman on Monday February 18 2019, @08:55PM

            by Curlsman (7337) on Monday February 18 2019, @08:55PM (#803173)

            Alpha EV6 (21264) used out-of order execution:
            "The Alpha 21264 microprocessor is a highly out-of-order, superscalar implementation of the Alpha architecture."


            And the OpenVMS OS designers believe they are resistant:
            "VSI OpenVMS is NOT vulnerable to this issue, primarily due to its different, four-mode architecture. Specifically, VSI OpenVMS is protected against CVE-2018-8897 because it does two things differently than other operating systems:

            1) OpenVMS doesn’t rely on the CS pushed in the interrupt stack frame to determine the previous mode. This means OpenVMS cannot be tricked into believing it was already in kernel mode when it was not, which is central to this vulnerability.

            2) OpenVMS uses a different method to switch GSBASE; OpenVMS always performs the switch and makes sure the user-mode GSBASE is always updated to match the kernel-mode GSBASE."

      • (Score: 1, Informative) by Anonymous Coward on Saturday February 16 2019, @08:38PM (3 children)

        by Anonymous Coward on Saturday February 16 2019, @08:38PM (#802170)

        Alpha had simple branch prediction. They wanted to go all in with it for the EV8.

        • (Score: 2) by Arik on Saturday February 16 2019, @08:44PM (2 children)

          by Arik (4543) on Saturday February 16 2019, @08:44PM (#802174) Journal

          Well the Amiga proved you don't actually need a fast CPU if you design everything else around it I suppose.
          - Sig not found. Self destruct initiated. Please clear the area.
          • (Score: 2) by RS3 on Saturday February 16 2019, @08:58PM (1 child)

            by RS3 (6367) on Saturday February 16 2019, @08:58PM (#802176)

            That's a great point. I never had my hands on an Amiga but always admired them. I think they made better use of a sort of distributed processing, with more intelligent peripherals, but I may be wrong. Probably much cleaner, tighter code too. I've always been surprised (annoyed) by how much work most CPUs do that could be done by auxiliary processors. I can't remember specifics, but I clearly remember machines where the main (and only) CPU did RAM refresh, CRT character scanning, etc.

            • (Score: 2) by Arik on Saturday February 16 2019, @10:09PM

              by Arik (4543) on Saturday February 16 2019, @10:09PM (#802200) Journal
              No, you're right.

              It had dedicated chipsets to offload much of the work onto, and tight code? I haven't examined the code myself, though IIRC it was leaked a few years ago, but that was definitely my impression. This was the end of the classic microcomputer days: OS code wasn't something written in a high-level language and then trusted to the compiler, it was typically hand-massaged by people who read 8-bit. Even application code normally got that treatment, after some profiling to see which loops got executed most often (we're all lazy, and we'd often not get around to optimizing the bits that didn't get called often. Unless we were running out of storage space.)

              It had sound and video systems that pretty much did their job all on their lonesome - the CPU pointed them in the right direction and they took it from there. The CPU doesn't need to be all that fast in that position - it just needs to do what a CPU traditionally does, what a Z80 did well enough and fast enough for most things. It executes the main logic of the program and runs the show behind the scenes. You want a video? Point the vidcard at the file and tell it to go. Want to read a bunch of data from the HDD? Tell the controller what you need and where you want it put, then check back every few cycles to see if it's done yet.

              - Sig not found. Self destruct initiated. Please clear the area.
    • (Score: 3, Insightful) by Dr Spin on Saturday February 16 2019, @03:17PM (1 child)

      by Dr Spin (5239) on Saturday February 16 2019, @03:17PM (#802054)

      Can you make a CPU that runs fast and doesn't have this issue?

      Can you win the race if you cheat?

      Essentially, the risk is that speculation (or anything else) in one thread can affect observable performance in another. This does not need to be possible. However, if you allow a thread to use data that is in the cache because another thread put it there, then you are on the slippery slope to hell - even if you are destined to get there quicker, this might not be a good plan! Threads need to be wholly and completely isolated.

        "But it is not a multi-user environment" has been shown not to be a valid excuse - its not YOUR code running in the browser - the code in the browser belongs to a whole bunch of different malware promoters.
      While not using browsers at all might help, there are in fact, other scenarios (cloud serving) that are even higher risk.

      (Asking strangers to hold your wallet doesn't necessarily work out well either).

      Guns don't kill thousands, presidents kill thousands.
      • (Score: 2) by RS3 on Saturday February 16 2019, @05:16PM

        by RS3 (6367) on Saturday February 16 2019, @05:16PM (#802089)

        The OS is supposed to "sandbox" user processes. That's been a big gripe of mine since 1990ish. Even generic Linux kernels don't do it properly, so we have "hypervisors", which are modified Linux kernels. Some hypervisors are forked Linux kernels, or written from scratch. The point is: IMHO ALL OSes should have a hypervisor incorporated, and separate hypervisors and OS "virtualization" (VMware, Xen, etc.) shouldn't be needed.

        That said, for a hypervisor, or any software-based memory protection to work, the CPU _HAS_ to honor memory boundaries, regardless of cache or speculative execution.

    • (Score: 2, Interesting) by Anonymous Coward on Saturday February 16 2019, @04:05PM (2 children)

      by Anonymous Coward on Saturday February 16 2019, @04:05PM (#802064)

      Maybe we can make software fast enough?
      Seriously, there is a lot of power wasted because some programmer had a deadline too close and used another library on a framework on a library on a non-standard extension to the framework.
      From my experience in IT studies, second-year students get a small assembler course. Most of them have no idea how a low-level program operates nor how to program standard devices. Maybe we should go back to teaching programmers, not users of libraries?
      I know the temptation is big. Rich bosses buy better and better hardware for developers, for open source too, but it has a price, and speculative execution errors are the tip of the iceberg.

      • (Score: 0) by Anonymous Coward on Sunday February 17 2019, @01:49PM (1 child)

        by Anonymous Coward on Sunday February 17 2019, @01:49PM (#802490)

        Not sure where to drop this so I'm putting it here just because.

        Big family, lots of computing. Mac, Windows, Linux. Yes.

        Worked at IBM back in the mid-late '80s on big mainframes, RS6K, AS400...
        We discovered that clients could save millions by putting an AS400 emulator on an RS6000 and also benefit from monstrous performance improvements. Management killed that project and slapped a gag order on us real quick. (Transaction Processing Performance Council -- TPC-A, TPC-C -- I was on the committee that created those tests and also the reports.)

        I'm the type who generally laughs at conspiracy theorists, but I know enough about the nuts and bolts of software, OSes, and upper management types (corporate and government) to recognize some peculiar patterns. When both my Windows and Mac OSes, at different locations, on different networks, start glitching in the same way at the same times, something nefarious is definitely going on. (I don't use the Linux box enough to see the patterns there, so can't say about that one, but some of what systemd does seems awfully suspect to me.)

        I solved the glitching problem by getting a 2007 Mac Pro and a 2008 MacBook Pro and using OS X 10.6 on both of them. It was like a breath of fresh air. These are the fastest computers in my house (and I also have up-to-date Windows machines, a modern MacBook Pro, and a 2013 Mac Pro). BUT the older machines are only faster if they are NOT connected to the Internet. The second I plug them into the net and launch a web browser -- even to just the Google home page -- machine speed goes noticeably slower for all the software on it.

        So instead, the computers I use the most are at least ten years old and connected by wire to an internal network that is NOT connected to the internet. They run great. Added benefit -- I don't have to constantly re-learn how to use my software after every other update. When I need data from online, I get it with the sacrificial laptop and transfer the data via SD card. It's a little inconvenient, but now there's no more glitching. I can work in peace on a good, snappy system.

        If you use ANY modern computing system, you are being eavesdropped on, monitored, manipulated, who knows what. The processor is only the tip of the iceberg. We live in dangerous times.

        (You remember that scene in the Snowden movie where they put their phones in the microwave? Amateurs! The phone can tell when it's in a faraday cage and can still hear soundwaves, and record them, and store them until it's not in a faraday cage anymore, then transmit them. I honestly don't care if they want to monitor me (they may even have a good reason for doing it) but I draw the line when they start impacting my ability to do good work by glitching my system up. That's when I cut them off, or at least, raise the bar so they have to work a little harder.)

        • (Score: 2) by takyon on Sunday February 17 2019, @09:37PM

          by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Sunday February 17 2019, @09:37PM (#802622) Journal

          (You remember that scene in the Snowden movie where they put their phones in the microwave? Amateurs! The phone can tell when it's in a faraday cage and can still hear soundwaves, and record them, and store them until it's not in a faraday cage anymore, then transmit them. I honestly don't care if they want to monitor me (they may even have a good reason for doing it) but I draw the line when they start impacting my ability to do good work by glitching my system up. That's when I cut them off, or at least, raise the bar so they have to work a little harder.)

          Just wrap the phone in foil and put in the fridge or something. Then move into another room. Signal will be dead and it's unlikely to pick up your conversation unless it has year-2050-grade microphone arrays.

          [SIG] 10/28/2017: Soylent Upgrade v14
    • (Score: 3, Interesting) by Anonymous Coward on Saturday February 16 2019, @05:45PM

      by Anonymous Coward on Saturday February 16 2019, @05:45PM (#802106)

      You would need a "reset cache".

      The problem is that the speculative branch does things that shouldn't have been done -- like fetch data from RAM that turns out to be unnecessary. Then, check how fast it is to access something similar from RAM -- no page fault, no delay? It must already be in cache!... simplified, but for example. The problem is: data from RAM was cached. It's now in the cache. The fixes have been to flush the cache when switching program contexts, like from kernel code to user code.

      To fix it, you would need a reset-cache -- anything that was changed in the speculative branch would have to be reset to how it was before the speculative execution took place. So, double your CPU cache. Right? You could shrink it somewhat by changed-cache-block tracking, and keeping track of which execution path changed which blocks, for all the execution paths (usually two?). Resetting anything that changed means you would trigger the same page faults, have the same latency in accessing data as if the speculative execution never happened.

      You could also have a different copy of the cache for each speculative execution branch. Or maybe L1 cache or L2 cache only. Perhaps copy-on-write, completely copying the entire working cache from the execution branch every time there's a speculative execution event.

      It's expensive. Computationally, silicon, duplication of data, it's expensive.

      Bonus: a previously undiscussed speculative execution data-disclosure issue, cache clearing. If you know a way to cause a cache conflict, then you can cause the CPU cache to be cleared of certain data. Suppose you execute a branch that would cause data to be fetched from RAM and placed in a known location in CPU cache shared with other known data, and the other branch doesn't place the data there. Then try to access the original data again -- if it page faults (has to re-fetch because cleared from cache), then speculative data disclosure. Much slower than the other versions, but regardless.

    • (Score: 0) by Anonymous Coward on Sunday February 17 2019, @05:04AM (1 child)

      by Anonymous Coward on Sunday February 17 2019, @05:04AM (#802369)

      The problem is that the speculative execution unit is allowed to access RAM that the program is not authorized to access. Perhaps it is difficult or expensive to apply memory management to the speculative execution unit, but that seems to me to be the obvious solution.

      • (Score: 0) by Anonymous Coward on Sunday February 17 2019, @01:35PM

        by Anonymous Coward on Sunday February 17 2019, @01:35PM (#802486)

        "Perhaps it is difficult..."

        A system call is a coordinated dance between the cpu and os.
        The goal is to make it quick to move between the user and kernel space, but only thru carefully defined call gates.
        The dance takes many clocks and so is in many stages of the pipeline at once.
        Unless the S/W is able to greatly reduce the rate of system calls, speed requires that this involve speculation.

        Speculation means that memory cycles get started before the memory protection check.
        This means the hardware does not first verify that, if the address accesses protected memory, the machine state and code requesting the cycle are on the right side of the call gate.
        It can't, because these things may not be available early enough in the pipeline to prevent the memory cycle from starting.
        This was thought to be OK because eventually the fruits of the cycle would be ignored.
        These bugs demonstrate that they are not completely ignored, because they can affect the cache state.

        This seems to me like the three bears.
        Checking before the cycle start is too early and very slow.
        Checking the state as we do is too late, but very fast.
        Checking after the cycle, but before using the result may be just right.

        In other words, make it OK to start the kernel memory read in user state, but tag the fruits of the cycle with who requested them and where they came from, so that later stages in the pipeline can be more aware of what is happening.
        Later stages need to include the cache update, but the difficult part is: what else?

  • (Score: 2) by Whoever on Saturday February 16 2019, @09:01PM (1 child)

    by Whoever (4524) on Saturday February 16 2019, @09:01PM (#802178) Journal

    Google has a special deal on single tenant nodes .....

    • (Score: 0) by Anonymous Coward on Saturday February 16 2019, @09:47PM

      by Anonymous Coward on Saturday February 16 2019, @09:47PM (#802190)


  • (Score: 2) by Azuma Hazuki on Saturday February 16 2019, @11:04PM (1 child)

    by Azuma Hazuki (5086) on Saturday February 16 2019, @11:04PM (#802228) Journal

    Something I'd been wondering since this came out: isn't the solution not to drop speculative execution entirely, but just to make sure parts of the chip can't read what they have no business reading?

    Ever since I learned what NUMA was, it's occurred to me that individual systems can look like an entire LAN in some ways. And with the ring bus, various DSPs, and now Infinity Fabric and its inevitable future kissing cousins, this analogy only looks set to become even stronger. As no network is secure without a firewall, access controls, and ideally some sort of IDS, maybe CPUs need to be designed this way too.

    And *properly* designed so that this stuff is default-deny; the *last* thing we need is some snooping ring-negative-one coprocessor like the IME on steroids controlling access, because when THAT inevitably gets owned, the entire security model is busted and we're back to square one.

    I am "that girl" your mother warned you about...
    • (Score: 0) by Anonymous Coward on Sunday February 17 2019, @07:05AM

      by Anonymous Coward on Sunday February 17 2019, @07:05AM (#802401)

      Or just make cache cheaper and smaller.

  • (Score: 2) by hendrikboom on Monday February 18 2019, @04:09PM

    by hendrikboom (1125) Subscriber Badge <> on Monday February 18 2019, @04:09PM (#803013) Homepage Journal

    This kind of problem was known to some already in the 70's. Security was a concern.

    During a discussion about a proposed secure OS that would be capable of "letting the CIA and the KGB safely use the same machine", someone raised the prospect that one process could leak data by using memory in such a way as to encode bits by making the machine's paging system alternately thrash or not, while another process would measure performance and thus read the bits.

    The difference is that everything is now faster, making this practical.

    -- hendrik