Linus Torvalds rejects 'beyond stupid' AWS-made Linux patch for Intel CPU Snoop attack
Linux kernel head Linus Torvalds has trashed a patch from Amazon Web Services (AWS) engineers that was aimed at mitigating the Snoop attack on Intel CPUs discovered by an AWS engineer earlier this year. [...] AWS engineer Pawel Wieczorkiewicz discovered a way to leak data from an Intel CPU's memory via its L1D cache, which sits in CPU cores, through 'bus snooping' – the cache updating operation that happens when data is modified in L1D.
In the wake of the disclosure, AWS engineer Balbir Singh proposed a Linux kernel patch that would let applications opt in to flushing the L1D cache when a task is switched out. [...] The feature would allow applications, on an opt-in basis, to call prctl(2) to flush the L1D cache for a task once it leaves the CPU, assuming the hardware supports it.
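For the curious, here is a minimal sketch of what the opt-in might look like from userspace, assuming the speculation-control style prctl(2) interface used by later revisions of the series; the PR_SPEC_L1D_FLUSH constant and its value are taken from those later revisions and may not match the exact patch Torvalds reviewed:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/prctl.h>

    /* Fallbacks for older userspace headers. The values match later
       mainline kernels, but treat them as an assumption here. */
    #ifndef PR_SET_SPECULATION_CTRL
    #define PR_SET_SPECULATION_CTRL 53
    #endif
    #ifndef PR_SPEC_ENABLE
    #define PR_SPEC_ENABLE (1UL << 1)
    #endif
    #ifndef PR_SPEC_L1D_FLUSH
    #define PR_SPEC_L1D_FLUSH 2
    #endif

    int main(void)
    {
        /* Ask the kernel to flush this task's L1D cache whenever the
           task is scheduled off a CPU. Fails (EINVAL/EPERM) when the
           kernel or hardware doesn't support the feature. */
        if (prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_L1D_FLUSH,
                  PR_SPEC_ENABLE, 0, 0) != 0) {
            fprintf(stderr, "L1D flush opt-in failed: %s\n",
                    strerror(errno));
            return 1;
        }
        /* From here on, secrets handled by this task leave the L1D
           when the task leaves the CPU. */
        return 0;
    }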
But, as spotted by Phoronix, Torvalds believes the patch would let applications that opt in degrade CPU performance for other applications.
"Because it looks to me like this basically exports cache flushing instructions to user space, and gives processes a way to just say 'slow down anybody else I schedule with too'," wrote Torvalds yesterday. "In other words, from what I can tell, this takes the crazy 'Intel ships buggy CPU's and it causes problems for virtualization' code (which I didn't much care about), and turns it into 'anybody can opt in to this disease, and now it affects even people and CPU's that don't need it and configurations where it's completely pointless'."
(Score: -1, Redundant) by Anonymous Coward on Friday June 05 2020, @10:30AM (22 children)
What's the problem again? People looking for things to get outraged about or something?
People still keep using Intel anyway.
(Score: 1, Insightful) by Anonymous Coward on Friday June 05 2020, @10:51AM (21 children)
The real solution is 'fix the CPUs'; where you can't, fully isolate the kernel and unrelated userspace processes from each other, or isolate userspace applications from performance-related information. Both of these techniques limit or eliminate the issues in question, but cause massive performance regressions akin to running in-order processors again after 20-22 years.
Having said that, the real solution is simply to go back to simplified in-order x86 chips, get rid of all the 'code obfuscation' bullshit so run-time optimizations can be used, and then use profiling to replace hot functions where a better codepath is possible, not unlike what Java or Mono already do. With an improved prefetcher and the decreased transistor complexity of an in-order chip, we can have more cores or higher clock rates, and with the right prefetching techniques the cache can be ready when the next line of code needs data without hitting the bus. Another benefit is that in-order execution will reduce or eliminate difficult-to-reproduce errors, since race conditions in either hardware or software become fixed, reproducible issues.
But no, performance at all costs must be kept!
(Score: 5, Funny) by takyon on Friday June 05 2020, @11:03AM
Buy AMD, disable SMT, and pray to Linus Torvalds.
[SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
(Score: 3, Interesting) by epitaxial on Friday June 05 2020, @12:12PM (2 children)
Maybe they should look at what IBM does with their POWER architecture. Hell, you can reflash some models from big to little endian if you want.
(Score: 5, Informative) by RamiK on Friday June 05 2020, @01:33PM (1 child)
But unlike the x86, the POWER(8+) architecture's CPI stacks provide more than enough granularity for the kernel's scheduler to compensate for cache flushes: https://openpowerfoundation.org/wp-content/uploads/2015/03/Sadasivam-Satish_OPFS2015_IBM_031615_final.pdf [openpowerfoundation.org]
compiling...
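The slides describe POWER's fine-grained CPI-stack events. As a rough illustration of the underlying metric only, here is a minimal sketch that measures a plain cycles-per-instruction ratio for the calling thread via Linux's generic perf_event_open(2) interface (which works on POWER and x86 alike where the PMU is exposed); it omits the per-stall-reason breakdown that makes CPI stacks useful to a scheduler:

    #define _GNU_SOURCE
    #include <linux/perf_event.h>
    #include <sys/syscall.h>
    #include <sys/ioctl.h>
    #include <unistd.h>
    #include <string.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Open one hardware counter for the calling thread, on any CPU. */
    static int open_counter(uint64_t config)
    {
        struct perf_event_attr attr;
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = config;
        attr.disabled = 1;
        attr.exclude_kernel = 1;
        return (int)syscall(SYS_perf_event_open, &attr, 0, -1, -1, 0);
    }

    int main(void)
    {
        int cyc = open_counter(PERF_COUNT_HW_CPU_CYCLES);
        int ins = open_counter(PERF_COUNT_HW_INSTRUCTIONS);
        if (cyc < 0 || ins < 0) {
            perror("perf_event_open");
            return 1;
        }
        ioctl(cyc, PERF_EVENT_IOC_ENABLE, 0);
        ioctl(ins, PERF_EVENT_IOC_ENABLE, 0);

        volatile unsigned long sum = 0;   /* some work to measure */
        for (unsigned long i = 0; i < 10000000UL; i++)
            sum += i;

        ioctl(cyc, PERF_EVENT_IOC_DISABLE, 0);
        ioctl(ins, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t cycles = 0, instructions = 0;
        if (read(cyc, &cycles, sizeof(cycles)) != sizeof(cycles) ||
            read(ins, &instructions, sizeof(instructions)) != sizeof(instructions)) {
            perror("read");
            return 1;
        }
        printf("CPI: %.2f (%llu cycles / %llu instructions)\n",
               instructions ? (double)cycles / (double)instructions : 0.0,
               (unsigned long long)cycles, (unsigned long long)instructions);
        return 0;
    }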
(Score: 2) by RS3 on Friday June 05 2020, @02:54PM
Interesting ideas, but it looks like even POWER is vulnerable to some of the many attack vectors: https://www.ibm.com/blogs/psirt/potential-impact-processors-power-family/ [ibm.com]
Does anyone know, are RISC CPUs generally less vulnerable to attacks?
From only a little bit of research, it looks like Itanium is one of the least vulnerable CPUs out there: https://secure64.com/2018/01/09/not-vulnerable-intel-itanium-secure64-sourcet/ [secure64.com]
(Score: 1, Funny) by Anonymous Coward on Friday June 05 2020, @01:07PM
Congrats, you've just described one of the vulnerabilities in the CPU ;)
(Score: 3, Funny) by DannyB on Friday June 05 2020, @03:43PM (14 children)
I remember a time once, in a different millennium, when there was an effort to have a reduced, simpler set of machine instructions, and do optimizations in compilers?
But Intel's monopoly took over the world. And made everything worse.
Linux helps by having a huge software base that can (mostly) be recompiled for different architectures. Thus threatening the possibility that alternate ISAs could be used. Restoring Freedom and Competition to the galaxy.
Don't put a mindless tool of corporations in the white house; vote ChatGPT for 2024!
(Score: 2) by maxwell demon on Friday June 05 2020, @05:10PM (2 children)
Wasn't Intel's Itanium meant to be exactly this?
The Tao of math: The numbers you can count are not the real numbers.
(Score: 0) by Anonymous Coward on Friday June 05 2020, @05:17PM
So fitting was the nickname that to this day I read the actual name as a reference to the Titanic.
(Score: 5, Interesting) by TheReaperD on Friday June 05 2020, @05:23PM
No, Itanium was an IP power grab on Intel's part. The licensing was such that Intel would own the IP of all compilers and their code for Itanium systems. Jumping architectures was a roadblock, but it was really the IP issues that spelled Itanium's doom when AMD offered an alternative with the x86-AMD64 instruction set. (And yes, that is the 64-bit x86 instruction set's name, despite Intel's attempt to rename it.)
Ad eundum quo nemo ante iit
(Score: 0) by Anonymous Coward on Friday June 05 2020, @05:14PM
Except we did end up with RISC processors - with instruction decoders strapped to the front end.
(Score: 4, Interesting) by Grishnakh on Friday June 05 2020, @05:18PM (9 children)
I remember a time once, in a different millennium, when there was an effort to have a reduced, simpler set of machine instructions, and do optimizations in compilers?
But Intel's monopoly took over the world. And made everything worse.
You have a very distorted view of history. I worked at Intel during this time; that "reduced simpler set of machine instructions" with "optimizations in compilers" is EXACTLY what Intel was trying to push on everyone with the Itanium CPUs and their EPIC architecture. You know what happened? No one wanted these CPUs. Instead, people were screaming for 64-bit CPUs, and Intel told them, "we have those! They're called Itaniums! Just buy those!!" but people didn't want them (for good reasons: expensive, not compatible with x86 except through a crappy little add-on core that was really slow). They also said "if that's too expensive, just use EM64T!" (which was their shitty 64-bit extension on 32-bit x86 CPUs). Then, AMD came up with their 64-bit version of the x86 architecture, and Intel was *forced* to adopt the amd64 ISA.
If you want to blame someone, blame AMD. But AMD was giving customers what they wanted: something much like the existing x86-32 CPUs, but in 64-bit, and with some significant improvements (like a lot more registers). And Intel/HP's attempt at what you describe was a total disaster. Itanium never achieved what it promised; it turned out that trying to do everything in the compiler just isn't feasible.
(Score: 0) by Anonymous Coward on Friday June 05 2020, @07:37PM
It wasn't the RISC part of "relying on compiler optimizations" that made the Itanic sink, it was the EPIC part. You know, the part where Intel said "I'll see your reduced instruction set, and I'll raise you a VLIW", thereby turning the what-was-supposed-to-be-reduced-and-simple into a giant hellhole of NOPs, because there's only so much parallelism you can extract from everyday, low-level code (which is what everyone is still writing, isn't it?).
Many compilers already had trouble filling MIPS' delayed-branch slot; what was Intel smoking when they decided "it should be a no-brainer for a compiler to always issue three instructions between an assignment to a variable and its use, or between a computation and writing the result of that computation to memory. Oh, and you can't have any branches in between"?
(Score: 2) by sjames on Friday June 05 2020, @10:47PM (7 children)
It didn't help that Intel so over-promised the Itanic, and then did some under-the-table dealing such that Alpha and HP's 64-bit RISC offerings died in anticipation.
Then, of course, when Itanic refused to come even close to its performance promises (even Intel couldn't come up with a compiler that could produce fast code for Itanic), they refused to admit even to themselves that it wasn't really worth ~$10K.
Given pricing on Itanic, AMD would have seen market demand for 64-bit extensions to x86 (at x86 prices) either way.
(Score: 2) by Grishnakh on Saturday June 06 2020, @05:24PM (6 children)
Under the table dealing? I'm not sure what you're talking about; Itanic was a joint venture between HP and Intel. At the time, HP had acquired the Alpha through their acquisition of DEC (or what was left of it), and of course they already had their own 64-bit RISC stuff (PA-RISC). I don't see why they needed any "under the table" dealings to decide they wanted to move to the great, new technology that was Itanium (which of course turned out not to be great at all). Any rational tech company is going to want to end-of-life older, obsolete technologies in favor of newer, better ones. There's a reason Intel doesn't bother with 32-bit CPUs any more, for instance, or the dreadful P4 with Netburst architecture. I don't see how there was anything nefarious here, just bone-headed: they made far too large a gamble that EPIC would work out technically, and it didn't.
Intel's problem was that they were extremely overconfident in their market position (much like IBM a decade earlier), thinking customers were going to stick with them no matter what they did. So they refused to work on good 64-bit extensions to x86, thinking they could just bully customers into buying Itanic, or making-do with EM64T. ("You don't really need anything more than this" was basically what they told customers. This is generally not a good way to run a business; it's the mindset of a monopolist.) Then when AMD came out with x86-64, they hung on for a little while trying to downplay it and convince the public that they didn't need it, but finally they capitulated when they were losing too much marketshare. It was extremely bad leadership at the time, I think largely attributable to Craig Barrett (the CEO). Interestingly, Barrett was an engineer before going into management; his successor, Paul Otellini, really improved things, and he had a finance background IIRC.
Anyway, the whole debacle reminded me of IBM back in the early 90s: they were losing the "clone wars" in desktop PCs, so they got the bright idea of coming out with the PS/2 computers. These were *much* more expensive than clones, and also very incompatible, with 2.88MB floppy drives (which were backwards compatible with 1.44MB and 720k disks), and the MCA bus architecture, which they had patented and for which expansion cards cost a fortune. They refused to join with the rest of the industry and use the competing EISA standard (an upgrade to the older ISA bus that the XT and AT used). Somehow they thought the market would simply abandon the clones and buy these much more expensive IBMs and accessories for them, which of course didn't happen. IBM threw in the towel on MCA around a half-decade later, and eventually sold their PC business off to Lenovo.
(Score: 2) by sjames on Saturday June 06 2020, @06:37PM (5 children)
Hanlon's razor has a tendency to come down to a judgement call, but to me it looks an awful lot like Chinese walls set up to obscure a simple pay-the-competition-to-leave-the-market play, which would surely have triggered anti-trust investigations without the indirection. Of course, when Itanic sank, any such investigation became moot anyway.
Honestly, Itanic's fate shouldn't have been that much of a surprise. After all, the 8086 was supposed to be an I/O co-processor for the much more baroque iAPX 432 (not unlike channel I/O in an IBM mainframe), but the iAPX 432 was such a lumbering hippo that engineers noticed it was faster to offload the computation to the channel processors (the 8086).
Basically, IBM, Intel, and then Microsoft more or less stumbled into relevance.
(Score: 2) by Grishnakh on Saturday June 06 2020, @11:51PM (4 children)
but to me it looks an awful lot like chinese walls to obscure a simple pay the competition to leave the market play, which would surely have triggered anti-trust investigations
Again, I do not understand what you're getting at here. Intel did not pay off AMD in any way: AMD released the x86-64. And Intel wasn't in competition with HP: they teamed up and worked on Itanium jointly. HP engineers were working with Intel engineers on the project.
(Score: 2) by sjames on Sunday June 07 2020, @02:24AM (3 children)
Intel wasn't in competition with HP, but if they hadn't thrown HP a bone, they would have been when they got into PA-RISC's market space. Or DEC's. You don't think it's a coincidence that two significant 64-bit CPUs just happened to exit the market when Intel wanted to introduce Itanic, do you?
AMD shouldered its way in later when they saw that Intel was in a tarpit with 64-bit for the high-end server market, giving them a chance to grab the high-end PC/workstation market with the 64-bit x86-compatible CPU everyone really wanted.
(Score: 2) by Grishnakh on Sunday June 07 2020, @04:50PM (2 children)
You don't think it's a coincidence that two significant 64-bit CPUs just happened to exit the market when Intel wanted to introduce Itanic, do you?
You need to go re-read your history. Two significant 64-bit CPUs exited the market because the companies that owned them wanted to replace them with a newer, better one. It's as simple as that. You mention DEC. There was no DEC at this time. DEC died (probably due to a combination of mismanagement, other big-iron competitors, and the big-iron market drying up due to x86 servers), and pieces of it were bought up by both HP and Intel. HP and Intel decided to partner and work on Itanium. This wasn't some kind of conspiracy, it's what businesses do sometimes. It's no different than Chrysler and Mitsubishi forming Diamond Star Motors and building cars together. There's no law preventing businesses from forming partnerships.
AMD shouldered its way in later when they saw that Intel was in a tarpit with 64-bit for the high-end server market, giving them a chance to grab the high-end PC/workstation market with the 64-bit x86-compatible CPU everyone really wanted.
No, not really: Intel was trying to push the server market to Itanium and not getting much success because it didn't perform as well as promised and it cost a fortune. Intel didn't want there to be any cheap 64-bit CPUs on the market, and wanted the high profits that Itanium promised, and tried to act like a monopolist and force customers to use Itanium. HP partnered with them on this scheme. This is why they killed both PA-RISC and Alpha instead of continuing development of them. But Intel was arrogant and stupid, and somehow didn't think AMD would simply make their own 64-bit x86, so when they did, they were caught off-guard. They tried to downplay it for a while and convince customers they didn't really need it and that Itanic was better, but of course that didn't go very far. Basically, Intel showed a lot of hubris at the time, and it bit them in the ass.
(Score: 2) by sjames on Sunday June 07 2020, @06:57PM (1 child)
Your last paragraph actually summarizes my position. HP and Intel did their best to kill competing processors so they could drive the price of 64-bit CPUs higher, but Itanic wasn't up to the task. AMD saw an opportunity to undercut them with a CPU that would definitely perform at least as well on existing 32-bit code as the current line of processors AND pave the way to 64-bit on the desktop.
The HP-Intel partnership was a typical corporate business deal forged in Hell. Intel believed they could design and fab the processor, but needed HP to kill off competing CPUs: both its own PA-RISC, and Alpha, which was bought up and shelved. The "technical collaboration" was mostly paper to wipe the smell of sulfur off of the deal.
To this day, Intel is still gagging on the bitter pill of having to license x86-AMD64 from AMD.
(Score: 2) by Grishnakh on Sunday June 07 2020, @10:39PM
HP and Intel did their best to kill competing processors so they could drive the price of 64-bit CPUs higher, but Itanic wasn't up to the task. AMD saw an opportunity to undercut them with a CPU that would definitely perform at least as well on existing 32-bit code as the current line of processors AND pave the way to 64-bit on the desktop.
Yes, I'd say this is accurate. But it wasn't some kind of secretive collusion or anything like that; DEC folded, the pieces went to HP and Intel, and they worked together (very much in the open) to develop and push Itanic as the new chip for big servers. Intel completely ignored customer calls for a 64-bit processor as computing needs were growing too large for the 4GB memory limit of the time, and tried to push PAE as a good-enough workaround. AMD saw the market demand and made a chip that the market wanted.
(Score: 2) by shortscreen on Friday June 05 2020, @06:53PM
All the cloud folks can go back to using Pentium MMX for their glorious VMs. Single-user machines that run the same code every day can use the leaky, melty, high-performance CPUs.