Linus Torvalds rejects 'beyond stupid' AWS-made Linux patch for Intel CPU Snoop attack
Linux kernel head Linus Torvalds has trashed a patch from Amazon Web Services (AWS) engineers that was aimed at mitigating the Snoop attack on Intel CPUs discovered by an AWS engineer earlier this year. [...] AWS engineer Pawel Wieczorkiewicz discovered a way to leak data from an Intel CPU's memory via its L1D cache, which sits in CPU cores, through 'bus snooping' – the cache updating operation that happens when data is modified in L1D.
In the wake of the disclosure, AWS engineer Balbir Singh proposed a Linux kernel patch that would let applications opt in to flushing the L1D cache when a task is switched out. [...] The feature would allow an application, on an opt-in basis, to call prctl(2) to flush the L1D cache once its task leaves the CPU, assuming the hardware supports it.
But, as spotted by Phoronix, Torvalds believes the patch would allow opted-in applications to degrade CPU performance for other applications.
"Because it looks to me like this basically exports cache flushing instructions to user space, and gives processes a way to just say 'slow down anybody else I schedule with too'," wrote Torvalds yesterday. "In other words, from what I can tell, this takes the crazy 'Intel ships buggy CPU's and it causes problems for virtualization' code (which I didn't much care about), and turns it into 'anybody can opt in to this disease, and now it affects even people and CPU's that don't need it and configurations where it's completely pointless'."
Amazon are shits and scammers - from experience, I don't trust anything from them.
Let them into the kernel and I expect that they would in time turn your PC into an Amazon shopping appliance.
Time to break up Amazon. Monopolies are wrong!
I only bought from Amazon for friends once or twice, since I boycott them over the retarded one-click-buy patent. The only time I needed one item, I scoured the web until the vendor popped up, and bought direct. The price had also gone down a euro. You don't need Amazon; don't use it.
If they attempt to abuse their monopoly, for example if they somehow turn out to be implicated in the old-school-economy operation based on COVID-19, then yeah, cut it to pieces.
Also vote some politician that instead of mewmewing about equality while fucking the middle class, abolishes custom tax deals, abolishes state funding, enforces direct management of army and emergency service providers and prints money at no debt and/or lets pay taxes in something else.
mewmewing about equality while fucking the middle class
We are the Capitalists.
You will be proletarianized.
Your cultural and genetic distinctiveness will be added to our own.
Resistance is futile.
OMG don't scare them!
Linus' efforts to exclude shits, scammers, and incompetents from kernel contributions are problematic.
Down with the fascist dictator Torvalds, how dare he criticize a patch of peace from the virtuous Amazon who signaled support for peaceful protests that destroy brick and mortar stores.
There was a time when we'd periodically get stories that amounted to "WAAAAA! Linus was mean to me!" And the funny thing was that when I looked into the actual discussion in question, every single time Linus was right and the person making the complaint was wrong. I'm sure he has made some mistakes, but he comes across as quite open to technical arguments when he does so.
If his couple of months of exile taught him how to be nicer about telling someone with a bad patch where to stick it, that's fine, but he's repeatedly demonstrated why he's still the BDFL of Linux despite various efforts to unseat him.
This is something that a few $contributions won't solve, is it?
Some big $$$ companies that wanted to hijack the Linux kernel development have tried, but thankfully, the community wasn't buying their bullshit.
Instead we have the travesty known as systemd, thanks to incompetent buffoons on the Red Hat/IBM payroll. I won't mention any of these motherfucking pieces of shit by name, but one is Lennart Poettering. Yes, I know systemd isn't the kernel for all those pedantic folks.
Seriously though, remember that time Linus took off work for like 2 weeks or something, then came back and presented the world with Git?
Can someone please make Linus go on vacation again so he can write a decent init system?
It should only take him a few weeks... I'd rent one of those giant cement mixers (clean of course) to make dump trucks worth of popcorn to watch Linus go full rage on little bitch face beyond fucktarded Poettering.
I mean it, I'm not even drunk.
How did they obtain "essential business" status everywhere they operated? Unlike Walmart, they weren't selling fresh food.
Because I want my toys! And I want them now! Waaaaaah! And I want them delivered to my doorstep! (sniff!) And now I want food from Amazon. And I want Alexa to tell me what else I want from Amazon. It is essential. How could I live life without it? The sky would fall.
If you haven't ordered from Amazon in a few months: they delayed mailing of non-essential items. For me it took a month to get what I ordered, even though they were Prime items.
working for the bolsheviks, iow
They do sell fresh food along with other essential items like soap, disinfectant, toiletries, etc etc.
Amazon is a huge company, and this patch isn't some kind of nefarious action. What it does show is myopia: the patch is something that's good for their very limited use-case with cloud servers, but which would be terrible or dangerous in many other scenarios (like desktop, mobile, etc.). Luckily, that's why we have someone like Linus at the helm looking at the big picture before just blindly accepting patches from anyone. Corporate contributors like Amazon aren't thinking "how will my patch for server farms affect embedded systems?", they just see a problem and come up with a fix, and then try to push the fix into the upstream so they don't have to maintain a fork forever.
For Amazon's particular situation, the patch might make a lot of sense, and be the most straightforward solution to the problem. But that doesn't mean it makes any sense for other types of systems; the Linux kernel isn't a highly specialized OS kernel, it's a general-purpose one, so it has to make compromises so that it can work well on all types of systems.
- bezos
- wat
- the patch
- yes?
- it's myopic
- and?
- and retarded and source of headache for everybody
- did the same shitty trick work for systemd?
- uh... I see, I will ship it now
- fingers crossed

And they didn't live happily ever after, because Torvalds.
Amazon are shits and scammers - from experience, I don't trust anything from them. [...]
I bought some legal anime DVDs from them a long time ago. They tipped off local law enforcement about my purchase, and my life was completely destroyed as a result. I lost my job, with no hope of future employment. The neighborhood I reside in became infested with cops, lying in wait for me to commit some criminal act. Word gets around, & now I'm regarded by people I've never even met as a pedo & a stalker. Presumed guilty until death. I had no criminal record then, & still don't. I'm not a registered sex offender, either. Watch what you buy from Amazon. It could cost you your life.
legal anime DVDs
Cool story Kurt [nypost.com]
I bought some legal anime DVDs
Not pirating fansubs like a well-adjusted weeb? You deserved the jmichaelhudson treatment.
So, consuming material is a crime while distributing it for profit isn't? Anyway, you should have kept the DVDs sealed until they were 18 years old, pal.
What's the problem again? People looking for things to get outraged about or something?
People still keep using Intel anyway.
The real solution is: fix the CPUs; and where you can't, fully isolate the kernel and unrelated userspace processes from each other, or isolate userspace applications from performance-related information. Both of these techniques limit or eliminate the issues in question, but cause massive performance regressions akin to running in-order processors again after 20-22 years.
Having said that, the real solution is simply to go back to simplified in-order x86 chips, get rid of all the 'code obfuscation' bullshit so run-time optimizations can be used, and then use profiling to replace hot functions where a better codepath is possible, not unlike Java or Mono already do. With an improved prefetcher and the decreased transistor complexity of an in-order chip, we could have more cores or higher clock rates, and with the right prefetching techniques the cache can be ready when the next line of code needs data without hitting the bus. Another benefit is that in-order execution would reduce or eliminate hard-to-reproduce errors, since race conditions in either hardware or software become fixed, deterministic issues.
But no, performance at all costs must be kept!
Buy AMD, disable SMT, and pray to Linus Torvalds.
Maybe they should look at what IBM does with their POWER architecture. Hell you can reflash some models from big to little endian if you want.
But unlike the x86, the POWER(8+) architecture's CPI stacks provide more than enough granularity for the kernel's scheduler to compensate for cache flushes: https://openpowerfoundation.org/wp-content/uploads/2015/03/Sadasivam-Satish_OPFS2015_IBM_031615_final.pdf [openpowerfoundation.org]
Interesting ideas, but it looks like even POWER are vulnerable to some of the many attack vectors: https://www.ibm.com/blogs/psirt/potential-impact-processors-power-family/ [ibm.com]
Does anyone know, are RISC CPUs generally less vulnerable to attacks?
From only a little bit of research, it looks like Itanium is one of the least vulnerable CPUs out there: https://secure64.com/2018/01/09/not-vulnerable-intel-itanium-secure64-sourcet/ [secure64.com]
then use profiling to replace hot functions where a better codepath is possible
Congrats, you've just described one of the vulnerabilities in the CPU ;)
I remember a time once, in a different millennium, when there was an effort to have a reduced simpler set of machine instructions, and do optimizations in compilers?
But Intel's monopoly took over the world. And made everything worse.
Linux helps by having a huge software base that can (mostly) be recompiled for different architectures. Thus threatening the possibility that alternate ISAs could be used. Restoring Freedom and Competition to the galaxy.
Wasn't Intel's Itanium meant to be exactly this?
So fitting was the nickname that to this day I read the actual name as a reference to the Titanic.
No, Itanium was an IP power grab on Intel's part. The licencing was such that Intel would own the IP of all compilers and their code for Itanium systems. Jumping architectures was a roadblock, but it was really the IP issues that spelled Itanium's doom when AMD offered an alternative with the x86-AMD64 instruction set. (And yes, that is the 64-bit x86 instruction set's name despite Intel's attempt to rename it.)
Except we did end up with RISC processors - with instruction decoders strapped to the front end.
I remember a time once, in a different millennium, when there was an effort to have a reduced simpler set of machine instructions, and do optimizations in compilers? But Intel's monopoly took over the world. And made everything worse.
You have a very distorted view of history. I worked at Intel during this time; that "reduced simpler set of machine instructions" with "optimizations in compilers" is EXACTLY what Intel was trying to push on everyone with the Itanium CPUs and their EPIC architecture. You know what happened? No one wanted these CPUs. Instead, people were screaming for 64-bit CPUs, and Intel told them, "we have those! They're called Itaniums! Just buy those!!" but people didn't want them (for good reasons: expensive, not compatible with x86 except through a crappy little add-on core that was really slow). They also said "if that's too expensive, just use EM64T!" (which was their shitty 64-bit extension on 32-bit x86 CPUs). Then, AMD came up with their 64-bit version of the x86 architecture, and Intel was *forced* to adopt the amd64 ISA.
If you want to blame someone, blame AMD. But AMD was giving customers what they wanted: something much like the existing x86-32 CPUs, but in 64-bit, and with some significant improvements (like a lot more registers). And Intel/HP's attempt at what you describe was a total disaster. Itanium never achieved what it promised; it turned out that trying to do everything in the compiler just isn't feasible.
It wasn't the RISC part of "relying on compiler optimizations" that made the Itanic sink, it was the EPIC part. You know, the part where Intel said "I'll see your reduced instruction set, and I'll raise you a VLIW", thereby turning what-was-supposed-to-be-reduced-and-simple into a giant hellhole of NOPs, because there's only so much parallelism you can extract from everyday, low-level code (which is what everyone is still writing, isn't it?).
Many compilers already had trouble filling MIPS' delayed-branch slot, what was Intel smoking when they decided "it should be a no-brainer for a compiler to always issue three instructions between an assignment to a variable and its use, or between a computation and writing the result of that computation to memory. Oh, and you can't have any branches in between"?
It didn't help that Intel so over-promised the Itanic, and then did some under the table dealing such that Alpha and HP's 64 bit RISC offerings died in anticipation.
Then, of course, when Itanic refused to come even close to its performance promises (even Intel couldn't come up with a compiler that could produce fast code for Itanic), they refused to admit, even to themselves, that it wasn't really worth ~$10K.
Given pricing on Itanic, AMD would have seen market demand for 64 bit extensions to x86 (at x86 prices) either way.
Under the table dealing? I'm not sure what you're talking about; Itanic was a joint venture between HP and Intel. At the time, HP had acquired the Alpha through their acquisition of DEC (or what was left of it), and of course they already had their own 64-bit RISC stuff (PA-RISC). I don't see why they needed any "under the table" dealings to decide they wanted to move to the great, new technology that was Itanium (which of course turned out not to be great at all). Any rational tech company is going to want to end-of-life older, obsolete technologies in favor of newer, better ones. There's a reason Intel doesn't bother with 32-bit CPUs any more, for instance, or the dreadful P4 with Netburst architecture. I don't see how there was anything nefarious here, just bone-headed: they made far too large a gamble that EPIC would work out technically, and it didn't.
Intel's problem was that they were extremely overconfident in their market position (much like IBM a decade earlier), thinking customers were going to stick with them no matter what they did. So they refused to work on good 64-bit extensions to x86, thinking they could just bully customers into buying Itanic, or making-do with EM64T. ("You don't really need anything more than this" was basically what they told customers. This is generally not a good way to run a business; it's the mindset of a monopolist.) Then when AMD came out with x86-64, they hung on for a little while trying to downplay it and convince the public that they didn't need it, but finally they capitulated when they were losing too much marketshare. It was extremely bad leadership at the time, I think largely attributable to Craig Barrett (the CEO). Interestingly, Barrett was an engineer before going into management; his successor, Paul Otellini, really improved things, and he had a finance background IIRC.
Anyway, the whole debacle reminded me of IBM back in the early 90s: they were losing the "clone wars" in desktop PCs, so they got the bright idea of coming out with the PS/2 computers. These were *much* more expensive than clones, and also very incompatible, with 2.88MB floppy drives (which were backwards compatible with 1.44MB and 720k disks), and the MCA bus architecture, which they had patented and expansion cards for which cost a fortune. They refused to join with the rest of industry and use the competing EISA standard (an upgrade to the older ISA which the XT and AT used). Somehow they thought that the market would simply abandon the clones and buy these much more expensive IBMs and accessories for them, which of course didn't happen. Around a half-decade later, IBM finally threw in the towel and sold their PC business off to Lenovo.
Hanlon's razor has a tendency to come down to a judgement call, but to me it looks an awful lot like Chinese walls to obscure a simple pay-the-competition-to-leave-the-market play, which would surely have triggered anti-trust investigations without the indirection. Of course, when Itanic sank, any such investigation became moot anyway.
Honestly, Itanic's fate shouldn't have been that much of a surprise. After all, the 8086 was supposed to be an I/O co-processor for the much more baroque iAPX (not unlike channel I/O in an IBM) but the iAPX432 was such a lumbering Hippo that engineers noticed it was faster to offload the computation to the channel processors (8086).
Basically, IBM, Intel, and then Microsoft more or less stumbled into relevance.
but to me it looks an awful lot like chinese walls to obscure a simple pay the competition to leave the market play, which would surely have triggered anti-trust investigations
Again, I do not understand what you're getting at here. Intel did not pay off AMD in any way: AMD released the x86-64. And Intel wasn't in competition with HP: they teamed up and worked on Itanium jointly. HP engineers were working with Intel engineers on the project.
Intel wasn't in competition with HP, but if they didn't throw HP a bone, they would be when they got into PA-RISC's market space. Or DEC's. You don't think it's a coincidence that 2 significant 64 bit CPUs just happened to exit the market when Intel wanted to introduce Itanic, do you?
AMD shouldered its way in later, when they saw that Intel was in a tar pit with 64-bit for the high-end server market, giving them a chance to grab the high-end PC/workstation market with the 64-bit x86-compatible CPU everyone really wanted.
You don't think it's a coincidence that 2 significant 64 bit CPUs just happened to exit the market when Intel wanted to introduce Itanic, do you?
You need to go re-read your history. 2 significant 64-bit CPUs exited the market because the companies that owned them wanted to replace them with a newer, better one. It's as simple as that. You mention DEC. There was no DEC at this time. DEC died (probably due to a combination of mismanagement, other big-iron competitors, and the big-iron market drying up due to x86 servers), and pieces of it were bought up by both HP and Intel. HP and Intel decided to partner and work on Itanium. This wasn't some kind of conspiracy, it's what businesses do sometimes. It's no different than Chrysler and Mitsubishi forming Diamond Star Motors and building cars together. There's no law preventing businesses from forming partnerships.
No, not really: Intel was trying to push the server market to Itanium and not getting much success because it didn't perform as well as promised and it cost a fortune. Intel didn't want there to be any cheap 64-bit CPUs on the market, and wanted the high profits that Itanium promised, and tried to act like a monopolist and force customers to use Itanium. HP partnered with them on this scheme. This is why they killed both PA-RISC and Alpha instead of continuing development of them. But Intel was arrogant and stupid, and somehow didn't think AMD would simply make their own 64-bit x86, so when they did, they were caught off-guard. They tried to downplay it for a while and convince customers they didn't really need it and that Itanic was better, but of course that didn't go very far. Basically, Intel showed a lot of hubris at the time, and it bit them in the ass.
Your last paragraph actually summarized my position. HP and Intel did their best to kill competing processors so they could drive the price for 64 bit CPUS higher, but Itanic wasn't up to the task. AMD saw an opportunity to undercut them with a CPU that would definitely perform at least as well on existing 32 bit code as the current line of processors AND pave the way to 64 bit on the desktop.
The HP-Intel partnership was a typical corporate business deal forged in Hell. Intel believed they could design and fab the processor, but needed HP to kill off competing CPUs: both HP's own PA-RISC and, by buying it up and shelving it, Alpha. The "technical collaboration" was mostly paper to wipe the smell of sulfur off the deal.
To this day, Intel is still gagging on the bitter pill of having to license x86-AMD64 from AMD.
HP and Intel did their best to kill competing processors so they could drive the price for 64 bit CPUS higher, but Itanic wasn't up to the task. AMD saw an opportunity to undercut them with a CPU that would definitely perform at least as well on existing 32 bit code as the current line of processors AND pave the way to 64 bit on the desktop.
Yes, I'd say this is accurate. But it wasn't some kind of secretive collusion or anything like that; DEC folded, the pieces went to HP and Intel, and they worked together (very much in the open) to develop and push Itanic as the new chip for big servers. Intel completely ignored customer calls for a 64-bit processor as computing needs were growing too large for the 4GB memory limit of the time, and tried to push PAE as a good-enough workaround. AMD saw the market demand and made a chip that the market wanted.
All the cloud folks can go back to using Pentium MMX for their glorious VMs. Single-user machines that run the same code everyday can use the leaky, melty, high-performance CPUs.
At least that's my take on his commentary per Phoronix [phoronix.com]. He still really doesn't understand why using intentionally incendiary language is not the smart thing to do.
He kept all his inflammatory remarks for the option/effect itself; I don't see the big problem.
Why not? If you've got a problem with someone saying it how it is, perhaps the problem is with you? To be really rude you need to employ the dual daggers of insincere politeness and sophistry neither of which have any place in engineering. And tell us, if being an arrogant ass is a problem why do corporatist CoC suckers expect everyone to walk on eggshells around such asinine narcissism as enforced "gender pronouns"?
Actually, he communicated very effectively. The submitter knows it is certainly not accepted, exactly why it is not accepted (so no wasting time addressing the wrong issue) and that kinder, gentler implementations of the same approach will not cut it.