It seems Intel has had some second thoughts about Spectre 2 microcode fixes:
Intel has issued a new "microcode revision guidance" that confesses it won't address the Meltdown and Spectre design flaws in all of its vulnerable processors – in some cases because it's too tricky to remove the Spectre v2 class of vulnerabilities.
The new guidance (pdf), issued April 2, adds a "stopped" status to Intel's "production status" category in its array of available Meltdown and Spectre security updates. "Stopped" indicates there will be no microcode patch to kill off Meltdown and Spectre.
The guidance explains that a chipset earns "stopped" status because, "after a comprehensive investigation of the microarchitectures and microcode capabilities for these products, Intel has determined to not release microcode updates for these products for one or more reasons."
Those reasons are given as:
- Micro-architectural characteristics that preclude a practical implementation of features mitigating [Spectre] Variant 2 (CVE-2017-5715)
- Limited Commercially Available System Software support
- Based on customer inputs, most of these products are implemented as "closed systems" and therefore are expected to have a lower likelihood of exposure to these vulnerabilities.
Thus, if a chip family falls under one of those categories – such as Intel can't easily fix Spectre v2 in the design, or customers don't think the hardware will be exploited – it gets a "stopped" sticker. To leverage the vulnerabilities, malware needs to be running on a system, so if the computer is totally closed off from the outside world, administrators may feel it's not worth the hassle applying messy microcode, operating system, or application updates.
"Stopped" CPUs that won't therefore get a fix are in the Bloomfield, Bloomfield Xeon, Clarksfield, Gulftown, Harpertown Xeon C0 and E0, Jasper Forest, Penryn/QC, SoFIA 3GR, Wolfdale, Wolfdale Xeon, Yorkfield, and Yorkfield Xeon families. The list includes various Xeons, Core CPUs, Pentiums, Celerons, and Atoms – just about everything Intel makes.
Most of the CPUs listed above are oldies that went on sale between 2007 and 2011, so it is likely few remain in normal use.
(Score: 4, Interesting) by Subsentient on Wednesday April 04 2018, @10:08AM (13 children)
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
(Score: 2) by cockroach on Wednesday April 04 2018, @10:28AM
Same here, my main desktop and one of my laptops use these CPUs. While the laptop is not currently in use (it's a Thinkpad X200S waiting for the Libreboot treatment), the desktop machine certainly is.
(Score: 0) by Anonymous Coward on Wednesday April 04 2018, @10:45AM
Now you get to pick from one "can't fix" and two "won't fix" reasons for being screwed. Oh, the wealth of choices!
(Score: 5, Insightful) by bobthecimmerian on Wednesday April 04 2018, @11:00AM (7 children)
In 2008 the thought of using a CPU from 1997-2001 for anything other than nostalgia's sake was absurd. But the rate of performance improvement from 2007 to now is entirely different - you could probably get another five years out of that Core 2 Quad as your main desktop CPU. I have a desktop collecting dust in my house with an AMD dual core from 2006 and 4GB of RAM, and if I put an SSD into it you could use it for web surfing and document editing without much hassle.
I'm a big believer in 'reduce, reuse, recycle' and I'm saddened at the thought of the massive drop in value of perfectly serviceable used CPU parts over this.
(Score: 3, Informative) by Subsentient on Wednesday April 04 2018, @11:16AM (3 children)
My main rig is a heavily mutilated 2008 Dell Optiplex 755 with 8GB of DDR2 RAM, a 2TB SATA mechanical drive, a standard 400W ATX PSU that doesn't even fit in the slimline case and hangs out the back, the CPU upgraded from the original Core 2 Duo E6550, and a cheap 2011 Radeon GPU for the light gaming I do. I also added a PCI (not PCIe) USB 2.0 controller and another front-mounted 5-port hub, giving me a total of 10 extra USB ports, which I use a lot.
This unholy monstrosity manages to absolutely demolish any 2018 cheapo Walmart PC, and even stomps my 1st gen Core i5 Thinkpad with ease - a more expensive, business-class machine that's 3 years newer. I think that's pretty good proof that Moore's law is dead and rotting.
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
(Score: 2) by Dr Spin on Wednesday April 04 2018, @12:28PM (1 child)
I also have an Optiplex 755 as a "hot standby" office machine. It's notably slower at CAD than my own workstation - which has an SSD - but for office work (browser, LibreOffice) it's barely different.
My family owns about six Thinkpad T61's. Almost all are running Ubuntu Mate; the rest dual-boot XP or Win7 where some elderly piece of software insists (e.g. for driving an embroidery machine). Although T61's don't have actual serial ports, you can get a dock which has one.
My own, a T61p, has an extra hard drive where the CD drive can go - to run *BSD.
Can't see that new machines would be a big improvement. Had a look in PC World in December, and the specs were similar to what we have (they will be old enough to attend secondary school in September ;-)
Our T61's all have variations on the Merom theme (T7xxx). Are these safe, or unfixable? It would be interesting to know.
Warning: Opening your mouth may invalidate your brain!
(Score: 3, Interesting) by hamsterdan on Wednesday April 04 2018, @09:22PM
My HTPC is running an HP (Asus) workstation board with 8GB of DDR2, a Q6600 Core 2 Quad, a Radeon 7750, a 120GB OCZ Agility2 SSD for the OS, and a 1TB WD drive. It records TV with a QuadHD from Hauppauge (ATSC). Sound is handled by a Soundblaster Xfi with DTS encoding to my AVR via optical. Onboard sound is routed to my vintage stereo gear for my Airport emulator. Not as fast as my i7 tower, but it gets the job done.
Playing with OSX on an old Optiplex 745 too; no need for a more recent machine.
(Score: 3, Insightful) by bobthecimmerian on Wednesday April 04 2018, @06:37PM
The speed improvement from running an SSD as your primary drive is tremendous, so if you can budget for one it's worthwhile. Otherwise, nice rig. I go big on power supplies, because they're about the only things that reliably cause serious problems when I buy the cheaper ones. My video card is a mid-range 2010 Radeon GPU, an HD 5770 that was maybe $180 new :)
(Score: 3, Interesting) by LoRdTAW on Wednesday April 04 2018, @11:58AM
I'm running an i7 2600k and have ZERO plans to upgrade because I honestly have no reason. Everything I ask for, the machine gives me.
(Score: 2) by SomeGuy on Thursday April 05 2018, @04:42AM (1 child)
It used to be that Intel and AMD would have to wait for Microsoft to break things and slow things down before users would consider "upgrading". Well, now they have got their hardware aboard the security upgrade train.
This is what we will be looking forward to in the future. Throw out old hardware because after five minutes the hardware itself is "insecure".
(Score: 2) by bobthecimmerian on Friday April 06 2018, @02:04PM
When I was in college and just after, I thought Microsoft made Windows slow down on purpose. Now that I have a lot more experience with software, I'm 99% sure it went this way: Microsoft engineers wrote the software, and it started to slow down due to registry bloat and NTFS inefficiency when dealing with lots of files. That was an accident. Their lack of interest in fixing it once they understood the problem - that was on purpose.
(Score: 0) by Anonymous Coward on Wednesday April 04 2018, @12:12PM (1 child)
Use the nopti kernel command-line boot option to forcefully disable the software fixes and get the performance back - with an understanding, of course, of the risks of doing so. Check, analyse, and understand whether Spectre/Meltdown really are a big deal for your particular use case and computer, weigh up the options, and then patch or don't patch accordingly. But don't just blindly turn the fixes off. (Or on, either.)
e.g. A few options to think about:
- Are you on a single-user physical machine used only by yourself? Then leaving the vulnerabilities unpatched is perhaps no big deal; you have other security concerns anyway if you frequently run less-than-trustworthy code, or code which is likely to target these exploits.
- Need to run such potentially exploitable code? Run it in a VM which has the software fixes, and worry less about the hypervisor. Or run it on its own physical hardware, if you have such resources.
- Running a mix of more-trusted and less-trusted services, or services with different attack surfaces, on the same machine? Separate and virtualise them, then treat them for this flaw individually.
- Already using virtualisation to mix public- and non-public-facing systems on the same hardware, and worried about an unrelated bug letting an attacker first compromise the public VM and then use this exploit to jump from the public VM to the non-public ones? Fine: patch the hypervisor, patch the less-trusted VM, but leave the trusted VM unpatched so at least that one doesn't take a double-whammy performance hit.
- Or heck, why not patch the hypervisor only, and leave all VMs unpatched? If a public VM is compromised by different means, and you've segregated services enough, why does it matter as much if the exploit can be used within the VM, as long as the hypervisor still prevents it reaching out to other more trusted VMs?
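For the Linux case above, a minimal sketch of what disabling the fixes at boot can look like on a GRUB-based distro. The option names (nopti, nospectre_v2) are those used by kernels around 4.15; check Documentation/admin-guide/kernel-parameters.txt for your kernel before relying on them, and the update command differs per distro:

```shell
# Illustrative config fragment only - do this knowing the risks.
# In /etc/default/grub, append the options to the default kernel command line:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash nopti nospectre_v2"

# Then regenerate the GRUB config and reboot, e.g.:
#   sudo update-grub                                   # Debian/Ubuntu
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg        # Fedora/RHEL-style
```

For a one-off test you can instead edit the kernel line from the GRUB boot menu, which doesn't persist across reboots.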
Other than for cloud services and organisations which consolidate what should be independent systems with their own trust boundaries onto the same physical hardware, I think this issue is way overblown for the average single-user computing device. These hardware bugs basically break down trust boundaries between systems or applications where we expect them to be enforced. So if we approach the issue from this trust viewpoint rather than the typical "we must fix all bugs and apply all patches at any cost!", then we can take a more sensible approach to how they are handled, or if they even need to be handled in individual scenarios. Sure, cloud vendors etc make up a very large chunk of affected use cases these days, so the effort being put into mitigation is understandable. But it doesn't mean those fixes are necessarily appropriate in absolutely all cases. Security comes in layers, and I feel the layer provided by this fix is overkill in many cases when there are other layers in place.
At least Linux gives us users the option of whether we want the fixes enabled or not; the same cannot be as easily said for the vendors of more closed software systems.
Also, please someone correct me if I'm wrong and am spouting nonsense in how to approach this vulnerability, or in how I appear to understand how it manifests itself and the impacts of it.
(Score: 2) by Subsentient on Wednesday April 04 2018, @01:20PM
"It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
(Score: 2) by requerdanos on Wednesday April 04 2018, @07:36PM
A random survey of my digital domain (the seven computers at my desk) shows four affected PCs, according to /proc/cpuinfo under Linux 4.15 or 4.16:
model name : Intel(R) Core(TM)2 Duo CPU E7300 @ 2.66GHz
bugs : cpu_meltdown spectre_v1 spectre_v2
model name : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
bugs : cpu_meltdown spectre_v1 spectre_v2
model name : AMD Phenom(tm) II P820 Triple-Core Processor
bugs : tlb_mmatch apic_c1e fxsave_leak sysret_ss_attrs null_seg amd_e400 spectre_v1 spectre_v2
model name : Intel(R) Core(TM) i5 CPU M 540 @ 2.53GHz
bugs : cpu_insecure
(The other three PCs are ARM single-board affairs (a NanoPC T3 and two Olinuxino Lime2s).)
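Running the same survey across a pile of machines is easy to script, since /proc/cpuinfo is plain key-colon-value text (the "bugs" field only appears on Linux 4.15 and later; this helper and its names are just an illustration, not anything standard):

```python
def cpu_bug_report(cpuinfo_text):
    """Extract {model name: [bug flags]} from /proc/cpuinfo-style text,
    keeping one entry per distinct CPU model reported."""
    report = {}
    model = None
    for line in cpuinfo_text.splitlines():
        if ":" not in line:
            continue  # blank separator between per-core stanzas
        key, _, value = line.partition(":")
        key, value = key.strip(), value.strip()
        if key == "model name":
            model = value
        elif key == "bugs" and model is not None:
            report[model] = value.split()
    return report

# On a real system: report = cpu_bug_report(open("/proc/cpuinfo").read())
sample = """\
model name\t: Intel(R) Core(TM)2 Duo CPU E7300 @ 2.66GHz
bugs\t: cpu_meltdown spectre_v1 spectre_v2
"""
print(cpu_bug_report(sample))
```

Feeding it the E7300 lines above yields a single entry mapping that model to its three bug flags, and duplicate stanzas from multi-core CPUs collapse into one.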
None of these is likely to be patched to remove vulnerabilities, yet none are scheduled for replacement anytime soon. Heck, that Xeon E5 above was just put into service yesterday, replacing a newer-but-slower Ryzen R7 1700X.
I remember the good old days when "bugs" said F00F and FDIV.