
posted by Fnord666 on Wednesday April 04 2018, @08:46AM   Printer-friendly
from the defect-closed-will-not-fix dept.

It seems Intel has had some second thoughts about Spectre 2 microcode fixes:

Intel has issued a new "microcode revision guidance" that confesses it won't address the Meltdown and Spectre design flaws in all of its vulnerable processors – in some cases because it's too tricky to remove the Spectre v2 class of vulnerabilities.

The new guidance (pdf), issued April 2, adds a "stopped" status to Intel's "production status" category in its array of available Meltdown and Spectre security updates. "Stopped" indicates there will be no microcode patch to kill off Meltdown and Spectre.

The guidance explains that a chipset earns "stopped" status because, "after a comprehensive investigation of the microarchitectures and microcode capabilities for these products, Intel has determined to not release microcode updates for these products for one or more reasons."

Those reasons are given as:

  • Micro-architectural characteristics that preclude a practical implementation of features mitigating [Spectre] Variant 2 (CVE-2017-5715)
  • Limited Commercially Available System Software support
  • Based on customer inputs, most of these products are implemented as "closed systems" and therefore are expected to have a lower likelihood of exposure to these vulnerabilities.

Thus, if a chip family falls under one of those categories – such as Intel can't easily fix Spectre v2 in the design, or customers don't think the hardware will be exploited – it gets a "stopped" sticker. To leverage the vulnerabilities, malware needs to be running on a system, so if the computer is totally closed off from the outside world, administrators may feel it's not worth the hassle of applying messy microcode, operating system, or application updates.

CPUs given "stopped" status, and which therefore won't get a fix, are in the Bloomfield, Bloomfield Xeon, Clarksfield, Gulftown, Harpertown Xeon C0 and E0, Jasper Forest, Penryn/QC, SoFIA 3GR, Wolfdale, Wolfdale Xeon, Yorkfield, and Yorkfield Xeon families. The list includes various Xeons, Core CPUs, Pentiums, Celerons, and Atoms – just about everything Intel makes.

Most [of] the CPUs listed above are oldies that went on sale between 2007 and 2011, so it is likely few remain in normal use.


Original Submission

 
  • (Score: 1, Disagree) by anubi on Wednesday April 04 2018, @09:56AM (2 children)

    by anubi (2828) on Wednesday April 04 2018, @09:56AM (#662414) Journal

    One thing I would love to know... are these "vulnerabilities" opened up as a remedy for other people wanting to snoop machines in order to snoop or verify copyright?

    Is this the price I have to pay as Intel "works with" the RIAA and NSA to play whack-a-mole with song "thieves"?

    --
    "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
    • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @11:25AM

      by Anonymous Coward on Wednesday April 04 2018, @11:25AM (#662430)

      No, they are too unreliable for that.

    • (Score: 1, Insightful) by Anonymous Coward on Wednesday April 04 2018, @05:43PM

      by Anonymous Coward on Wednesday April 04 2018, @05:43PM (#662565)

      One thing I would love to know... are these "vulnerabilities" opened up as a remedy for other people wanting to snoop machines in order to snoop or verify copyright?

      Why would they do that when your machine already has the Intrusion Intel Management Engine installed?

  • (Score: 4, Interesting) by Subsentient on Wednesday April 04 2018, @10:08AM (13 children)

    by Subsentient (1111) on Wednesday April 04 2018, @10:08AM (#662415) Homepage Journal
I use a number of Yorkfield CPUs, including a Core 2 Quad Q9500 in my main desktop. I've been hit hard by the cost of the Linux kernel mitigations for Meltdown and the retpoline compiler option for Spectre variant 2. Guess there's no hope of reclaiming some of that performance.
    --
    "It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
    • (Score: 2) by cockroach on Wednesday April 04 2018, @10:28AM

      by cockroach (2266) on Wednesday April 04 2018, @10:28AM (#662419)

      Same here, my main desktop and one of my laptops use these CPUs. While the laptop is not currently in use (it's a Thinkpad X200S waiting for the Libreboot treatment), the desktop machine certainly is.

    • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @10:45AM

      by Anonymous Coward on Wednesday April 04 2018, @10:45AM (#662423)

Now you get to pick from one "can't fix" and two "won't fix" reasons for being screwed. Oh, the choices!

    • (Score: 5, Insightful) by bobthecimmerian on Wednesday April 04 2018, @11:00AM (7 children)

      by bobthecimmerian (6834) on Wednesday April 04 2018, @11:00AM (#662427)

In 2008, the thought of using a CPU from 1997-2001 for anything other than nostalgia's sake was absurd. But the rate of performance improvement from 2007 to now is entirely different – you could probably get another five years out of that Core 2 Quad as your main desktop CPU. I have a desktop collecting dust in my house with an AMD dual core from 2006 and 4GB of RAM, and if I put an SSD into it you could use it for web surfing and document editing without much hassle.

      I'm a big believer in 'reduce, reuse, recycle' and I'm saddened at the thought of the massive drop in value of perfectly serviceable used CPU parts over this.

      • (Score: 3, Informative) by Subsentient on Wednesday April 04 2018, @11:16AM (3 children)

        by Subsentient (1111) on Wednesday April 04 2018, @11:16AM (#662428) Homepage Journal
I agree, CPUs have pretty much plateaued in performance, apart from small, incremental improvements. The Core 2 Quad can actually hold its own surprisingly well against much newer CPUs, even today.

My main rig is a heavily mutilated 2008 Dell Optiplex 755 with 8GB of DDR2 RAM, a 2TB SATA mechanical drive, a standard 400W ATX PSU that doesn't even fit in the slimline case and hangs out the back, a CPU upgraded from the original Core 2 Duo E6550, and a cheap 2011 Radeon GPU for the light gaming I do. I also added a PCI (not PCIe) USB 2.0 controller and another front-mounted 5-port hub, giving me a total of 10 extra USB ports, which I use a lot.

This unholy monstrosity manages to absolutely demolish any 2018 cheapo Walmart PC, and even stomps my 1st-gen Core i5 ThinkPad with ease, a more expensive, business-class machine that's 3 years newer. I think that's pretty good proof that Moore's law is dead and rotting.

        --
        "It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
        • (Score: 2) by Dr Spin on Wednesday April 04 2018, @12:28PM (1 child)

          by Dr Spin (5239) on Wednesday April 04 2018, @12:28PM (#662446)

I also have an Optiplex 755 as a "hot standby" office machine. It's notably slower at CAD than my own workstation, which has an SSD. But for office work (browser, LibreOffice) it's barely different.

My family owns about six ThinkPad T61s. Almost all are running Ubuntu MATE; the rest dual-boot XP or Win7 where some elderly piece of software insists (e.g. for driving an embroidery machine). Although T61s don't have actual serial ports, you can get a dock which has one. My own, a T61p, has an extra hard drive where the CD drive can go, to run *BSD.

Can't see that new machines would be a big improvement. Had a look in PC World in December, and the specs were similar to what we have (ours will be old enough to attend secondary school in September ;-).

Our T61s all have variations on the Merom theme (T7xxx). Are these safe, or unfixable? It would be interesting to know.

          --
          Warning: Opening your mouth may invalidate your brain!
          • (Score: 3, Interesting) by hamsterdan on Wednesday April 04 2018, @09:22PM

            by hamsterdan (2829) on Wednesday April 04 2018, @09:22PM (#662640)

My HTPC is running an HP (Asus) workstation board with 8GB DDR2, a Q6600 Core 2 Quad, a Radeon 7750, a 120GB OCZ Agility 2 SSD for the OS, and a 1TB WD drive. It records TV with a QuadHD tuner from Hauppauge (ATSC). Sound is handled by a SoundBlaster X-Fi with DTS encoding to my AVR via optical. Onboard sound is routed to my vintage stereo gear for my AirPort emulator. Not as fast as my i7 tower, but it gets the job done.

Playing with OS X on an old Optiplex 745 too; no need for a more recent machine.

        • (Score: 3, Insightful) by bobthecimmerian on Wednesday April 04 2018, @06:37PM

          by bobthecimmerian (6834) on Wednesday April 04 2018, @06:37PM (#662577)

The speed improvement from running an SSD as your primary drive is tremendous, so if you can budget it, it's worthwhile. Otherwise, nice rig. I go big on power supplies, because they're about the only things that reliably cause serious problems when I buy the cheaper ones. My video card is a mid-range 2010 Radeon GPU, an HD 5770 that was maybe $180 new :)

      • (Score: 3, Interesting) by LoRdTAW on Wednesday April 04 2018, @11:58AM

        by LoRdTAW (3755) on Wednesday April 04 2018, @11:58AM (#662438) Journal

        I'm running an i7 2600k and have ZERO plans to upgrade because I honestly have no reason. Everything I ask for, the machine gives me.

      • (Score: 2) by SomeGuy on Thursday April 05 2018, @04:42AM (1 child)

        by SomeGuy (5632) on Thursday April 05 2018, @04:42AM (#662778)

        It used to be that Intel and AMD would have to wait for Microsoft to break things and slow things down before users would consider "upgrading". Well, now they have got their hardware aboard the security upgrade train.

        This is what we will be looking forward to in the future. Throw out old hardware because after five minutes the hardware itself is "insecure".

        • (Score: 2) by bobthecimmerian on Friday April 06 2018, @02:04PM

          by bobthecimmerian (6834) on Friday April 06 2018, @02:04PM (#663422)

          When I was in college and just after college, I thought Microsoft made Windows slow down on purpose. Now that I have a lot more experience with software, I'm 99% sure it went this way: Microsoft engineers wrote the software, and it started to slow down due to registry bloat and NTFS inefficiency when dealing with lots of files. That was an accident. Their disinterest in fixing it once they understood the problem, that was on purpose.

    • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @12:12PM (1 child)

      by Anonymous Coward on Wednesday April 04 2018, @12:12PM (#662443)

      Guess there's no hope of reclaiming some of that performance.

Use the nopti kernel command-line boot option to forcefully disable the software fixes and get the performance back. Do so, of course, with an understanding of the risks. Check, analyse, and understand whether Spectre/Meltdown really are such a big deal for your particular use-case and computer, weigh up the options, and then patch or don't patch accordingly. But don't just blindly turn the fixes off. (Or on, either.)
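Before deciding either way, it's worth checking what the running kernel itself reports. A minimal sketch (the sysfs vulnerabilities files only exist on kernels from roughly 4.15 onward, and the GRUB note is an assumption about a Debian-style setup):

```shell
#!/bin/sh
# List the CPU flaws the running kernel knows about, and the mitigation
# state it reports for each (e.g. "Mitigation: PTI" or "Vulnerable").
show_mitigations() {
    dir=/sys/devices/system/cpu/vulnerabilities
    if [ -d "$dir" ]; then
        for f in "$dir"/*; do
            printf '%s: %s\n' "$(basename "$f")" "$(cat "$f")"
        done
    else
        echo "kernel too old to report vulnerability status"
    fi
}

show_mitigations

# To disable page-table isolation on the next boot (GRUB-based distros),
# add "nopti" to GRUB_CMDLINE_LINUX in /etc/default/grub, then run
# update-grub and reboot.
```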


      e.g. A few options to think about:

- You on a single-user physical machine used just by yourself? Perhaps it's no big deal then to leave the vulnerabilities unpatched; you have other security concerns anyway if you frequently run less-than-trustworthy code, or code which is likely to target these exploits.

- You need to run such potentially exploitable code? Run it in a VM which has the software fixes, and don't worry so much about the hypervisor. Or run it on its own physical hardware if you have such resources.

- You running a mix of more trusted and less trusted services, or services which have different attack surfaces, on the same machine? Separate and virtualise them, then treat them for this flaw individually.

- You already use virtualisation and mix public- and non-public-facing systems on the same hardware, and worry about an independent bug allowing an attacker to first compromise the public VM and then use this exploit to jump from the public VM to the non-public ones? Fine: patch the hypervisor, patch the less-trusted VM, but leave the trusted VM unpatched so at least that one doesn't take a double-whammy performance hit.

      - Or heck, why not patch the hypervisor only, and leave all VMs unpatched? If a public VM is compromised by different means, and you've segregated services enough, why does it matter as much if the exploit can be used within the VM, as long as the hypervisor still prevents it reaching out to other more trusted VMs?



Other than for cloud services and organisations which consolidate what should be independent systems, with their own trust boundaries, onto the same physical hardware, I think this issue is way overblown for the average single-user computing device. These hardware bugs basically break down trust boundaries between systems or applications where we expect them to be enforced. So if we approach the issue from this trust viewpoint, rather than the typical "we must fix all bugs and apply all patches at any cost!", then we can take a more sensible approach to how they are handled, or whether they even need to be handled in individual scenarios. Sure, cloud vendors etc. make up a very large chunk of affected use cases these days, so the effort being put into mitigation is understandable. But that doesn't mean those fixes are necessarily appropriate in absolutely all cases. Security comes in layers, and I feel the layer provided by this fix is overkill in many cases where other layers are in place.

At least Linux gives us users the option of whether we want the fixes enabled or not; the same cannot be as easily said for the vendors of more closed software systems.


      Also, please someone correct me if I'm wrong and am spouting nonsense in how to approach this vulnerability, or in how I appear to understand how it manifests itself and the impacts of it.

      • (Score: 2) by Subsentient on Wednesday April 04 2018, @01:20PM

        by Subsentient (1111) on Wednesday April 04 2018, @01:20PM (#662461) Homepage Journal
Yeah, I know about that option but I'm not gonna do that. I'm at the security-paranoia level where, while I disable SELinux, I keep my firewall rather locked down and constantly check for security updates. I'd never be able to rest easy if I disabled PTI. My next system will likely be an AMD Ryzen of some sort, so while I might still be stuck with Spectre, at least I won't have to deal with PTI.
        --
        "It is no measure of health to be well adjusted to a profoundly sick society." -Jiddu Krishnamurti
    • (Score: 2) by requerdanos on Wednesday April 04 2018, @07:36PM

      by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @07:36PM (#662601) Journal

      A random survey of my digital domain (the seven computers at my desk) shows four affected PCs, according to /proc/cpuinfo under Linux 4.15 or 4.16:

      model name : Intel(R) Core(TM)2 Duo CPU E7300 @ 2.66GHz
      bugs : cpu_meltdown spectre_v1 spectre_v2

      model name : Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz
      bugs : cpu_meltdown spectre_v1 spectre_v2

      model name : AMD Phenom(tm) II P820 Triple-Core Processor
      bugs : tlb_mmatch apic_c1e fxsave_leak sysret_ss_attrs null_seg amd_e400 spectre_v1 spectre_v2

      model name : Intel(R) Core(TM) i5 CPU M 540 @ 2.53GHz
      bugs : cpu_insecure

      (The other three PCs are ARM single-board affairs (a NanoPC T3 and two Olinuxino Lime2s).)

      None of these is likely to be patched to remove vulnerabilities, yet none are scheduled for replacement anytime soon. Heck, that Xeon E5 above was just put into service yesterday, replacing a newer-but-slower Ryzen R7 1700X.

      I remember the good old days when "bugs" said F00F and FDIV.
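For what it's worth, a survey like the one above can be scripted; a small sketch reading the same /proc/cpuinfo fields (the exact bug strings vary with kernel version, and ARM boards may lack a "model name" line entirely):

```shell
#!/bin/sh
# Print the first model-name and bugs lines from /proc/cpuinfo,
# mirroring the per-machine survey format above.
model=$(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2-)
bugs=$(grep -m1 '^bugs' /proc/cpuinfo | cut -d: -f2-)
printf 'model name :%s\n' "${model:- unknown}"
printf 'bugs       :%s\n' "${bugs:- (none reported)}"
```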

  • (Score: 3, Interesting) by looorg on Wednesday April 04 2018, @10:52AM (6 children)

    by looorg (578) on Wednesday April 04 2018, @10:52AM (#662425)

So are these flaws present in their current and new CPUs, or have they managed to eradicate them from those? Or is it some inherent flaw in the design that they just can't get rid of without a complete redesign?

    • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @11:51AM (1 child)

      by Anonymous Coward on Wednesday April 04 2018, @11:51AM (#662434)

      Time to migrate to in-order ARM chips and call it a day. Intel screwed the pooch big time on this one and it is time to make the plunge and lose a bit of performance for guaranteed security. It is what I am doing.

Sadly, PPC G5 chips have the same issue, and while the odds of attacks against them are low, it means that old Mac G5 systems are just as risky to use online or with a web browser (even if the JavaScript engine has mitigations, as soon as an exploit reaches native code execution they can use Spectre either way).

If you want an easy way to mitigate these issues, all you really need to do is change the ondemand scheduler in Linux to vary the CPU clock between steppings even under full load, which will throw off the timing for the Spectre attacks. You will get a stuttery experience if you are running something that truly demands the higher clock rate to perform smoothly, but for everything else it will just act as a Spectre mitigation.

      • (Score: 2) by requerdanos on Wednesday April 04 2018, @10:14PM

        by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @10:14PM (#662661) Journal

        If you want an easy way to mitigate these issues, all you really need to do is change the ondemand scheduler in linux

        The intel_pstate driver [stackexchange.com] only offers two governors: performance and powersave. Neither does what you might expect, as frequencies vary by load with both of them.

        Disabling intel_pstate and using ACPI brings back the ondemand governor, but with reduced performance. From the link above:

The intel_pstate driver knows the details of how the CPU works, and it does a better job than the generic ACPI solution. intel_pstate offers only two governors, powersave and performance. Intel claims that intel_pstate "powersave" is faster than the generic ACPI governor with "performance"

I can confirm that the intel_pstate powersave governor, somewhat counter-intuitively, keeps my Xeon E5v2 chip running at about 2.8GHz, its rated max speed. (The performance governor seems to run it at about 3.1-3.2 GHz.)

        Point being, I guess, that even if that's a good mitigation technique, it comes with its own toils and troubles.

        (And I would rather have a new governor, say "psycho" or something, modeled on the ondemand governor but with the unpredictability, rather than messing with ondemand itself, anyway.)
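For anyone wanting to check which driver and governor they're actually on before experimenting, both are exposed in sysfs. A sketch, assuming a cpufreq interface is present at all (it often isn't inside VMs):

```shell
#!/bin/sh
# Report the cpufreq scaling driver and governor for cpu0, if exposed.
report_cpufreq() {
    d=/sys/devices/system/cpu/cpu0/cpufreq
    if [ -d "$d" ]; then
        echo "driver:    $(cat "$d/scaling_driver")"
        echo "governor:  $(cat "$d/scaling_governor")"
        echo "available: $(cat "$d/scaling_available_governors" 2>/dev/null || echo 'n/a')"
    else
        echo "no cpufreq interface exposed"
    fi
}

report_cpufreq

# Booting with intel_pstate=disable on the kernel command line falls back
# to the acpi-cpufreq driver, which brings the ondemand governor back.
```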

    • (Score: 4, Informative) by bzipitidoo on Wednesday April 04 2018, @01:21PM (2 children)

      by bzipitidoo (4388) on Wednesday April 04 2018, @01:21PM (#662462) Journal

It's an inherent flaw in the dirty way speculative execution was implemented, which goes all the way back to the Pentium Pro in the mid '90s. It also affects AMD and ARM: any company that employed speculative execution in their architecture and couldn't resist skipping a few checks to save time, which I think was just about all of them. If you want an Intel chip that is unaffected by Spectre, you have to go all the way back to the original Pentium. One thing the first Pentiums became infamous for was that division bug, but it shouldn't be too hard to avoid. Sometimes manufacturers buy older architectures, and chip makers helpfully produce them on the smaller processes available today; for instance, in recent times you could get a 100 MHz 486. I have no idea whether there are any new Pentiums available in, say, a 32nm process that can run at 1GHz, but there could be such a thing.

I can tell you that a 133 MHz Pentium MMX system with 96M of RAM (the max that system can support) is excruciatingly, painfully slow with modern software. The last time I fooled with that platform, I tried to run Firefox 3.5 on it. It worked, but it took 30 seconds just to start up with a blank page. On that machine, Stellarium took 5 minutes to start, and was so slow it was unusable. 96M just isn't enough RAM any more; you need at least 512M.

Intel isn't planning to fix this problem immediately. We may have to wait for the Tiger Lake architecture, perhaps in 2020. AMD chips are subject to fewer of the vulnerabilities, but aren't immune to all of them. I don't know what AMD is doing about it, or whether they will have it all fixed before Intel, but it seems likely they will. There just isn't a good option right now. Wait a year or two, or use an ancient Pentium system, or live with it.

      • (Score: 0) by Anonymous Coward on Thursday April 05 2018, @08:42AM (1 child)

        by Anonymous Coward on Thursday April 05 2018, @08:42AM (#662830)

        Sounds like a good time to not buy new computers.

        • (Score: 2) by bzipitidoo on Thursday April 05 2018, @12:37PM

          by bzipitidoo (4388) on Thursday April 05 2018, @12:37PM (#662890) Journal

          Tell me about it. I had been waiting to upgrade my 2 old circa 2007 desktop systems. Decided to stop waiting last summer. Also, the power supply on one of them was failing and making the system unreliable, prone to spontaneous reboots and freezes. I wanted the Ryzen, but it would have been another half a year wait at least, so I went ahead and got a Skylake and then a Kaby Lake. The shine still hadn't worn off when the news about Meltdown and Spectre hit.

          So then I checked that the old desktop's problem really was a bad power supply, bought and installed a new one, and it's working fine again.

    • (Score: 1, Informative) by Anonymous Coward on Wednesday April 04 2018, @06:40PM

      by Anonymous Coward on Wednesday April 04 2018, @06:40PM (#662580)

      Spectre is common to all processors that do speculative execution. Meltdown was Intel-specific, as Intel was the only one that performed speculation across different privilege levels (so user space could peek at kernel data).

  • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @10:59AM (1 child)

    by Anonymous Coward on Wednesday April 04 2018, @10:59AM (#662426)

The way these vulnerabilities come rolling right out of the factory, I can't help but wonder if they are present so Microsoft can kill off existing machinery at any time with a "security update", in the same manner as they were the carrier of the FTDI chip nuker.

    If not Microsoft, anyone who has the "secret sauce" to deliver a "security update" to any company he may wish to take down.

This thing about software being even capable of damaging hardware is for the birds... (unless it's electromechanical, where physics, not common sense, rules).

    • (Score: 3, Interesting) by Anonymous Coward on Wednesday April 04 2018, @11:56AM

      by Anonymous Coward on Wednesday April 04 2018, @11:56AM (#662437)

These exploits fit perfectly well with that narrative, since they affect every system since the original hyperthreaded Intel chips – which drew a large number of warnings about exactly these types of cache side-channel attacks back in 2004 or 2005! (Go look through the Slashdot archives for "P4 Hyperthreading Sidechannel attack", or a variation on the spelling.) Intel did some trivial patch and claimed it wasn't a major threat vector, but for a while hyperthreading defaulted to off – not for performance reasons, but for security-related ones.

      By forcing obsolescence of these old chips they can push us on to new more restricted platforms, helping both government and corporate control of the population as they slowly tighten the noose around our necks. You can scoff all you like, but as the Facebook data leak shows, neither the government nor the corporations are doing their civic duty in protecting the rights or information of their constituents or consumers respectively.

  • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @12:06PM (1 child)

    by Anonymous Coward on Wednesday April 04 2018, @12:06PM (#662441)

Well, this sure is one way to *encourage* people to buy a new Windows 10 machine.

    • (Score: 0) by Anonymous Coward on Thursday April 05 2018, @12:34AM

      by Anonymous Coward on Thursday April 05 2018, @12:34AM (#662707)

That is not a bad idea. Most of the CPUs and computers in that line were fairly bad on power usage. You are talking end-of-XP- and Vista-era computers here. The OS on them is probably wildly out of date already, except for anyone who converted them to Linux.

  • (Score: 3, Interesting) by bradley13 on Wednesday April 04 2018, @01:39PM (6 children)

    by bradley13 (3053) on Wednesday April 04 2018, @01:39PM (#662469) Homepage Journal

    Moore's law actually says nothing at all about performance - it is about transistor density. For years, smaller transistors automatically meant faster performance. But CPUs have reached their maximum complexity, and there's really no use for more transistors. Transistor density is still going up nicely - but it's now being invested in things like putting multiple cores on the chip, or a GPU, or doing full SoC. None of these have anything to say about individual core performance.

    In an attempt to keep adding performance, Intel (and other CPU manufacturers) have been adding tricks like longer instruction pipelines and speculative execution. Which, on average, maybe do improve performance - but it turns out that they were cheating. Doing things that cannot be allowed, at least, not if we care about security.

Future advances in computing hardware will likely be about things other than raw CPU speed. For example, currently, adding more than about 16 cores to a machine actually slows things down, because we don't know how to use them for general computing, but the operating system has to manage them anyway. If we could make better use of parallelism, we could use simpler cores and put thousands of them on a chip. Or quantum computing, if it lives up to the dreams, may offer an entirely new computational paradigm.

    Meanwhile, we've reached the point where there's almost no need to replace a computer, unless it actually breaks. So we're seeing the marketing engines of all the hardware manufacturers working overtime. The most amusing example in recent history was the ad blitz, convincing you that you really need that new iPhone; with Apple getting caught intentionally degrading the performance of older models. Watch for more of the same on all fronts...

    --
    Everyone is somebody else's weirdo.
    • (Score: 3, Insightful) by requerdanos on Wednesday April 04 2018, @04:09PM (2 children)

      by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @04:09PM (#662532) Journal

      CPUs have reached their maximum complexity, and there's really no use for more transistors.

      Though we undoubtedly agree on the underlying facts, I respectfully disagree with your conclusion. CPUs haven't reached their maximum complexity--nowhere close--and we are at a stone-age level of transistor density compared to what's coming. Single cores will, at some point, be mind-bogglingly orders of magnitude faster than what we currently have.

I know that the Cyberdyne Systems processor for the Terminator, the Starfleet bio-neural gel packs, and 2001's HAL are still science fiction, but they're all pretty plausible, and yesterday's science fiction in many areas is today's science fact.

Just because CPU complexity seems to have plateaued under the conditions of current design and manufacturing methods doesn't make the plateau a principle that applies to the future of computing. Making genuine advances that leap ahead of current designs is hard and expensive, whereas making clever recombinations of existing tech is less so; therefore, we do more of the latter than the former (and end up with Spectre and Meltdown). But the genuine advances that leap ahead are coming, barring the end of civilization through some unforeseen means.

      Maybe we'll use optical communications on-die to eliminate heat loss and allow us to go from 3GHz to 30GHz or 300GHz with the same power draw. Maybe unmapped properties of as yet untried materials will be found to have the side effect of moving electrons with a tenth the effort that we now have to undertake. Maybe Fairies will sprinkle magic dust on our Fabs. I don't know what the advance will be. But I know it's coming, and I am not writing off higher transistor density, and I am not writing off greater CPU complexity just because today's best engineers don't know the path to make them effective.

      Though many people have mistakenly thought so all throughout the past, today is not as good as it's ever going to get (even if it's the best it's ever been) whether we are talking about chip designs, or steam engines, or tools, or the wheel, or fire.

      Progress has been, and will continue to be, making things better.

      • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @04:34PM

        by Anonymous Coward on Wednesday April 04 2018, @04:34PM (#662541)

I know that the Cyberdyne Systems processor for the Terminator, the Starfleet bio-neural gel packs, and 2001's HAL are still science fiction, but they're all pretty plausible, and yesterday's science fiction in many areas is today's science fact.

To be pedantic, those might be evolving past our current notion of a CPU, in which case the poster above would be correct.

      • (Score: 0) by Anonymous Coward on Thursday April 05 2018, @09:08AM

        by Anonymous Coward on Thursday April 05 2018, @09:08AM (#662836)

        Technologically there's a long way to go.

We don't have an AI as smart as a crow, which has a brain the size of a walnut and relatively low power consumption.

        Even some insects may still be smarter than our AIs in many ways: https://www.pbs.org/newshour/science/intelligence-test-shows-bees-can-learn-to-solve-tasks-from-other-bees [pbs.org]

        Most quadcopters and UAVs can't fly as long without refuelling/recharging as most flies. There are other insects which do even better - monarch butterfly (44 hours nonstop flapping, 1000+ hours if gliding allowed: https://www.learner.org/jnorth/tm/monarch/FlightPoweredEnergyQ.html [learner.org] ) or a dragonfly (Pantala flavescens).

    • (Score: 5, Interesting) by requerdanos on Wednesday April 04 2018, @04:24PM (2 children)

      by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @04:24PM (#662539) Journal

      to keep adding performance... CPU manufacturers have been adding tricks like longer instruction pipelines and speculative execution. Which... cannot be allowed... if we care about security.

      There's an old joke that goes "Doc, it hurts when I do this!" "Well, don't do that."

      I remember the 70s, 80s, and early 90s when the prevailing thoughts on security were something like this:

      • It was hard to do and made things harder to get working
      • It was applied only in cases where there was a demonstrated need
      • It was assumed to do things like reduce performance and capability
      • This was okay, because "security" and "performance/capability" were two different goals, and you could pick one.

      Basically, OS makers, whether it was the unixes or DOSes or CP/Ms or Windowses, and software programmers of all and sundry kinds, worked to make good, functional software that did the task at hand with a minimum of bother. If the user/admin wanted to encumber that with security measures, so be it, that was understandable if you had such a need.

      If a perfectly good, working, operating system or piece of software could be attacked and crashed, burned, or compromised, no one was really surprised or alarmed by that, in the same way that no one went around saying that "Bob the Builder is a crappy contractor--look! If you put lit matches on his buildings when accelerants are present, the buildings burn right to the ground! Why didn't he allow for that?"

      In other words, if someone attacked and knocked over a perfectly good stack of software, then the software wasn't at fault--the attacker was.

      Gradually, over time, the attackers grew in number and bad attitude, and our views evolved to the current have-our-cake-and-eat-it-too attitude of wanting things not only functional, fast, and easy, but also completely secure, and right now, and oh yes, for a bargain price if you please.

      Maybe we should retreat a little, and get some perspective. The cheating tricks that make processors faster are, in my opinion, a terrific thing even if they come at the cost of security. It's a trade-off, and I would be happy even if I had to pick between "faster" and "bulletproof-secure".

      • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @07:06PM (1 child)

        by Anonymous Coward on Wednesday April 04 2018, @07:06PM (#662589)

        The cheating tricks that make processors faster are, in my opinion, a terrific thing even if they come at the cost of security. It's a trade-off, and I would be happy even if I had to pick between "faster" and "bulletproof-secure".

        It's a valid opinion, but be aware that many who have to pick between those don't just decide for themselves, in contrast with earlier days. Pre-Internet, nobody cared much about remote exploitation, and rightfully so. Pre-cloud, nobody cared much about local data exfiltration. But nowadays, your health data could very well be stored on the same node where someone else's Nodejs code is running, or where noob users are browsing unprotected on a remote desktop. That's an entirely different threat model, and requires an equally different approach to computing.

        In time, I hope that if you are the CIO of a company and deliberately choose the "faster" option, it will make your company criminally liable for data leaks. We already have examples of such cases in other industries; why not here?

        • (Score: 1, Flamebait) by requerdanos on Wednesday April 04 2018, @07:59PM

          by requerdanos (5997) Subscriber Badge on Wednesday April 04 2018, @07:59PM (#662614) Journal

          if you were CIO of a company and deliberately chose the "faster" option

          Depending on the application, this might or might not be the correct choice. As you mention (but forgot by the closing paragraph? Do you have some extreme A.D.D.?), it's a valid opinion, and connections to the public Internet influence which choice would be correct.

          Just because security is more and more commonly important vs. performance, that doesn't make it the right choice 100% of the time. Failing to recognize that and wishing jail time on someone based on the failure is pretty shortsighted.

  • (Score: 0) by Anonymous Coward on Wednesday April 04 2018, @06:53PM (1 child)

    by Anonymous Coward on Wednesday April 04 2018, @06:53PM (#662584)

    we've got to have open hardware. put these minions of the surveillance state out to pasture.

    • (Score: 0) by Anonymous Coward on Thursday April 05 2018, @12:38AM

      by Anonymous Coward on Thursday April 05 2018, @12:38AM (#662709)

      While a noble idea, in practice no one will care. Most people do not have a fab. Unlike open source code, which anyone with a computer can inspect and build, CPUs are not so accessible. Do you really think you can noodle out billions of traces and figure out what they do? You may be able to work out some bits here and there. Sorry, an interesting idea that does nothing.

  • (Score: 0) by Anonymous Coward on Thursday April 05 2018, @12:30AM (2 children)

    by Anonymous Coward on Thursday April 05 2018, @12:30AM (#662705)

    32 lawsuits filed [arstechnica.com]. I hope the filings are made public; it will be fun seeing how Intel tries to squirm out of it.

    • (Score: 0) by Anonymous Coward on Thursday April 05 2018, @12:40AM (1 child)

      by Anonymous Coward on Thursday April 05 2018, @12:40AM (#662710)

      it will be fun seeing how Intel tries to squirm out of it
      Yeah, it is so 'fun' to watch people squirm over something that pretty much affects everyone in the world. Making them squirm will fix everything! Let's put itching powder in their shorts too, that way they can squirm MORE!!!!

      • (Score: 0) by Anonymous Coward on Thursday April 05 2018, @08:46AM

        by Anonymous Coward on Thursday April 05 2018, @08:46AM (#662831)

        They knew perfectly well what they did. And they did not care. This is fraud. Thus they should suffer.
