posted by mrpg on Tuesday June 26 2018, @12:40AM   Printer-friendly
from the I-predict-another-one-in-six-months-tops dept.

Recompiling is unlikely to be a catch-all solution for a recently unveiled Intel CPU vulnerability known as TLBleed, the details of which were leaked on Friday, says the head of the OpenBSD project, Theo de Raadt.

The details of TLBleed, which gets its name from the fact that the flaw targets the translation lookaside buffer (TLB), a CPU cache of recent virtual-to-physical address translations, were leaked to the British tech site The Register. The side-channel vulnerability can theoretically be exploited to extract encryption keys and private information from programs.

Former NSA hacker Jake Williams said on Twitter that a fix would probably need changes to the core operating system and was likely to involve "a ton of work to mitigate (mostly app recompile)".

But de Raadt was not so sanguine. "There are people saying you can change the kernel's process scheduler," he told iTWire on Monday. "(It's) not so easy."


Original Submission

 
  • (Score: 2) by realDonaldTrump on Tuesday June 26 2018, @12:58AM (11 children)

    by realDonaldTrump (6614) on Tuesday June 26 2018, @12:58AM (#698499) Homepage Journal

    And if it's old enough, possibly it's OK. Some folks have new computer. And if it's not Intel brand, possibly OK. And some folks don't have computer, they're definitely OK. Unless they rent The Cloud.

    This is a great opportunity for Intel. Bring out the new cyber without the bugs. Let folks trade in the old cyber and get a discount. Like #CashForClunkers [twitter.com] Factories will stay very busy!

    • (Score: 4, Informative) by edIII on Tuesday June 26 2018, @01:06AM (9 children)

      by edIII (791) on Tuesday June 26 2018, @01:06AM (#698504)

      FUCK that nonsense. I'm interested enough in secure computing that I will take lower-powered, less-featured CPUs, and I will never trust Intel again. They were deeply arrogant about their Management Engine the entire time.

      Why would I buy Intel *again*?

      --
      Technically, lunchtime is at any moment. It's just a wave function.
      • (Score: 4, Interesting) by Runaway1956 on Tuesday June 26 2018, @01:26AM (8 children)

        by Runaway1956 (2926) Subscriber Badge on Tuesday June 26 2018, @01:26AM (#698513) Journal

        I've been an AMD guy for a long time. The last Intel I owned was a P3. I don't like the corporate attitude about things like tracking users. I don't like the way they lock their chips up, forcing you to pay for each and every feature. That's a holdover from the days of mainframes. I never did like their obsession with speed.

        With AMD, they're happy with a slower clock speed. The focus is on "real world use", and it has been for quite a long while. So, you lose a few cycles in speed, but the chip is tuned to do things that users actually do with those chips. I've been happy with that. Overall, AMD has a better philosophy, which leads to a better thought out design.

        So, actually, I've sacrificed nothing by using AMD.

        • (Score: 0) by Anonymous Coward on Tuesday June 26 2018, @04:52AM

          by Anonymous Coward on Tuesday June 26 2018, @04:52AM (#698604)

          Actually, AMD should also be faster and cheaper next cycle, e.g. Threadripper 2 next year.

        • (Score: 2, Interesting) by anubi on Tuesday June 26 2018, @12:11PM (6 children)

          by anubi (2828) on Tuesday June 26 2018, @12:11PM (#698704) Journal

          Runaway:

          If you are an AMD guy, maybe you can tell me if I have just got ahold of a decent chip... AMD FD6120WMW6KGU.

          I recovered it out of a dumpster. The case was smashed, the motherboard cracked, but it appears the chip and several gigabytes of RAM modules survived.

          Judging from which company had it, I surmise it must have been a top-flight CPU, as I have never seen businesses like the one I got it from care all that much about how much things cost.

          If it's a decent chip, maybe I'll try to get another board for it and build up a Linux box.

          --
          "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
          • (Score: 2) by RS3 on Tuesday June 26 2018, @01:26PM

            by RS3 (6367) on Tuesday June 26 2018, @01:26PM (#698732)

            Sorry, I'm not "Runaway". I'm not sure what you consider to be a "decent chip", but that CPU is a couple of notches better than anything I have. I rarely do anything highly CPU intensive, and when I do, I have enough computers that I can let one compile or render and I'll use another one. Shame they smashed it up. I think it should be criminal to destroy anything that someone else can use. It's just wasteful. I'll discuss that (economics) another time.

            I would get an MB for it. Try to do a little research - I have no idea what to recommend. For sure, run memtest86 fully before trusting it.

            I have a couple of trash-picked machines that had very bad RAM (both DIMMs). One had a memtest86 DVD in it, so I know they were on the right track; they just didn't know to run one DIMM at a time. Or perhaps they didn't know they could replace RAM? In one, the RAM was shared by the graphics chip, so the screen never displayed. I tested that DIMM in a known-good machine to discover that problem.

            I'd like to help you dive that dumpster! Reminds me - I have a beautiful Trek that came out of a dumpster. It was dusty and the tires were flat, but I rode it for years and finally changed one tire.

          • (Score: 3, Informative) by RS3 on Tuesday June 26 2018, @01:50PM (2 children)

            by RS3 (6367) on Tuesday June 26 2018, @01:50PM (#698750)
            • (Score: 1) by anubi on Wednesday June 27 2018, @07:41AM (1 child)

              by anubi (2828) on Wednesday June 27 2018, @07:41AM (#699186) Journal

              Thanks for the link, RS3! I did not know if it would be worth it to try to rebuild a system, given just the CPU and some memory. It's been my experience that anything with big heat sinks may be old technology that's best left alone. My cellphone may be more powerful.

              But sometimes they are little screamers.

              I was wondering about using it on some CAD programs that I know my present laptop (Walmart HP / Celeron) would choke on. (Hell, this thing chokes on the later Firefox, but works fine with SeaMonkey.) What I have now works fine for web browsing, emails, and doing smaller EAGLE stuff (but it's noticeably struggling to keep up with a 40 sq. in. 2-layer PCB!).

              Thanks for the info snippet... that led me to the AM3+ socket, which led me to quite a number of suppliers of motherboards that seem to support it. I would like to look more at those mini-ATX form factors. This thing had a massive heatsink, so it might be a good idea to liquid cool it... probably using some of that "EVANS" water-free coolant used in cars where corrosion is a big problem. But I did not want to invest much time messing with it unless I was going to get a pretty potent system that would run big PCB or Solidworks/3dFusion kind of programs. I don't even want to think about loading that kind of stuff on a machine that buckles under a web browser!

              I guess it would be my kind of luck to get the rest of the parts, then discover the CPU has been damaged.

              --
              "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
              • (Score: 2) by RS3 on Wednesday June 27 2018, @01:13PM

                by RS3 (6367) on Wednesday June 27 2018, @01:13PM (#699278)

                Well, if the CPU turns out to be damaged, and you're sure because you tested the RAM elsewhere or bought new RAM, you could always buy a nice AM3+ CPU on eBay.

                I'm almost embarrassed to admit how long ago I bought my first AMD CPU. I'm pretty open-minded in general, and I see advantages and disadvantages.

                You mentioned "large heatsink". Did it have a fan with it, or is it getting cooling from the case fan? If the latter, it should be obvious why it needs to be larger, and it's mostly due to noise. Spinning fan blades very close to heatsink fins become a small siren - not the sailor-enchantress kind, but it will attract a certain breed of nerd. But I digress: to make computers quieter, they run fans slower, move them away from the heatsink, and then need more heatsink surface area to achieve sufficient cooling.

                Plus, AMD stuff tends to run hotter than a competing-class CPU. For your found CPU, it's "Thermal Design Power: 95 Watt". You can look up the TDP for any CPU.

                So use the big heatsink / case fan, or buy a heatsink/fan combo. It'll spin up and be noisy for renders, compiles, some CAD stuff, maybe playing full-screen videos, etc.

          • (Score: 3, Informative) by Runaway1956 on Tuesday June 26 2018, @02:59PM (1 child)

            by Runaway1956 (2926) Subscriber Badge on Tuesday June 26 2018, @02:59PM (#698785) Journal

            RS3 already supplied you with the reference page. Yes, that is a very nice CPU. I prefer the Opterons over the FX because the Opterons are generally given a larger cache. But my wife runs the octocore cousin of your hexacore. Depending on exactly when it was purchased, it was probably "top flight", as you say. If purchased a couple of years into its life cycle, then it was already "obsolete", but it's still very nice.

            OOOOHHHHHHHH -

            Notes on AMD FX-6120

                    The processor has unlocked clock multiplier
                    Turbo Core frequency is not confirmed

            Your CPU is a little bit special. You can overclock that thing until it begins to melt down, then back it off a little, and it will run for a couple years. Or, you can back it off a little more, and it will probably run for a decade.

            Of course, you need to understand that the hexacore is actually an octocore, with two cores locked out, because they failed QC. So, you know you have two bad cores on the die, and the remaining cores may or may not withstand radical overclocking. Still - a very nice chip!!

            It used to be great fun examining the serial numbers on the CPUs to find those batches that could be overclocked easily. Nowadays, more and more chips are just unlocked, and it's up to the overclocker to figure out what his chip's limits are.

            • (Score: 1) by anubi on Wednesday June 27 2018, @08:38AM

              by anubi (2828) on Wednesday June 27 2018, @08:38AM (#699199) Journal

              Thanks!

              You confirmed a suspicion of mine that it's already obsolete, but still maybe worth bringing back up. At least I did not dig up an old Pentium II-class part, which I believe my modern cheapie BLU phone would overpower. Kinda sad to hear that I may already have two bad cores... I only hope the locked-out cores also have their power killed, so they don't contribute to the thermal burden.

              I guess it's nice to know the chip is a little special, but I will trade off speed for longevity almost any day. I have a lot of stuff now decades old, still working fine. 386SX. Those were powerful enough to do damn near anything I had in mind, excluding anything centered around images (including CAD / simulation) or gaming. I guess the motherboards have straps on them to select how fast I clock the CPU.

              Here's hoping that multiple cores will let things like EAGLE, Solidworks, or 3DFusion run faster, with the system hopefully running on other cores. I have arrived at the conclusion that context switching can eat up a lot of time when I am trying to multitask a whole bunch of crap onto ONE core.

              The little celeron I use all the time right now simply does not have the power to run modern software. It will just go to 100% CPU utilization, then some critical OS interrupts go unserviced, then it will lock up.

              The big heat sink that was on it concerned me though... that chip obviously ran hot, and according to the spec sheet, like 95 watts had to be gotten rid of? I am wondering about liquid cooling it... I may end up with a much larger automotive style radiator somewhere else to get rid of the heat.

              I just hope the entity before me did not thermally damage the CPU. I believe whoever threw this away did not know what he was doing; had he known, he would not have disposed of it this way. Much physical damage was done, with considerable effort, but the disk drive, full of data in the clear, was still intact. The power supply and I/O card rack took the brunt of the damage. Had I been tasked with decommissioning the machine, I would have removed the disk drive first, then gently placed the machine beside the dumpster for whoever wanted it - that is, if I could not have the physical machine myself. The last thing the machine would have done is DBAN its own hard drive. But then, they didn't hire me to decommission it. Nice 2 TB SATA drive, 90% still unused. Found that out by plugging it into a USB-to-SATA adapter and mounting it as an external USB drive.

              I had no interest in the previous owner's stuff, so I "deleted" it. You know, del *.*. That only updates the file allocation table so that no sectors belong to a listed file and all sectors are available to be written to. Now I'm using it to store CloneZilla image backups of my other systems. So far, the images have been verifying as viable. But it's not my only backup; I will have a hard time trusting it, seeing what it has already been through.

              But, it never hurts to have redundant backups.

              I am simply not the type to fish through that disk for personal info, even though it was there. Misuse of stuff like that never comes to any good for anyone.

              --
              "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
    • (Score: 0) by Anonymous Coward on Tuesday June 26 2018, @08:28AM

      by Anonymous Coward on Tuesday June 26 2018, @08:28AM (#698659)

      If companies and governments need to recompile everything, then they'll start demanding cross-compatible sources and licenses to modify, while making sure any future projects abandon stuff like C++ in favor of stuff like Java. At that point, x86 and Intel are FUBAR.

  • (Score: 1, Informative) by Anonymous Coward on Tuesday June 26 2018, @01:12AM

    by Anonymous Coward on Tuesday June 26 2018, @01:12AM (#698506)

    Time to open a wormhole and revert to older hardware:

    https://en.wikipedia.org/wiki/Contiki [wikipedia.org]

    Or do THOSE systems have possible back-doors in them?

  • (Score: 3, Interesting) by Knowledge Troll on Tuesday June 26 2018, @01:30AM (8 children)

    by Knowledge Troll (5948) on Tuesday June 26 2018, @01:30AM (#698518) Homepage Journal

    I'm the security-minded person and advocate at work (aka, I give a shit and try to drag other people along into the give-a-shit club too, with varying degrees of success), and I disseminated this information inside the organization. I feel it is important to share these threats so they are well understood, but no one, including myself, knows what to do in the face of workstations and servers built on CPUs where the security guarantees do not hold true. Further, people are asking me what good this information does without some action to take - and I can't say I disagree with that.

    This one is troublesome because Intel doesn't seem to feel the need to admit that it is a problem (perhaps if they can just keep saying it isn't a problem they won't get sued?), and there doesn't appear to be any mitigation available except to disable Hyper-threading, which itself is not something generally available to configure. The OpenBSD technique of taking the hyperthread CPUs offline seems like the only option. It seems like this would be possible in Linux; I recall offlining a running CPU before, but it's been a while and it wasn't on x86.
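
    For anyone who wants to try it on Linux, here is a rough sketch of what offlining the extra hardware threads might look like (assuming the usual sysfs topology files are present and you run it as root; this is a sketch, not a vetted tool):

    #!/usr/bin/env python3
    # Rough sketch: take sibling hyperthreads offline via sysfs on Linux.
    # Assumes /sys/devices/system/cpu/cpu*/topology/thread_siblings_list
    # exists (contents look like "0,4" or "0-1") and that we run as root.
    import glob

    SYS = "/sys/devices/system/cpu"

    for path in sorted(glob.glob(f"{SYS}/cpu[0-9]*/topology/thread_siblings_list")):
        cpu = int(path.split("/")[-3].lstrip("cpu"))
        with open(path) as f:
            raw = f.read().strip().replace("-", ",")
        siblings = sorted(int(x) for x in raw.split(","))
        if cpu != siblings[0]:  # keep only the first hardware thread of each core
            with open(f"{SYS}/cpu{cpu}/online", "w") as f:
                f.write("0")
            print(f"offlined cpu{cpu} (sibling of cpu{siblings[0]})")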

    I asked one of our VPs to meet with me tomorrow so we can formulate the organization-level response to information such as this, but honestly I'm not sure what we can do. The devs can't stop working, and they can't stop accessing systems that need to be considered secure.

    This is particularly aggravating.

    • (Score: 0) by Anonymous Coward on Tuesday June 26 2018, @01:41AM (1 child)

      by Anonymous Coward on Tuesday June 26 2018, @01:41AM (#698523)

      Invest in ARM? I saw some real nice 32-core workstations the other day...

      • (Score: 3, Touché) by Knowledge Troll on Tuesday June 26 2018, @02:23AM

        by Knowledge Troll (5948) on Tuesday June 26 2018, @02:23AM (#698540) Homepage Journal

        Invest in ARM?

        That might help my pocketbook, but it won't help maintain some level of sanity at work, at least in the immediate term. Also, ARM was a victim of Spectre/Meltdown, and probably more in the future, so the problem still remains that we use CPUs that we can't trust to actually maintain the security models we require.

    • (Score: 3, Informative) by http on Tuesday June 26 2018, @02:37AM (3 children)

      by http (1920) on Tuesday June 26 2018, @02:37AM (#698549)

      The "what to do" seems straightforward -

      1. No cloud services outside the org, period. You absolutely should not trust your data to a machine that I can rent time on. This trick is too cool.
      2. Trust your devs. If you don't trust what they're doing on mission-critical machines, or don't have a way to track their activities thereon, you have bigger problems than TLBleed.

        ...but that means spending money and time, something few businesses are willing to do if they've gotten used to 'not doing' up to now.

      --
      I browse at -1 when I have mod points. It's unsettling.
      • (Score: 4, Informative) by Knowledge Troll on Tuesday June 26 2018, @03:05AM (2 children)

        by Knowledge Troll (5948) on Tuesday June 26 2018, @03:05AM (#698569) Homepage Journal

        This analysis seems to focus specifically on servers. And according to our devs we should trust our devs and I do in fact trust our devs. I do not think our devs would do anything malicious. That's not the problem.

        You can trust the developers to be good people, but you can't trust the developer accounts to only be used by the developers. The workstations the developers use are vulnerable to these problems as well, and security on their workstations is probably more important than security on our servers, because our servers aren't running JavaScript from random websites.

        This is what I mean when I say we can't just have our developers stop working: they need to interact with the world to work, and do it on a machine that is demonstrating it is not fit for the task.

        W-T-F

        • (Score: 1, Interesting) by Anonymous Coward on Tuesday June 26 2018, @07:45AM

          by Anonymous Coward on Tuesday June 26 2018, @07:45AM (#698641)

          So you're afraid their workstations will get compromised.
          I guess it sucks for some use cases, but the penalty for slowing down workstations is not that great, so just do your best to disable hyperthreading.

        • (Score: 1, Interesting) by Anonymous Coward on Tuesday June 26 2018, @09:38PM

          by Anonymous Coward on Tuesday June 26 2018, @09:38PM (#698976)

          First, I assume you're a high value private company, but not military or centrifuge manufacturing or anything. That said:

          Buy your devs $150 laptops.

          Put those on a different network.

          Let them bring in data that way. But take the in-office network off the internet.

          Thusly, it becomes very hard for your devs to browse to stackoverflow, load a poisoned ad, and compromise your perimeter. Instead, the laptop network will be 'taken' and that's just fine.

          When they need to move data that they can't crunch on the laptops or that they need to push to the public, USB sticks. Yeah, that's an in/exfiltration opportunity. No, it's nothing compared to being hooked into the net. A nation state actor will compromise you, even if they have to walk up to your building and do it in person. But your competitors might not, and general wannabe crackers sure won't.

          $150/head is a lot cheaper than ... just about any other option! Plus a few K to set up a linux image and ongoing support as they need reflashing, but IT gonna IT.

    • (Score: 3, Informative) by DrkShadow on Tuesday June 26 2018, @04:23AM

      by DrkShadow (1404) on Tuesday June 26 2018, @04:23AM (#698598)

      You need to consider the business case.

      What can you do? -> Take vulnerable machines out of vulnerable areas.

      How important is it that those machines be kept secure? Is it "fairly" important, or is it absolutely, life-riskingly critical? If the former, then how likely is it that your network will be breached, and the machine will be breached, and how likely is it that someone cares enough to do that, and what mitigations do you have in place already? What is the cost of additional mitigations (developer time, morale, solution cost), and what is the likelihood of those preventing a breach? What, then, is the actual cost of a breach?

      If it's life-riskingly critical, then you should be considering taking those machines off the network. Air-gapped machines are a great deal harder to compromise, and with proper policies forbidding things like USB-key usage (I mean locks in the USB ports), it's an effective means of protection. Is the level of risk worth the slow-down for developers, being unable to look things up and quickly/easily fix code? Is the risk such that you need to prevent developers from going in or out with USB keys?

      You need to make a business case, establish the mitigations (not fixes -- there will always be another hole, but make it so that it takes too long to reasonably find/exploit those holes), and justify that cost as being more worthwhile than the alternative.
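
      As a back-of-the-envelope way to frame that comparison, here is the arithmetic in a few lines of Python (every number below is made up purely for illustration):

      # Toy expected-loss comparison; all figures are invented placeholders.
      breach_probability  = 0.10       # chance of a relevant breach this year
      breach_cost         = 2_000_000  # estimated cost of a breach ($)
      mitigation_cost     = 60_000     # developer time, morale, tooling ($/year)
      mitigation_efficacy = 0.70       # fraction of the breach risk it removes

      loss_without = breach_probability * breach_cost
      loss_with = breach_probability * (1 - mitigation_efficacy) * breach_cost + mitigation_cost

      print(f"Do nothing: ${loss_without:,.0f} expected per year")
      print(f"Mitigate  : ${loss_with:,.0f} expected per year")

      If the mitigated expected loss (residual risk plus the cost of the mitigation) comes out lower than doing nothing, the mitigation pays for itself on paper; if not, that is the argument you will have to win some other way.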

    • (Score: 0) by Anonymous Coward on Tuesday June 26 2018, @08:59AM

      by Anonymous Coward on Tuesday June 26 2018, @08:59AM (#698667)

      people are asking me what good this information does without some action to take

      You now know to avoid running arbitrary code on remote machines. So, remote RDP clients should be replaced with per-user machines and remote storage servers, '90s style.

      A mail server here... A CIFS server there... Maybe the odd git/SQL server? University lab best practices, baby. KERBEROS for life. Put on Sublime and mellow out to the screams of a thousand clients losing their one and only local copy since they're too stupid to save to the network drive.

      Aha, I miss the good ol' days.

  • (Score: 4, Informative) by Anonymous Coward on Tuesday June 26 2018, @01:51AM (1 child)

    by Anonymous Coward on Tuesday June 26 2018, @01:51AM (#698527)

    The obvious fix is gang scheduling, but sadly de Raadt is correct about it being not so easy... well at least if you want to keep performance. On the bright side, you can get back the hyperthreading.

    Gang scheduling is when you schedule a group of tasks all together. Traditionally it was a performance idea, for cases such as rapidly communicating processes. It's best to schedule them as a group, letting them have all the processors, then schedule away from them as a group. The entire system is handed to a cooperating group of tasks, rather than handing out 1 processor at a time.

    For the security situation, the requirement is that the gang be defined according to security. For example, if a pair of tasks can ptrace (debug) each other, they belong in the same gang.

    This nicely deals with future security attacks that we are all expecting. For example, SMP suffers from bus contention. Each CPU can detect that some other CPU is using the memory bus (or crossbridge or whatever) and therefore deduce something about what is going on in the other task. If all tasks are in the same security context, this isn't a concern.

    The difficulty is that the kernel itself is a distinct security context: unless all tasks in a gang enter/leave the kernel at the same time, there is a problem. Pausing tasks might be possible, but of course then processors sit idle during that time.

    We also get a bit more trouble from the proliferation of types of security contexts in the modern world. Browser sandbox content is different from the rest of the browser. OpenBSD has the "pledge" system call, causing more trouble. New types of security distinctions keep popping up to further subdivide the system, and all of these distinctions must be taken into account for the gang scheduling.
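
    To make the grouping concrete, here is a toy sketch of gangs keyed by security domain (the task names and domains are hypothetical, and this is nowhere near a real kernel scheduler): tasks that may observe each other share a gang, and a core's sibling threads are only ever filled from one gang per time slice.

    # Toy illustration of security-domain gang scheduling, not a real scheduler.
    from collections import defaultdict
    from itertools import cycle

    SIBLINGS_PER_CORE = 2  # hardware threads sharing one physical core (and its TLB)

    # Hypothetical runnable tasks tagged with a security domain.
    tasks = [
        ("firefox-tab-1", "user:alice"),
        ("firefox-tab-2", "user:alice"),
        ("sshd",          "root"),
        ("postgres",      "user:postgres"),
    ]

    # Tasks that share a security domain (e.g. could ptrace each other) form a gang.
    gangs = defaultdict(list)
    for name, domain in tasks:
        gangs[domain].append(name)

    def schedule(time_slices=4):
        rotation = cycle(sorted(gangs.items()))
        for t in range(time_slices):
            domain, members = next(rotation)
            # Fill the core's sibling threads only from this gang; leave a
            # thread idle rather than mixing security domains on one core.
            slots = (members + ["<idle>"] * SIBLINGS_PER_CORE)[:SIBLINGS_PER_CORE]
            print(f"slice {t}: domain={domain:14} threads={slots}")

    schedule()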

    • (Score: 3, Interesting) by Knowledge Troll on Tuesday June 26 2018, @02:20AM

      by Knowledge Troll (5948) on Tuesday June 26 2018, @02:20AM (#698539) Homepage Journal

      The part where the kernel running is mutually exclusive with any other process running sounds like an absolutely massive issue, but it does handle the case de Raadt points out, where a process on one hyperthread can get info from another process executing in the kernel on the companion hyperthread. Thank you for introducing the concept of gang scheduling, though.
