
posted by mrpg on Tuesday June 26 2018, @12:40AM   Printer-friendly
from the I-predict-another-one-in-six-months-tops dept.

Recompiling is unlikely to be a catch-all solution for the recently unveiled Intel CPU vulnerability known as TLBleed, details of which were leaked on Friday, says Theo de Raadt, head of the OpenBSD project.

The details of TLBleed, which takes its name from its target, the translation lookaside buffer (a CPU cache), were leaked to the British tech site The Register. The side-channel vulnerability can theoretically be exploited to extract encryption keys and private information from programs.

Former NSA hacker Jake Williams said on Twitter that a fix would probably need changes to the core operating system and was likely to involve "a ton of work to mitigate (mostly app recompile)".

But de Raadt was not so sanguine. "There are people saying you can change the kernel's process scheduler," he told iTWire on Monday. "(It's) not so easy."


Original Submission

  • (Score: 3, Interesting) by Knowledge Troll on Tuesday June 26 2018, @01:30AM (8 children)

    by Knowledge Troll (5948) on Tuesday June 26 2018, @01:30AM (#698518) Homepage Journal

    I'm the security-minded person and advocate at work (aka, I give a shit and try to drag other people along into the give-a-shit club too, with varying degrees of success) and I disseminated this information inside the organization. I feel it is important to share these threats so they are well understood, but no one, including myself, knows what to do in the face of workstations and servers built on CPUs where the security guarantees do not hold true. Further, people are asking me what good this information does without some action to take - I can't say I disagree with that.

    This one is troublesome because Intel doesn't seem to feel the need to admit that it is a problem (perhaps if they can just keep saying it isn't a problem they won't get sued?) and there doesn't appear to be any mitigation available except disabling Hyper-Threading, which itself is not something generally exposed as a configuration option. The OpenBSD technique of taking the hyperthread CPUs offline seems like the only option. It seems like this would be possible in Linux; I recall offlining a running CPU before, but it's been a while and it wasn't on x86.
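The Linux offlining the parent recalls goes through the standard sysfs CPU hotplug interface. A minimal sketch (dry-run: it only prints the commands, since writing the `online` files requires root) that keeps the lowest-numbered thread of each physical core and offlines its hyperthread siblings:

```python
# Sketch: identify hyperthread siblings to take offline on Linux,
# keeping one thread per physical core (the approach OpenBSD took
# in-kernel). Uses the standard sysfs CPU topology files.
import glob

def siblings_to_offline(topology):
    """Given {cpu: thread_siblings_list string}, return the set of CPU
    numbers to offline so only the lowest-numbered thread of each core
    stays online. Sibling lists look like '0,4' or '0-1'."""
    offline = set()
    for cpu, sib in topology.items():
        ids = []
        for part in sib.split(","):
            if "-" in part:
                lo, hi = part.split("-")
                ids.extend(range(int(lo), int(hi) + 1))
            else:
                ids.append(int(part))
        keep = min(ids)  # keep one thread per core
        offline.update(i for i in ids if i != keep)
    return offline

def read_topology():
    """Read sibling lists from sysfs (empty dict on non-Linux)."""
    topo = {}
    pattern = "/sys/devices/system/cpu/cpu[0-9]*/topology/thread_siblings_list"
    for path in glob.glob(pattern):
        cpu = int(path.split("/")[5][3:])  # '.../cpu7/...' -> 7
        with open(path) as f:
            topo[cpu] = f.read().strip()
    return topo

if __name__ == "__main__":
    for cpu in sorted(siblings_to_offline(read_topology())):
        # Run these as root to actually offline the siblings.
        print(f"echo 0 > /sys/devices/system/cpu/cpu{cpu}/online")
```

On a 4-core/8-thread part where core 0's siblings are CPUs 0 and 4, this would propose offlining CPUs 4-7. Offlined CPUs come back with `echo 1` or a reboot, so the change is easy to trial.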

    I asked one of our VPs to meet with me tomorrow so we can formulate the organization level response to information such as this but honestly I'm not sure what we can do. The devs can't stop working and they can't stop accessing systems that need to be considered as secure.

    This is particularly aggravating.

  • (Score: 0) by Anonymous Coward on Tuesday June 26 2018, @01:41AM (1 child)

    by Anonymous Coward on Tuesday June 26 2018, @01:41AM (#698523)

    Invest in ARM? I saw some real nice 32-core workstations the other day...

    • (Score: 3, Touché) by Knowledge Troll on Tuesday June 26 2018, @02:23AM

      by Knowledge Troll (5948) on Tuesday June 26 2018, @02:23AM (#698540) Homepage Journal

      Invest in ARM?

      That might help my pocketbook but not maintaining some level of sanity at work, at least in the immediate term. Also, ARM fell victim to Spectre/Meltdown and probably will to more in the future, so the problem remains that we use CPUs we can't trust to actually maintain the security models we require.

  • (Score: 3, Informative) by http on Tuesday June 26 2018, @02:37AM (3 children)

    by http (1920) on Tuesday June 26 2018, @02:37AM (#698549)

    The "what to do" seems straightforward -

    1. No cloud services outside the org, period. You absolutely should not trust your data to a machine that I can rent time on. This trick is too cool.
    2. Trust your devs. If you don't trust what they're doing on mission-critical machines, or have a way to track their activities thereon, you have bigger problems than TLBleed.

      ...but that means spending money and time, something few businesses are willing to do if they've gotten used to 'not doing' up to now.

    --
    I browse at -1 when I have mod points. It's unsettling.
    • (Score: 4, Informative) by Knowledge Troll on Tuesday June 26 2018, @03:05AM (2 children)

      by Knowledge Troll (5948) on Tuesday June 26 2018, @03:05AM (#698569) Homepage Journal

      This analysis seems to focus specifically on servers. And according to our devs we should trust our devs and I do in fact trust our devs. I do not think our devs would do anything malicious. That's not the problem.

      You can trust the developers to be good people, but you can't trust the developer accounts to only be used by the developers. The workstations the developers use are vulnerable to these problems as well, and security on their workstations is probably more important than security on our servers, because our servers aren't running JavaScript from random websites.

      This is what I mean by we can't just have our developers stop working - they need to interact with the world to work, and to do it on a machine that is demonstrating it is not fit for the task.

      W-T-F

      • (Score: 1, Interesting) by Anonymous Coward on Tuesday June 26 2018, @07:45AM

        by Anonymous Coward on Tuesday June 26 2018, @07:45AM (#698641)

        So you're afraid their workstations will get compromised.
        I guess it sucks for some use cases, but the penalty for slowing down workstations is not that great, so just do your best to disable Hyper-Threading.

      • (Score: 1, Interesting) by Anonymous Coward on Tuesday June 26 2018, @09:38PM

        by Anonymous Coward on Tuesday June 26 2018, @09:38PM (#698976)

        First, I assume you're a high value private company, but not military or centrifuge manufacturing or anything. That said:

        Buy your devs $150 laptops.

        Put those on a different network.

        Let them bring in data that way. But take the in-office network off the internet.

        Thusly, it becomes very hard for your devs to browse to stackoverflow, load a poisoned ad, and compromise your perimeter. Instead, the laptop network will be 'taken' and that's just fine.

        When they need to move data that they can't crunch on the laptops or that they need to push to the public, USB sticks. Yeah, that's an in/exfiltration opportunity. No, it's nothing compared to being hooked into the net. A nation state actor will compromise you, even if they have to walk up to your building and do it in person. But your competitors might not, and general wannabe crackers sure won't.

        $150/head is a lot cheaper than ... just about any other option! Plus a few K to set up a linux image and ongoing support as they need reflashing, but IT gonna IT.

  • (Score: 3, Informative) by DrkShadow on Tuesday June 26 2018, @04:23AM

    by DrkShadow (1404) on Tuesday June 26 2018, @04:23AM (#698598)

    You need to consider the business case.

    What can you do? -> Take vulnerable machines out of vulnerable areas.

    How important is it that those machines be kept secure? Is it "fairly" important, or is it absolutely, life-riskingly critical? If the former, then how likely is it that your network will be breached, and the machine will be breached, and how likely is it that someone cares enough to do that, and what mitigations do you have in place already? What is the cost of additional mitigations (developer time, morale, solution cost), and what is the likelihood of those preventing a breach? What, then, is the actual cost of a breach?

    If it's life-riskingly critical, then you should be considering taking those machines off the network. Air-gapped machines are a great deal harder to compromise, and with proper policies forbidding things like USB-key usage (I mean locks in the USB ports), it's an effective means of protection. Is the level of risk worth the slow-down for developers, being unable to look things up and quickly/easily fix code? Is the risk such that you need to prevent developers from going in or out with USB keys?

    You need to make a business case, establish the mitigations (not fixes -- there will always be another hole, but make it so that it takes too long to reasonably find/exploit those holes), and justify that cost as being more worthwhile than the alternative.
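The cost-versus-likelihood weighing described above can be made concrete with the standard annualized-loss-expectancy (ALE) arithmetic from risk assessment. A minimal sketch; every dollar figure and probability below is a made-up placeholder, not a number from this thread:

```python
# Sketch of the business-case math: a mitigation is worth buying when
# the expected yearly loss it removes exceeds its yearly running cost.
#   ALE = single-loss expectancy (SLE) x annual rate of occurrence (ARO)

def ale(sle, aro):
    """Expected yearly loss from one threat scenario."""
    return sle * aro

def mitigation_worthwhile(sle, aro_before, aro_after, annual_cost):
    """True when the ALE reduction pays for the mitigation."""
    saved = ale(sle, aro_before) - ale(sle, aro_after)
    return saved > annual_cost

# Placeholder numbers: a $500k breach, 10% yearly odds without the
# mitigation, 2% with it, and $30k/year to air-gap the machines.
print(mitigation_worthwhile(500_000, 0.10, 0.02, 30_000))  # prints True
```

Here the mitigation removes $40k/year of expected loss for $30k/year of cost, so it clears the bar; at $50k/year it would not. The point is only to force the likelihoods and costs onto paper, as the comment suggests.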

  • (Score: 0) by Anonymous Coward on Tuesday June 26 2018, @08:59AM

    by Anonymous Coward on Tuesday June 26 2018, @08:59AM (#698667)

    people are asking me what good this information does without some action to take

    You now know to avoid running arbitrary code on remote machines. So, remote RDP clients should be replaced with per-user machines and remote storage servers 90s style.

    A mail server here... a CIFS server there... maybe the odd git/SQL server? University lab best practices, baby. KERBEROS for life. Put on Sublime and mellow out to the screams of a thousand clients losing their one and only local copy since they're too stupid to save to the network drive.

    Aha, I miss the good ol' days.