posted by martyb on Friday May 09 2014, @02:50PM
from the the-gift-that-keeps-on-giving dept.

Ars Technica reports that four weeks after its disclosure, huge swaths of the Internet remain vulnerable to Heartbleed. The article suggests that over 300,000 servers are still affected.

What steps have you taken to protect yourself from this bug? What browser addons have you installed? Have you checked/updated the firmware on your home router? If you work in IT, what has the reaction been? Has your site been compromised? Has vulnerable code been updated, new keys genned, new certificates obtained, and old ones revoked?

Since the OpenSSL library is now undergoing a security review and a fork of it is underway as LibreSSL, it is possible that other vulnerabilities will be discovered. Then what? How likely is it that we will need to repeat this cleanup effort?

(more after the break)

The Heartbleed bug "is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected, under normal conditions, by the SSL/TLS encryption used to secure the Internet." The bug affects not only computer servers but also routers and even some Android phones. Even software like LibreOffice, WinSCP, and FileMaker has versions with the bug that need to be updated. The history, behavior, and impact of this bug are well explained and summarized on Wikipedia, which includes this recommendation:

Although patching software (the OpenSSL library and any statically linked binaries) fixes the bug, running software will continue to use its in-memory OpenSSL code with the bug until each application is shut down and restarted, so that the patched code can be loaded. Further, in order to regain privacy and secrecy, all private or secret data must be replaced, since it is not possible to know if they were compromised while the vulnerable code was in use:[68]

  • all possibly compromised private key-public key pairs must be regenerated,
  • all certificates linked to those possibly compromised key pairs need to be revoked and replaced, and
  • all passwords on the possibly compromised servers need to be changed.
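
For admins working through that checklist, the key and certificate steps can be scripted. The sketch below is a hedged illustration only: it shells out to the openssl command-line tool (assumed to be installed and already patched), and the hostname and file paths are placeholders. Restarting every service that had the old library loaded is still required, as the quoted text notes.

```python
# Hedged sketch: regenerate a key pair and CSR via the openssl CLI.
# Assumes a patched openssl binary is on the PATH; paths/hostname are placeholders.
import subprocess

HOST = "www.example.com"                 # placeholder hostname
KEY = "/etc/ssl/private/new-server.key"  # placeholder output paths
CSR = "/etc/ssl/new-server.csr"

# 1. Generate a fresh private key -- the old one must be assumed compromised.
subprocess.run(["openssl", "genpkey", "-algorithm", "RSA",
                "-pkeyopt", "rsa_keygen_bits:2048", "-out", KEY], check=True)

# 2. Create a CSR for the new key; submit it to the CA, install the new
#    certificate, and revoke the old one.
subprocess.run(["openssl", "req", "-new", "-key", KEY, "-out", CSR,
                "-subj", "/CN=" + HOST], check=True)

# 3. Password resets (the third bullet) are application-specific and not shown.
```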

SN's coverage of this vulnerability includes:

Related Stories

Major OpenSSL Implementation Flaw Discovered 33 comments

An advisory (link: https://www.openssl.org/news/secadv_20140407.txt ) has been released concerning an implementation bug in several versions of the widely used OpenSSL software.

"A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server. Only 1.0.1 and 1.0.2-beta releases of OpenSSL are affected including 1.0.1f and 1.0.2-beta1."

The advisory states that 1.0.1 users can resolve the issue by upgrading to 1.0.1g or recompiling using the -DOPENSSL_NO_HEARTBEATS switch. Users of 1.0.2 will need to wait for the next beta release to get this closed.
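
To see why a missing bounds check leaks memory, here is a deliberately simplified, hypothetical Python model of the heartbeat exchange (OpenSSL's real code is C; this only mirrors the logic): the peer declares a payload length, and the unpatched handler echoes that many bytes back without checking it against the payload actually received.

```python
# Toy model of the Heartbleed flaw -- not OpenSSL's actual implementation.
# The "process memory" holds the received payload followed by unrelated secrets.
process_memory = bytearray(b"PING" + b"...private keys, passwords, session cookies...")

def heartbeat_unpatched(declared_len: int) -> bytes:
    # Bug: trusts the peer-supplied length and copies that many bytes,
    # spilling whatever sits next to the payload in memory (up to 64 KiB).
    return bytes(process_memory[:declared_len])

def heartbeat_patched(payload: bytes, declared_len: int) -> bytes:
    # Fix: discard any request whose declared length exceeds the payload
    # that actually arrived.
    if declared_len > len(payload):
        return b""
    return payload[:declared_len]

print(heartbeat_unpatched(0xFFFF))         # leaks the "secrets"
print(heartbeat_patched(b"PING", 0xFFFF))  # b'' -- request dropped
```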

This website (link: http://heartbleed.com/ ) has been created to spread accurate details of the bug, which has been present in OpenSSL releases dating back to December 2011. Many websites and services are affected; Mojang, for example, decided to completely shut down the account authentication servers for Minecraft while the patch was being put in place.

OpenSSL: Heartbleed - the Fallout. 43 comments

Following our earlier report on the OpenSSL problem that has been nicknamed 'Heartbleed', two contributors have forwarded articles on why you should change your passwords.

Heartbleed, and why you should change your password

I always believed Mojang would keep my details safe; now I realise they are not in control of their own data. Mojang/Minecraft passwords should be changed immediately.

Heartbleed Bug: Change All Your Passwords

The fallout from the Heartbleed bug is hitting the mainstream. The BBC has an article headlined "Public urged to reset all passwords".

Bruce Schneier calls it "catastrophic", giving this advice to sysadmins: "After you patch your systems, you have to get a new public/private key pair, update your SSL certificate, and then change every password that could potentially be affected." He also links to a webpage that will let you test servers for the bug, and an article on Ars Technica discussing the bug.
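
Beyond the test page Schneier links, one rough end-user check is whether a site's certificate was issued after the disclosure date. The Python sketch below is only a heuristic (a new notBefore date does not prove the key was actually rotated, and the hostname is a placeholder):

```python
# Heuristic only: checks the certificate's notBefore date against the
# Heartbleed disclosure date (2014-04-07). It cannot prove key rotation.
import socket
import ssl
from datetime import datetime, timezone

DISCLOSURE = datetime(2014, 4, 7, tzinfo=timezone.utc)

def cert_issued_after_disclosure(host: str, port: int = 443) -> bool:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()              # includes 'notBefore'
    issued = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notBefore"]), tz=timezone.utc)
    return issued >= DISCLOSURE

if __name__ == "__main__":
    print(cert_issued_after_disclosure("www.example.com"))  # placeholder host
```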

Historical Heartbleed Attacks Possibly Logged 17 comments

The EFF has called on admins to check any historical packet capture logs for evidence of Heartbleed attacks in 2013 and earlier. They examined reports from Ars Technica of people coming forward with logs potentially showing in-the-wild Heartbleed attacks long before the recent public disclosure. Perhaps most interesting:

[the] logs had been stored on magnetic tape in a vault. The source IP addresses for the attack were 193.104.110.12 and 193.104.110.20. Interestingly, those two IP addresses appear to be part of a larger botnet that has been systematically attempting to record most or all of the conversations on Freenode and a number of other IRC networks. This is an activity that makes a little more sense for intelligence agencies than for commercial or lifestyle malware developers.

Coincidentally, a few hours prior to this news, I was lamenting here in comments how misleading the mainstream reporting was when it made claims that "what makes it even worse is the heartbleed attack leaves no trace". Of course it leaves a trace: perhaps not in stock OS/webserver log files, but remote attackers always have to carry the attack out over networks, which can notice and/or log the traffic if they take the trouble to. Not to put too fine a point on it, but the same thing is also relevant to the recent slashcode issue with portscans. It may be exhausting work inspecting packet capture logs, but if you make a habit of not doing it, you should be prepared to find some gremlins when you finally get around to it.
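
For anyone taking the EFF up on that and digging through old captures, the signature to look for is a heartbeat request whose declared payload length exceeds the bytes actually sent. A hedged sketch follows; it assumes you have already extracted the raw TLS record bytes from your captures with your own tooling (pcap parsing itself is not shown):

```python
# Hedged sketch: flag suspicious TLS heartbeat requests in a raw byte stream.
# Assumes the TLS records have already been extracted from the capture.
import struct

TLS_HEARTBEAT = 0x18   # TLS record content type for heartbeat messages

def suspicious_heartbeats(stream: bytes):
    """Yield (offset, declared_len, carried_len) for heartbeat requests that
    declare a payload larger than what was actually carried on the wire."""
    i = 0
    while i + 5 <= len(stream):
        ctype, _version, rec_len = struct.unpack_from("!BHH", stream, i)
        record = stream[i + 5:i + 5 + rec_len]
        if ctype == TLS_HEARTBEAT and len(record) >= 3:
            hb_type, declared_len = struct.unpack_from("!BH", record, 0)
            carried = len(record) - 3           # payload bytes actually present
            if hb_type == 1 and declared_len > carried:
                yield i, declared_len, carried
        i += 5 + rec_len

# Fabricated example: a 3-byte heartbeat request claiming a 16 KiB payload.
evil = bytes([0x18, 0x03, 0x02, 0x00, 0x03, 0x01, 0x40, 0x00])
print(list(suspicious_heartbeats(evil)))        # [(0, 16384, 0)]
```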

Heartbleed and Silence From Admins 32 comments

By now even Joe Average has heard about Heartbleed, and possibly even was told something accurate.

Well and good, but there's one thing missing: how does Joe know that it's time to change all of his passwords? The Register sums things up thusly:

But to fully clean up the problem, admins of at-risk servers should generate new public-private key pairs, destroy their session cookies, and update their SSL certificates before telling users to change every potentially compromised password on the vulnerable systems.
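
On the "destroy their session cookies" point, one common pattern makes that a one-line operation: if session cookies are HMAC-signed with a server-side secret, rotating that secret invalidates every cookie issued before the patch. A minimal, hypothetical Python sketch (not how any particular site actually does it):

```python
# Hypothetical illustration: rotating the signing secret "destroys" all
# previously issued session cookies, forcing users to log in again.
import hashlib
import hmac
import secrets

SESSION_SECRET = secrets.token_bytes(32)        # rotate this after patching

def sign_session(user_id: str) -> str:
    mac = hmac.new(SESSION_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}:{mac}"

def verify_session(cookie: str) -> bool:
    user_id, _, mac = cookie.partition(":")
    expected = hmac.new(SESSION_SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(mac, expected)

cookie = sign_session("alice")
print(verify_session(cookie))                   # True: secret unchanged
SESSION_SECRET = secrets.token_bytes(32)        # rotate the secret
print(verify_session(cookie))                   # False: old cookie rejected
```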

I have logins and passwords on probably 50 to 75 sites. To date not one has e-mailed me to say "Hey, it's all fixed, change your password!" Likewise none of them seems to have posted a similar notice on their log-in page. Does anyone else feel like they're left hanging?

Reverse Heartbleed Client Vulnerability 8 comments

From Testing for reverse Heartbleed courtesy of Schneier's blog:

"Anything that speaks TLS using OpenSSL is potentially vulnerable, but there are two main classes of client apps that are worth mentioning:

  1. Traditional clients are things like web browsers, apps that use HTTP APIs [snip]
  2. Open agents are clients that can be driven by an attacker but don't reside on an attacker's machine. If you can direct some remote application to fetch a URL on your behalf, then you could theoretically attack that application. The web is full of applications that accept URLs and do something with them; any of these have the potential to be vulnerable [snip]"

The main conclusion so far is that one has to purge all flawed versions of OpenSSL from all computers: server or client makes no real difference, and firewalls make no real difference either, since the bug works in both directions, inbound and outbound.

There is also a Reverse Heartbleed Tester.
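
Following on from the conclusion above (purge flawed OpenSSL everywhere, clients included), a quick first pass is to ask which OpenSSL your own tools are linked against. The sketch below reports the library Python itself uses; note that many distributions backported the fix without changing the version letter, so a match here is only a hint, not a verdict.

```python
# First-pass check: which OpenSSL is this Python linked against, and does the
# version fall in the affected range (1.0.1 through 1.0.1f)? Distros that
# backported the fix without bumping the letter will still show as "affected".
import ssl

print(ssl.OPENSSL_VERSION)                      # e.g. "OpenSSL 1.0.1f 6 Jan 2014"

major, minor, fix, patch, _status = ssl.OPENSSL_VERSION_INFO
# In OPENSSL_VERSION_INFO the patch letters a..f map to 1..6; 1.0.1g is 7.
in_affected_range = (major, minor, fix) == (1, 0, 1) and patch < 7
print("in the affected 1.0.1 range" if in_affected_range else "outside the affected range")
```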

OpenSSL to Get Funding 29 comments

It's often said that "you get what you pay for", but when it comes to free software, this doesn't apply. You often get a lot more. However, you do get what someone pays for. Software development takes time and money, and without substantial donations, sponsorship, etc., a free-software project will be limited to what volunteers can achieve in their own time.

According to an article in Ars Technica, the security software OpenSSL has one full-time employee and receives about $2000 a year in donations. It's therefore not surprising that bugs aren't always caught before they cause problems.

  • (Score: 3, Insightful) by Lagg on Friday May 09 2014, @03:21PM

    by Lagg (105) on Friday May 09 2014, @03:21PM (#41281) Homepage Journal

    I won't complain about SN milking this stuff, because it's just people submitting what they think might spur interesting discussion, but I will complain about the shameless milking that the sources of these articles are doing.

    Yes. I work in what you could call IT (I'm a freelance programmer but I end up doing everything at some point). Know what we did? Updated our packages, updated our openssl install and in some cases made new certificates. Then afterwards we got on with things that were actually important and more importantly our lives. Which is what these article authors are clearly not doing.

    Here's the thing. Heartbleed is a bug. A very serious security bug but a bug nonetheless. It's one of millions of its kind. It will probably live longer than I do just like any of the other horrible unpatched bugs on many systems that are not properly maintained. But do we write articles continuously about that null dereference bug from 1986 even though in a lot of cases a bad dereference can cause a hell of a lot more harm than this can? No. Because they can't keep the hysteria alive that way.

    Maybe I should dig up one of said bugs and call it Pointergate and tell people how they can potentially set off an atom bomb by whistling lovingly to the pointer. These article authors disgust me (but that's nothing new from Arse Technica). But again the submitter and SN do not. This is what it's here for. Remember that.

    --
    http://lagg.me [lagg.me] 🗿
    • (Score: 2) by GreatAuntAnesthesia on Friday May 09 2014, @03:27PM

      by GreatAuntAnesthesia (3275) on Friday May 09 2014, @03:27PM (#41284) Journal

      > Maybe I should dig up one of said bugs and call it Pointergate

      Don't forget to draw it a cute little logo that the media can tack onto their articles.

    • (Score: 2) by Hairyfeet on Friday May 09 2014, @04:14PM

      by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Friday May 09 2014, @04:14PM (#41293) Journal

      The problem is what I call "zombie servers", and you'd be amazed how many of 'em are all over the net. For those that have never run into one, a zombie server is one which hasn't had an admin for at least 6 months; these machines never get patched, never get messed with, yet are out there waiting to be pwned (and most already are).

      I first learned of the zombie servers back when I was doing hired-gun work for larger businesses. I'd take an inventory to see what I had to work with, and it would never fail that I'd find some old box still running that had just been forgotten. Some had been an old email or file server that got left behind when they moved to a new service; sometimes it was a backend VPN or DB box abandoned when a project was canceled. In just about all cases, the ones who had set up the system were long gone.

      But you are gonna be seeing fallout from Heartbleed for years because of the zombies. I've seen NT 3.5 and ancient versions of RH and Debian; I bet if somebody did a survey of what exactly is out there, the number of old zombie servers still responding to requests would be staggering. It's just what happens when a corp gets huge: things fall through the cracks.

      --
      ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
      • (Score: 3, Informative) by frojack on Friday May 09 2014, @05:01PM

        by frojack (1554) on Friday May 09 2014, @05:01PM (#41316) Journal

        Zombies are often NOT totally forgotten, just incredibly reliable.

        Netware was famous for this. I've found Netware servers running at my customers' sites that they were using every day for either data storage or print serving, while they had just assumed the work was actually being done on the brand new server the last contractor installed. He had only migrated mail and half the printers, and left file storage on the old box.

        When I took my company's last Netware server down (because disk space was nearly exhausted) and replaced it with Linux many years ago, it had an uptime of four and a half years. I hated to shut it down.

        --
        No, you are mistaken. I've always had this sig.
        • (Score: 3, Informative) by Hairyfeet on Friday May 09 2014, @05:19PM

          by Hairyfeet (75) <bassbeast1968NO@SPAMgmail.com> on Friday May 09 2014, @05:19PM (#41322) Journal

          That isn't what I'm talking about, frojack; that's totally different. What you are talking about is a classic "if it ain't broke" situation, and if you want to go by that, I know plenty of places with old WinNT and Win2K boxes (not on the net, of course) that have been running some backend service for God knows how long without fail. If it ain't broke? DO NOT FIX IT.

          No, frojack, what I'm talking about is servers where the task they had to do has long since been moved to something else; it's just that somewhere along the line somebody forgot to pull the plug on the old system, so it just sits there waiting to be pwned. For a good example, look at the backend of some of the parked domains: you'll see that many of them are on some ancient box that hasn't been used or patched in forever, just sitting there with the default "your site goes here" page from like Apache 1. These systems were once upon a time useful, but like that old WinNT email box I found, their job moved elsewhere (to web-hosted email, years ago); someone must have said "we better leave this for a month, just in case something goes wrong with the new system" and then forgot about it. Look at the logs of a zombie and it's NOT being used for this or that old application; it's just gathering dust.

          --
          ACs are never seen so don't bother. Always ready to show SJWs for the racists they are.
    • (Score: 0) by Anonymous Coward on Friday May 09 2014, @06:52PM

      by Anonymous Coward on Friday May 09 2014, @06:52PM (#41352)

      But do we write articles continuously about that null dereference bug from 1986

      No but we write comments about Windows crashing because of the same thing (or lack of bounds checking, like this bug).

      But look, where were the "many eyes" in this? So many users of OpenSSL, so few actually combing through the code; code which also contains things like gotos jumping into if(0) and while(0) blocks. Remember HBGary Federal, the 'security' company whose server was owned by a simple SQL injection? Except HBGF was just a money sink instead of anything actually security-related. Either way, it seems ironic that a group focused on security would have such a basic flaw sit untouched for so long. In this case, apparently nobody with a mouth bothered to look at the code and wonder why you could request anything longer (or shorter, even) than the string you send it.

      • (Score: 2) by NCommander on Friday May 09 2014, @07:57PM

        by NCommander (2) Subscriber Badge <michael@casadevall.pro> on Friday May 09 2014, @07:57PM (#41370) Homepage Journal

        The thing is, SQL injections can be surprisingly hard to spot. I personally blame a lot of this on MySQL, which, as the Fisher-Price of databases, made it close to impossible to use stored procedures, triggers, or any sort of database functionality without embedding the SQL directly into the application layer. Since MySQL proved to be exceedingly popular, there's an entire generation of devs who feel all SQL and shit should be in the application logic.

        Sanitizing SQL is not as straightforward as most people seem to believe, and a lot of apps seem to prefer writing their own sanitization code vs. using something pre-provided by the DB. There are a bunch of edge cases most people miss, and it just takes missing one or two lines and boom, instant SQL injection.

        What surprises me is that it's not more common.

        --
        Still always moving
        • (Score: 2) by chromas on Saturday May 10 2014, @12:25AM

          by chromas (34) Subscriber Badge on Saturday May 10 2014, @12:25AM (#41442) Journal

          That's why god invented prepared statements. In-band signalling is the devil's work.
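
A hedged illustration of the point chromas and NCommander are making, using Python's built-in sqlite3 module (not SoylentNews's own code): with a parameterized query the value travels out of band, so there are no quoting rules to get wrong.

```python
# Parameterized query vs. string concatenation, using the sqlite3 stdlib module.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'hunter2')")

evil = "' OR '1'='1"

# Vulnerable: attacker-controlled text is spliced straight into the SQL.
leaky = conn.execute(
    "SELECT secret FROM users WHERE name = '" + evil + "'").fetchall()
print(leaky)   # [('hunter2',)] -- the injected OR clause matches every row

# Safe: the placeholder keeps the value out of the SQL text entirely.
safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (evil,)).fetchall()
print(safe)    # [] -- the literal string matches no user
```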

  • (Score: 1) by Freeman on Friday May 09 2014, @03:32PM

    by Freeman (732) on Friday May 09 2014, @03:32PM (#41285) Journal

    "How likely is it that we will need to repeat this cleanup effort?"

    Isn't that what a majority of IT time goes towards? The obvious answer should be, yes. It might not be a Huge, Critical, Oh Noes I lost Everything, bug, but there's always something.

    --
    Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 2, Insightful) by Zinho on Friday May 09 2014, @04:15PM

    by Zinho (759) on Friday May 09 2014, @04:15PM (#41294)

    Six years after the Conficker worm was first detected, we still have ~1% of all Windows computers [microsoft.com] on the Internet infected with it.

    Cleanup takes time, and (as stated by others) some system administrators have higher priorities than chasing down every reported exploit for their systems. These will be the same admins who don't apply patches to running systems because uptime is more important than staying on the bleeding edge - many of them even run Debian! Everyone's definition of "if it ain't broke" is just a little different...

    I will not be surprised if, in several years' time, there is still a horde of servers out there that remain vulnerable, ranging from social media (nothing of value lost) to banking/medical (a combination of apathy and incompetence).

    --
    "Space Exploration is not endless circles in low earth orbit." -Buzz Aldrin
  • (Score: 3, Insightful) by patric91 on Friday May 09 2014, @04:43PM

    by patric91 (2471) on Friday May 09 2014, @04:43PM (#41309)

    I consider myself a very technically sophisticated user and I'm concerned about this bug, but I don't lose any sleep over it.

    I have not installed any browser plug-ins to fight this, and I have not gone on a wild password-changing spree either.

    My reasoning is simple. It doesn't matter. I could change my passwords, but has the site been patched? No? Then my password change was a complete waste of time. What if their system is patched? Great, I've avoided this vulnerability. I then end up at another site on another day and I pick up a different bug or virus or mal-whatever web-drive-by-zero-day, my machine is compromised, and my (useless?) anti-virus is none the wiser. I then plug my thumb drive in and move the problem to my work network. Or one of my co-workers does, and then the bug travels back to my home computer. What if the malware authors are really good at what they do and the bug is now hiding in the firmware on my motherboard or NIC? Maybe it's a bad piece of code served up by an ad network; maybe it has nothing to do with my computer at all and they get my CC number from a fake swiper at the gas station.

    The only assumption that I work from is that my machine is compromised. I keep a good relationship with my local banker so that if there is a problem, and there has been in the past, they step in and, in essence, make it the insurance company's problem.

    This is not a perfect solution by any means, but I can't operate all of my computers from Live CDs and I refuse to do business by stone tablets, so here I find myself. Besides, if I don't watch out, it's going to be heart disease that gets me in the end, not a digital "virus".

    Just my two cents. Thanks for listening (reading).

    --
    Armchair Polymath
  • (Score: 1) by bill_mcgonigle on Saturday May 10 2014, @06:15AM

    by bill_mcgonigle (1105) on Saturday May 10 2014, @06:15AM (#41507)

    Ran updates, same as for most bugs. One vendor-managed server doesn't have an update yet ("real soon, promise!"), so the certificate there is the old one. The HTTPS connections to that machine all come from internal sources though, so external attacks aren't a huge risk (it speaks a different protocol to the world). All the other servers got new certificates. Took the opportunity to upgrade to a real authority, since 5-yr DV wildcard certs are just $150 now.

    The only real pain in the process is the wide disparity in TLS configuration in software. This daemon wants a separate file for cert, key, and chain, this one wants the cert and chain in one PEM, this other one wants them all in a PEM, this one wants the CA cert while this other one uses the mozilla certificate store, etc. I wound up making a symlink tree with each apps having a directory with its tls dependencies just to keep things sane.