posted by Fnord666 on Tuesday June 02 2020, @12:50PM
from the about-time dept.

Dangerous SHA-1 crypto function will die in SSH linking millions of computers:

Developers of two open source code libraries for Secure Shell—the protocol millions of computers use to create encrypted connections to each other—are retiring the SHA-1 hashing algorithm, four months after researchers piled a final nail in its coffin.

The moves, announced in release notes and a code update for OpenSSH and libssh respectively, mean that SHA-1 will no longer be a means for digitally signing encryption keys that prevent the monitoring or manipulating of data passing between two computers connected by SSH—the common abbreviation for Secure Shell. (Wednesday's release notes concerning SHA-1 deprecation in OpenSSH repeated word for word what developers put in February release notes, but few people seemed to notice the planned change until now.)

Cryptographic hash functions generate a long string of characters that are known as a hash digest. Theoretically, the digests are supposed to be unique for every file, message, or other input fed into the function. Practically speaking, digest collisions must be mathematically infeasible given the performance capabilities of available computing resources. In recent years, a host of software and services have stopped using SHA-1 after researchers demonstrated practical ways for attackers to forge digital signatures that use SHA-1. The unanimous agreement among experts is that it's no longer safe in almost all security contexts.
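
For a concrete sense of what a digest looks like, the standard coreutils tools will print one for any input. A minimal illustration (the input string is arbitrary):

    # The same input hashed with SHA-1 and with SHA-256. Digests are
    # fixed-length and change completely if the input changes; SHA-1's
    # problem is that attackers can now craft two *different* inputs
    # that share one SHA-1 digest (a collision).
    $ printf 'hello world' | sha1sum
    2aae6c35c94fcfb415dbe95f408b9ce91ee846ed  -
    $ printf 'hello world' | sha256sum
    b94d27b9934d3e08a52e52d7da7dabfac484efe37a5380ee9088f7ace2efcde9  -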

"Its a chainsaw in a nursery," security researcher Kenn White said of the hash function, which made its debut in 1995.

[...] The final death knell for SHA-1 sounded in January, when researchers unveiled an even more powerful collision attack that cost as little as $45,000. Known as a chosen prefix collision, it allowed attackers to impersonate a target of their choosing, as was the case in the MD5 attack against Microsoft's infrastructure.

It was in this context that OpenSSH developers wrote in release notes published on Wednesday:

It is now possible to perform chosen-prefix attacks against the SHA-1 algorithm for less than USD$50K. For this reason, we will be disabling the "ssh-rsa" public key signature algorithm by default in a near-future release.

This algorithm is unfortunately still used widely despite the existence of better alternatives, being the only remaining public key signature algorithm specified by the original SSH RFCs.
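
To see whether your own systems still depend on ssh-rsa, recent OpenSSH builds can list the signature algorithms they support, and a verbose connection shows what actually gets negotiated. A minimal sketch (the host name is a placeholder; output details vary by version):

    # List the signature algorithms this OpenSSH client supports.
    $ ssh -Q sig
    # Watch which host key / signature algorithms are negotiated with a
    # particular server (look at the "host key" lines in the debug output).
    $ ssh -v user@legacy-host.example.com true 2>&1 | grep -i 'host key'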

[...] In an email, Gaëtan Leurent, an Inria France researcher and one of the co-authors of the January research, said he didn't expect OpenSSH developers to implement the deprecations quickly. He wrote:

When they completely disable SHA-1, it will become impossible to connect from a recent OpenSSH to a device with an old SSH server, but they will probably take gradual steps (with big warnings) before that. Also, embedded systems with an SSH access that have not been updated in many years probably have a lot of security issues, so maybe it's not too bad to disrupt them...

In any case, I am quite happy with this move, this is exactly what we wanted to achieve :-)
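
For the old embedded devices Leurent mentions, the usual escape hatch once ssh-rsa is disabled by default is a per-host override rather than a global downgrade. A sketch of what that looks like in ~/.ssh/config, with a placeholder host name (on older OpenSSH releases the second option is spelled PubkeyAcceptedKeyTypes):

    # ~/.ssh/config -- re-enable SHA-1-signed RSA only for one old device.
    Host old-switch.example.com
        HostKeyAlgorithms +ssh-rsa
        PubkeyAcceptedAlgorithms +ssh-rsa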


  • (Score: 5, Insightful) by bzipitidoo on Tuesday June 02 2020, @03:11PM (7 children)

    by bzipitidoo (4388) on Tuesday June 02 2020, @03:11PM (#1002187) Journal

    Yeah, individual users are expected to let their computers automatically update, but organizations never want to "waste" effort on maintenance.

    Having done sysadmin work, I understand that you want to be very conservative about updating those servers. Never update just because a new version is available. Only update to fix an issue, and only if that issue is causing a problem or is at high risk of becoming one soon. The latter category is the hardest-- you can't know the odds. It might therefore seem best to just update everything, immediately. However, I've seen too many problems from updates themselves. The risk that an update causes a problem is also pretty high, and waiting reduces that risk greatly.

    For example, Apache 2.4.12 broke the crap out of data compression. The web site still worked, but suddenly twice as much data was being pushed out. We stayed with 2.4.11, then jumped to 2.4.13 once it was clear it fixed the data compression regression of 2.4.12 without introducing major regressions of its own. We did want to move eventually-- 2.4.12 had introduced some other new feature we needed-- but because of that regression, we opted to wait.
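
    (A regression like that is easy to spot from the outside, incidentally. Something along these lines, with a placeholder URL, shows whether the server is actually compressing responses:)

        # Ask for gzip and check the response headers; a healthy
        # mod_deflate setup should answer "Content-Encoding: gzip"
        # for compressible content types.
        $ curl -s -H 'Accept-Encoding: gzip' -D - -o /dev/null \
              https://www.example.com/ | grep -i '^content-encoding'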

    Major version Linux kernel updates are risky. Moving from 5.6.14 to 5.6.15 is fairly safe, but moving to 5.7, no, bad idea. Distros themselves are quite conservative, keeping old kernel versions installed and bootable, even on minor version updates, in case anything goes wrong with a new version.

    When I see Android, iOS, or Windows asking the user to accept a complete upgrade of the system, my teeth clench. The process can so easily brick that tablet or phone. I had an Android phone that came with version 5, and a promise that there would be an update to 6. Version 6 had a great feature I very much wanted: the ability to save voice mail. I had that for a few weeks after the upgrade to 6, then one day, after a minor update, it just vanished. It took me a while to understand what had happened; at first I thought I had simply misremembered where that feature was. They didn't gray it out either, so that you'd know you'd found the functionality even if you couldn't use it for some reason. No, they made its absence seem natural. Like not just amputating a finger, but also sculpting the hand so it looks perfectly natural that it has only 3 fingers and a thumb. I don't know if it was the manufacturer or the service provider who yanked back that feature, but it really infuriated me. I changed phones and service. I have yet to experience the printer update that disables 3rd party cartridges. But then, I no longer update my printer, for fear of just that.

    So, that's another big problem with updates: the dishonest update that isn't really an update. The vast majority of such offenses are committed with proprietary systems. A bad update is usually still an honest attempt at updating. Dishonest updates typically have as their goal extortion or exploitation of the users. Hardly better than the tire shop that spreads nails on the highway.

  • (Score: 2) by DannyB on Tuesday June 02 2020, @03:56PM (4 children)

    by DannyB (5839) Subscriber Badge on Tuesday June 02 2020, @03:56PM (#1002198) Journal

    I understand that you want to be very conservative about updating those servers. Never update just because a new version is available. Only update to fix an issue

    I faced this issue with small unimportant servers during the 2000s. About a decade ago I had to deal with a new application server (that I built).

    (Aside: the tech stack on the server is Java / Apache Tomcat, plus many Java libraries. While this is proven to run on Linux, the powers that be put it on Windows because CIT maintains that.)

    I began to face two competing demands:
    1. I absolutely want to be conservative about updating money making production servers. (many thousands of logins)
    2. I absolutely DO want to keep things up to date and not fall behind, into bit rot and decay, and incur huge technical debt.

    Several things helped.

    First realize that all developers have a staging / testing environment to work with. Some developers additionally have a separate production environment which customers use but is separate from the testing environment. (re-read that if you don't get the joke.)

    Do keep things up to date, but conservatively and in a controlled manner. Things should constantly be tested (by the QA department, but I also use the code myself as I test it, build new features, etc.). Thus if a new OS update creates a problem, you know about it right away (unlikely to happen, in my experience). If a new Java or a new Tomcat update causes a problem, you know about it right away. (Unlikely, but it has happened.) Knowing about the problem allows you to fix it long before it goes to production. Third party library upgrades can also cause problems, but usually don't.

    The biggest insights for production:

    I do the OS updates regularly, but I defer the actual rebooting until a time of my choosing, usually scheduled in the wee AM hours when everyone in our customer base is asleep. Those reboots are rare, and they have never caused a problem. A major thing that made this possible was the database connection pool library becoming able to deal with being disconnected from, and reconnected to, the database server.

    Make the application software updates fast.

    The application, the Java runtime, and the Tomcat server runtime can all be updated together using a single short script, with total downtime of about 30 seconds or less. It's a tried and true procedure at this point: it was tested on the staging server, and it gets exercised frequently on 12 local office servers as daily build updates are applied. Basically, the script that upgrades any combination of App / Java / Tomcat is known to work, and any changes to it get tested well in advance.
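
    For the curious, a rough sketch of the shape of such a script, with made-up paths and version numbers -- the real layout is obviously site-specific. The idea is that each component sits behind a symlink, so an upgrade is mostly a symlink flip plus a restart, and a rollback is the same flip in reverse:

        #!/bin/sh
        # Hypothetical upgrade script: point the "current" symlinks at the
        # new versions, drop in the new application build, restart Tomcat.
        # Total downtime is roughly one Tomcat restart.
        set -e
        ln -sfn /opt/jdk-17.0.11   /opt/jdk-current
        ln -sfn /opt/tomcat-9.0.89 /opt/tomcat-current
        cp /srv/builds/app-latest.war /opt/tomcat-current/webapps/ROOT.war
        systemctl restart tomcat   # or whatever service manager the host uses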

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
    • (Score: 2) by bzipitidoo on Tuesday June 02 2020, @05:08PM (3 children)

      by bzipitidoo (4388) on Tuesday June 02 2020, @05:08PM (#1002236) Journal

      > I defer the actual rebooting

      That too has risks. I've had Firefox crash because a library it was using was updated.

      I forgot to mention that the ease of rolling back is of course an important consideration. Is going back and forth as easy as toggling a switch, with a near instantaneous response, or will it take minutes, or hours, or be impossible because the update is too big to keep the old version around?

      A fun surprise is the time bomb. Everything seems okay, you have a long uptime, but for weeks the machine has been unable to boot thanks to some problem that crept in, and no one realized. Then comes the day you have to reboot for some reason or other, probably a vital update, and the machine doesn't come back up. Something did the equivalent of "dd if=/dev/zero of=/dev/sda bs=512 count=1" at some point. In one of those cases, the problem was that two machines had been given the same IP address. All seemed okay, until one day the wrong machine got the connection. I did not have to make a trip to the data center, because each machine had been given 2 IP addresses, and I was able to log in remotely on the other address and find and fix the problem.
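
      (Duplicate IPs are cheap to check for proactively, at least on Linux. With iputils arping in duplicate-address-detection mode -- the interface and address below are placeholders -- you get a non-zero exit if anything else on the segment already answers for the address:)

          # Exit status is non-zero if another host replies for this IP.
          $ arping -D -I eth0 -c 3 10.0.0.5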

      Still another fun time bomb was the machine that was being overtaxed and was slowly falling behind. It was supposed to generate web site statistics every day, but the process had reached the point where it was taking slightly longer than a day to complete. It took me 2 weeks to get control of that situation: they very much wanted the problem fixed with minimal disruption, so I had to let the process keep running while I nibbled at the edges, gradually introducing more efficient subprocesses and slowly transporting the data to another machine for a week's worth of processing. The basic problem was that the original process was simply reprocessing all the data since the beginning, every day: chew through 100 days of data on the 100th day, 200 days' worth on the 200th day, etc. Totally unnecessary to grind through the old data over and over like that, but the guy who wrote it was, as usual, under lots of pressure to get it up and running ASAP. Once I got the change in place to process only yesterday's data and add it to the existing work of previous days, suddenly that server had all this free processing time available, as one day's worth of data took only 5 minutes.
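
      To make the shape of that fix concrete, here is a toy sketch, with made-up paths, of a nightly job that parses only yesterday's log and folds the result into running totals instead of re-reading the whole history every night:

          #!/bin/sh
          # Hypothetical nightly job: only yesterday's access log is parsed,
          # and its per-page counts are merged into small daily summaries,
          # which are then combined into the cumulative totals.
          set -e
          day=$(date -d yesterday +%F)
          awk '{ print $7 }' "/var/log/httpd/access-$day.log" \
              | sort | uniq -c > "/srv/stats/daily-$day.txt"
          cat /srv/stats/daily-*.txt \
              | awk '{ c[$2] += $1 } END { for (p in c) print c[p], p }' \
              | sort -rn > /srv/stats/totals.txt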

      • (Score: 2) by DannyB on Tuesday June 02 2020, @06:12PM (2 children)

        by DannyB (5839) Subscriber Badge on Tuesday June 02 2020, @06:12PM (#1002274) Journal

        I've had Firefox crash because a library it was using was updated.

        That is an interesting problem, but I don't run Firefox on servers. That problem could happen to any program if underlying libraries are replaced under your nose.

        I forgot to mention that the ease of rolling back is of course an important consideration.

        Yes.

        I mostly think in terms of rolling back Java, Tomcat or some third party library. But this type of problem is caught long before going to production. I would be surprised if an OS upgrade could cause a problem, since Java is a layer in between, and Java is "almost" an OS. Everything ultimately goes through Java. File I/O, Network I/O, Threads. That's why it works on very different OSes.

        I'm thankful that in ten years I've managed to keep everything up to date: not bleeding edge, but not too far behind the leading edge. Letting some things get way out of date is a ticking time bomb. You go to upgrade, leapfrogging multiple versions, and hit major problems, even if those problems never show up in production.

        Basically: testing, and production doesn't get new things that haven't been tested.

        --
        To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
        • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @10:17PM (1 child)

          by Anonymous Coward on Tuesday June 02 2020, @10:17PM (#1002428)

          That problem could happen to any program if underlying libraries are replaced under your nose.

          Only if the application uses dlopen() or the OS is immature. On mature operating systems, linked shared libraries are kept open while the application is running, and that link is maintained even if the filesystem entry gets removed (that's why it's called unlink, not delete).
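
          You can watch this happen on Linux: once a library a running process has loaded is replaced on disk, the old unlinked copy stays mapped until the process exits (the PID below is a placeholder):

              # Mapped shared objects whose directory entries are gone are
              # tagged "(deleted)" until the process restarts.
              $ grep deleted /proc/1234/maps
              # lsof reports the same thing:
              $ lsof -p 1234 | grep deleted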

          • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @10:51PM

            by Anonymous Coward on Tuesday June 02 2020, @10:51PM (#1002472)

            And? Firefox, Apache, and other daemons, to name a few, use multiple processes working in concert as a single program. If different processes of the program start at different times, you can easily end up with them using different versions of the same library. It isn't uncommon for those processes to end up conflicting or crashing.

  • (Score: 2) by RS3 on Tuesday June 02 2020, @05:05PM

    by RS3 (6367) on Tuesday June 02 2020, @05:05PM (#1002234)

    The worst, to me anyway, are bundled update packages that bring critical fixes, but also remove features, brick your 3rd-party ink, etc.

    Not sure what the fix is, but it seems we need more laws (ugh, yuck). Much of the problem stems from the corporate attitude that you don't own the software, or increasingly even the hardware (John Deere, for one); you're merely licensing its use, and they can change anything they want at any time.

    Right to Repair needs to include unbundled software updates.

  • (Score: 2) by darkfeline on Tuesday June 02 2020, @10:46PM

    by darkfeline (1030) on Tuesday June 02 2020, @10:46PM (#1002462) Homepage

    I can't speak for Windows (and it shows in their track record), but both Android and iOS do extensive testing. Think datacenter rooms filled with test devices all having new builds of Android/iOS constantly being installed on them and running integration tests. It helps guarantee a baseline of stability.

    If you had racks of servers constantly being reimaged and integration tested with new versions of software, you'd feel pretty good about updates too.

    --
    Join the SDF Public Access UNIX System today!