
posted by Fnord666 on Tuesday June 02 2020, @12:50PM
from the about-time dept.

Dangerous SHA-1 crypto function will die in SSH linking millions of computers:

Developers of two open source code libraries for Secure Shell—the protocol millions of computers use to create encrypted connections to each other—are retiring the SHA-1 hashing algorithm, four months after researchers drove a final nail into its coffin.

The moves, announced in release notes and a code update for OpenSSH and libssh respectively, mean that SHA-1 will no longer be a means for digitally signing encryption keys that prevent the monitoring or manipulating of data passing between two computers connected by SSH—the common abbreviation for Secure Shell. (Wednesday's release notes concerning SHA-1 deprecation in OpenSSH repeated word for word what developers put in February release notes, but few people seemed to notice the planned change until now.)

Cryptographic hash functions generate a long string of characters that are known as a hash digest. Theoretically, the digests are supposed to be unique for every file, message, or other input fed into the function. Practically speaking, digest collisions must be mathematically infeasible given the performance capabilities of available computing resources. In recent years, a host of software and services have stopped using SHA-1 after researchers demonstrated practical ways for attackers to forge digital signatures that use SHA-1. The unanimous agreement among experts is that it's no longer safe in almost all security contexts.
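
As a quick illustration, most Unix-like systems ship a sha1sum utility that prints the 40-hex-character SHA-1 digest of whatever it is fed; here it hashes the standard "abc" test vector (a sketch, assuming GNU coreutils or similar is installed):

        printf 'abc' | sha1sum
        a9993e364706816aba3e25717850c26c9cd0d89d  -

A collision is two different inputs that yield the same digest; SHA-1's problem is that such pairs can now be manufactured deliberately at practical cost.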

"Its a chainsaw in a nursery," security researcher Kenn White said of the hash function, which made its debut in 1995.

[...] The final death knell for SHA-1 sounded in January, when researchers unveiled an even more powerful collision attack that cost as little as $45,000. Known as a chosen-prefix collision, it allowed attackers to impersonate a target of their choosing, as was the case in the MD5 attack against Microsoft's infrastructure.

It was in this context that OpenSSH developers wrote in release notes published on Wednesday:

It is now possible to perform chosen-prefix attacks against the SHA-1 algorithm for less than USD$50K. For this reason, we will be disabling the "ssh-rsa" public key signature algorithm by default in a near-future release.

This algorithm is unfortunately still used widely despite the existence of better alternatives, being the only remaining public key signature algorithm specified by the original SSH RFCs.

[...] In an email, Gaëtan Leurent, a researcher at Inria in France and one of the co-authors of the January research, said he didn't expect OpenSSH developers to implement the deprecations quickly. He wrote:

When they completely disable SHA-1, it will become impossible to connect from a recent OpenSSH to a device with an old SSH server, but they will probably take gradual steps (with big warnings) before that. Also, embedded systems with an SSH access that have not been updated in many years probably have a lot of security issues, so maybe it's not too bad to disrupt them...

In any case, I am quite happy with this move, this is exactly what we wanted to achieve :-)


Original Submission

Related Stories

Timeline to Remove DSA Support from OpenSSH

OpenSSH developer Damien Miller has announced plans to remove support for DSA keys from OpenSSH in the near future. His announcement describes the rationale, process, and proposed timeline.

The next release of OpenSSH (due around 2024/03) will make DSA optional at compile time, but still enable it by default. Users and downstream distributors of OpenSSH may use this option to explore the impact of DSA removal in their environments, or to hard-deprecate it early if they desire.

Around 2024/06, a release of OpenSSH will change this compile-time default to disable DSA. It may still be enabled by users/distributors if needed.

Finally, in the first OpenSSH release after 2025/01/01 the DSA code will be removed entirely.

In summary:

2024/01 - this announcement
2024/03 (estimated) - DSA compile-time optional, enabled by default
2024/06 (estimated) - DSA compile-time optional, *disabled* by default
2025/01 (estimated) - DSA is removed from OpenSSH

Very few will notice this change. However, for those few to whom this matters, the effects are major.

Previously:
(2021) scp Will Be Replaced With sftp Soon
(2020) SHA-1 to be Disabled in OpenSSH and libssh
(2019) How SSH Key Shielding Works
(2016) Upgrade Your SSH Keys
(2014) OpenSSH No Longer has to Depend on OpenSSL


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @01:05PM (15 children)

    by Anonymous Coward on Tuesday June 02 2020, @01:05PM (#1002157)

    Seemed the only thing that would get people off SHA1 was completely disabling it.

    • (Score: 4, Insightful) by PiMuNu on Tuesday June 02 2020, @02:39PM (6 children)

      by PiMuNu (3823) on Tuesday June 02 2020, @02:39PM (#1002181)

      > Seemed the only thing that would get people off SHA1 was completely disabling it.

      On the other hand, if a user needs to connect to a legacy system, where updating to a more recent openssh implementation is not feasible, then forcing that user to install and run an old version of openssh (presumably unpatched) can make the world less secure.

      • (Score: 1, Touché) by Anonymous Coward on Tuesday June 02 2020, @04:34PM (1 child)

        by Anonymous Coward on Tuesday June 02 2020, @04:34PM (#1002214)

        Not as insecure as leaving the legacy system connected to the internet with an old and probably already owned ssh.

        • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @10:09PM

          by Anonymous Coward on Tuesday June 02 2020, @10:09PM (#1002421)

          And what part in the GP's comment led you to the conclusion that the legacy system was internet-connected?

      • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @05:32PM (3 children)

        by Anonymous Coward on Tuesday June 02 2020, @05:32PM (#1002249)

        On the other hand, if a user needs to connect to a legacy system, where updating to a more recent openssh implementation is not feasible, then forcing that user to install and run an old version of openssh (presumably unpatched) can make the world less secure.

I'm sorry, but no. You're conflating the client side with the server side, and your comment reads more like an argument about the security of the connecting system.

        If you need to connect to a legacy system, then you can still keep a separate standalone ssh client binary in userspace (named something different) on your system simply for connecting to that "legacy system" without having to compromise the server side (sshd) security of your client machine. All other outbound client connections to non-legacy systems would use your default patched and maintained ssh client. As another commenter already pointed out, presumably the legacy system wouldn't be exposed to the open internet, and even internally there should be additional restrictions.

Nobody is forcing anyone to 'install and run an old version' of sshd on their client system.

        • (Score: 2) by sjames on Tuesday June 02 2020, @08:49PM

          by sjames (2882) on Tuesday June 02 2020, @08:49PM (#1002346) Journal

          The correct solution is to make the client prefer something better than SHA1 if available, and if not, use SHA1 and issue a dire warning. On the server side, no longer offer SHA1 as an option. That way, I can use the new and maintained ssh client for everything including the embedded device with no hope for a firmware update and still the sshd won't be exposing me to any unnecessary risks.

          It absolutely is a bad thing to disable an old embedded device with no firmware updates, especially if it's been put on a firewalled internal network (or a private admin VLAN). It may be a long way away and hard to access.

        • (Score: 1, Insightful) by Anonymous Coward on Tuesday June 02 2020, @10:43PM (1 child)

          by Anonymous Coward on Tuesday June 02 2020, @10:43PM (#1002459)

          On the other hand, if a user needs to connect to a legacy system, where updating to a more recent openssh implementation is not feasible, then forcing that user to install and run an old version of openssh (presumably unpatched) can make the world less secure.

I'm sorry, but no. You're conflating the client side with the server side, and your comment reads more like an argument about the security of the connecting system.

          If you need to connect to a legacy system, then you can still keep a separate standalone ssh client binary in userspace (named something different) on your system simply for connecting to that "legacy system" without having to compromise the server side (sshd) security of your client machine.

          All this is moot because nobody removed SHA-1 support from OpenSSH.

          If you want to connect to an old server with no support for anything else you can simply use the latest version and enable SHA-1 when connecting to that server, either on the command line or in your ~/.ssh/config file.
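
For example, something along these lines should work (the host name is just a placeholder), either as a one-off on the command line:

        ssh -oHostKeyAlgorithms=+ssh-rsa -oPubkeyAcceptedKeyTypes=+ssh-rsa user@legacyhost

or as a per-host stanza in ~/.ssh/config:

        Host legacyhost
            HostKeyAlgorithms +ssh-rsa
            PubkeyAcceptedKeyTypes +ssh-rsa

The leading "+" appends ssh-rsa to the default list for that host only, so connections to everything else keep the stricter defaults.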

          • (Score: 0) by Anonymous Coward on Wednesday June 03 2020, @02:00AM

            by Anonymous Coward on Wednesday June 03 2020, @02:00AM (#1002544)

            Indeed

            https://www.openssh.com/releasenotes.html [openssh.com]

            To check whether a server is using the weak ssh-rsa public key algorithm, for host authentication, try to connect to it after removing the ssh-rsa algorithm from ssh(1)'s allowed list:

                    ssh -oHostKeyAlgorithms=-ssh-rsa user@host

            If the host key verification fails and no other supported host key types are available, the server software on that host should be upgraded.

    • (Score: 5, Insightful) by bzipitidoo on Tuesday June 02 2020, @03:11PM (7 children)

      by bzipitidoo (4388) on Tuesday June 02 2020, @03:11PM (#1002187) Journal

      Yeah, individual users are expected to let their computers automatically update, but organizations never want to "waste" effort on maintenance.

      Having done sysadmin work, I understand that you want to be very conservative about updating those servers. Never update just because a new version is available. Only update to fix an issue, and only if that issue is causing a problem or is at high risk of becoming a problem, soon. That latter category is the hardest-- you can't know the odds. Therefore it might seem best to just update everything, immediately. However, I've seen too many problems from updates themselves. The risk that an update causes a problem is also pretty high. Waiting reduces that risk greatly.

For example, Apache 2.4.12 broke the crap out of data compression. The web site still worked, but suddenly twice as much data was being pushed out. We stayed with 2.4.11, then jumped to 2.4.13 when it was clear it had fixes for the data compression regressions of 2.4.12, and no major regressions itself. We had to upgrade eventually, since 2.4.12 had introduced some other new feature we needed, but because of that regression we opted to wait.

      Major version Linux kernel updates are risky. Moving from 5.6.14 to 5.6.15 is fairly safe, but moving to 5.7, no, bad idea. Distros themselves are quite conservative, keeping old kernel versions installed and bootable, even on minor version updates, in case anything goes wrong with a new version.

When I see Android, iOS or Windows asking the user to accept a complete upgrade of the system, it makes my teeth clench. The process can so easily brick that tablet or phone. I had an Android phone that came with version 5, and a promise that there would be an update to 6. 6 had a great feature I very much wanted, the ability to save voice mail. I had that for a few weeks after the upgrade to 6, then one day, after a minor update, it just vanished. Took me a while to understand what had happened. I thought at first I had simply misremembered where that feature was. They didn't even gray it out, which would at least have told you that you'd found the feature even if you couldn't use it for some reason. No, they made it seem natural. Like not just amputating a finger, but also sculpting the hand so it looks perfectly natural that it has only 3 fingers and a thumb. I don't know if it was the manufacturer or the service provider who yanked back that feature, but it really infuriated me. I changed phones and service. I have yet to experience the printer update that disables 3rd party cartridges. But then, I no longer update my printer, for fear of just that.

      So, that's another big problem with updates: the dishonest update that isn't really an update. The vast majority of such offenses are committed with proprietary systems. A bad update is usually still an honest attempt at updating. Dishonest updates typically have as their goal extortion or exploitation of the users. Hardly better than the tire shop that spreads nails on the highway.

      • (Score: 2) by DannyB on Tuesday June 02 2020, @03:56PM (4 children)

        by DannyB (5839) Subscriber Badge on Tuesday June 02 2020, @03:56PM (#1002198) Journal

        I understand that you want to be very conservative about updating those servers. Never update just because a new version is available. Only update to fix an issue

        I faced this issue with small unimportant servers during the 2000s. About a decade ago I had to deal with a new application server (that I built).

        (Aside: tech stack on server is Java / Apache Tomcat, many Java libraries. While this is proven to run on Linux, the powers that be put it on Windows because CIT maintains that.)

I began to face two competing demands.
        1. I absolutely want to be conservative about updating money making production servers. (many thousands of logins)
        2. I absolutely DO want to keep things up to date and not fall behind, into bit rot and decay, and incur huge technical debt.

        Several things helped.

        First realize that all developers have a staging / testing environment to work with. Some developers additionally have a separate production environment which customers use but is separate from the testing environment. (re-read that if you don't get the joke.)

Do keep things up to date, but conservatively and in a controlled manner. Things should constantly be tested. (QA department, but I also use the code myself as I test it, build new features, etc.) Thus if a new OS update creates a problem, you know about it right away (unlikely to happen in my experience). If a new Java or a new Tomcat update causes a problem you know about it right away. (Unlikely, but it has happened.) Knowing about the problem allows you to fix it long before it goes to production. Third-party library upgrades can also cause problems, though they usually don't.

        The biggest insights for production:

I do the OS updates regularly, but I defer the actual reboot until a time of my choosing, usually scheduled in the wee AM hours when everyone within our customer base is asleep. This is rare, and it has never caused a problem. A major thing that made this possible was when the database connection pool library became able to deal with being disconnected from and reconnected to the database server.

        Make the application software updates fast.

The application, the Java runtime, and the Tomcat server runtime are or can be updated together using a single short script. Total downtime is about 30 seconds or less. It's a tried and true procedure at this point, and was tested on the staging server. It is also tested frequently on 12 local office servers as daily build updates are applied. Basically, the script that upgrades any combination of App / Java / Tomcat is known to work. Any changes get tested well in advance.
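
For what it's worth, on a Linux box the shape of such a script is basically stop, swap symlinks, start; ours runs on Windows and differs in detail, and the paths and versions below are invented for illustration:

        # rough sketch only: point "current" symlinks at the new versions and restart
        systemctl stop tomcat
        ln -sfn /opt/java/jdk-11.0.7  /opt/java/current
        ln -sfn /opt/tomcat/9.0.35    /opt/tomcat/current
        cp /srv/builds/app-latest.war /opt/tomcat/current/webapps/app.war
        systemctl start tomcat

Because the swap is just relinking directories and copying one WAR, the downtime is however long Tomcat takes to stop and start.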

        --
        People today are educated enough to repeat what they are taught but not to question what they are taught.
        • (Score: 2) by bzipitidoo on Tuesday June 02 2020, @05:08PM (3 children)

          by bzipitidoo (4388) on Tuesday June 02 2020, @05:08PM (#1002236) Journal

> I defer the actual reboot

          That too has risks. I've had Firefox crash because a library it was using was updated.

I forgot to mention that the ease of rolling back is of course an important consideration. Is going back and forth as easy as toggling a switch and getting a near instantaneous response, or will it take minutes, or hours, or be impossible because the update is too big to keep the old version around?

          A fun surprise is the time bomb. Everything seems okay, have a long uptime, but for weeks, the machine has been unable to boot thanks to some problem that crept in, and no one realized. Then comes the day you have to reboot for some reason or other, probably a vital update, and the machine doesn't come back up. Something did the equivalent of "dd if=/dev/zero of=/dev/sda bs=512 count=1" at some point. In one of those cases, the problem was that two machines had been given the same IP address. All seemed okay, until one day, the wrong machine got the connection. I did not have to make a trip to the data center, because they'd each been given 2 IP addresses, and I was able to login remotely on the other addresses and find and fix the problem.

Still another fun time bomb was the machine that was being overtaxed and was slowly falling behind. It was supposed to generate web site statistics every day, but the process had reached the point where it was taking slightly longer than a day to complete. Took me 2 weeks to get control of that situation, since they very much wanted the problem fixed with minimal disruption, so I had to let the process keep running while I nibbled at the edges, gradually introducing more efficient subprocesses, and slowly transporting the data to another machine for a week's worth of processing. The basic problem was that the original process was simply processing all the data since the beginning, every day. Chew through 100 days of data on the 100th day, 200 days' worth of data on the 200th day, etc. Totally unnecessary to grind through the old data over and over like that, but the guy who did it was, as usual, under lots of pressure to get it up and running ASAP. Once I got the change in place to process only yesterday's data and add it to the existing work of previous days, suddenly that server had all this free processing time available, as one day's worth of data took only 5 minutes.
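
In spirit, the nightly job went from "reprocess everything since day one" to something shaped like this; the paths, log format, and awk field here are all invented for illustration:

        # hypothetical sketch: fold only yesterday's log into the running per-day results
        YESTERDAY=$(date -d yesterday +%F)
        zcat /var/log/httpd/access-$YESTERDAY.gz \
          | awk '{hits[$7]++} END {for (p in hits) print hits[p], p}' \
          > /srv/stats/daily-$YESTERDAY.txt

Each night only touches one day of data, so the run time stays flat instead of growing with the age of the site.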

          • (Score: 2) by DannyB on Tuesday June 02 2020, @06:12PM (2 children)

            by DannyB (5839) Subscriber Badge on Tuesday June 02 2020, @06:12PM (#1002274) Journal

            I've had Firefox crash because a library it was using was updated.

            That is an interesting problem, but I don't run Firefox on servers. That problem could happen to any program if underlying libraries are replaced under your nose.

            I forgot to mention that the ease of rolling back is of course an important consideration.

            Yes.

            I mostly think in terms of rolling back Java, Tomcat or some third party library. But this type of problem is caught long before going to production. I would be surprised if an OS upgrade could cause a problem, since Java is a layer in between, and Java is "almost" an OS. Everything ultimately goes through Java. File I/O, Network I/O, Threads. That's why it works on very different OSes.

I'm thankful that in ten years I've managed to keep everything up to date: not bleeding edge, but not too far behind the leading edge. Letting some things get way out of date is a ticking time bomb. You go to upgrade, leapfrogging multiple versions, and have major problems, even if those problems are not experienced in production.

            Basically: testing, and production doesn't get new things that haven't been tested.

            --
            People today are educated enough to repeat what they are taught but not to question what they are taught.
            • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @10:17PM (1 child)

              by Anonymous Coward on Tuesday June 02 2020, @10:17PM (#1002428)

              That problem could happen to any program if underlying libraries are replaced under your nose.

Only if the application uses dlopen() or the OS is immature. On mature operating systems, linked shared libraries are kept open while the application is running, and that link is maintained even if the filesystem entry gets removed (that's why it's called unlink, not delete).
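
You can see that behaviour from a bash prompt (the file name is arbitrary):

        echo "still here" > /tmp/demo.txt
        exec 3< /tmp/demo.txt    # open the file and hold the descriptor
        rm /tmp/demo.txt         # unlink the name; the inode survives while fd 3 is open
        cat <&3                  # still prints "still here" from the open descriptor

The same thing happens with a mapped shared library: the loader keeps the old inode alive until the process exits, even after the package manager replaces the file on disk.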

              • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @10:51PM

                by Anonymous Coward on Tuesday June 02 2020, @10:51PM (#1002472)

                And? Things like Firefox and Apache and other daemons, to name a few, use multiple processes working in concert in a single program. If different processes of the program start at different times, then you can easily run into the situation where they are using different versions of the same library. It isn't uncommon that those processes end up conflicting or crashing.

      • (Score: 2) by RS3 on Tuesday June 02 2020, @05:05PM

        by RS3 (6367) on Tuesday June 02 2020, @05:05PM (#1002234)

        The worst, to me anyway, are bundled update packages that bring critical fixes, but also remove features, brick your 3rd-party ink, etc.

Not sure what the fix is, but it seems we need more laws (ugh, yuck). Much of the problem stems from the corporate attitude that you don't own the software, or increasingly even the hardware (John Deere, for one); you're merely licensing its use, and they can change anything they want at any time.

        Right to Repair needs to include unbundled software updates.

      • (Score: 2) by darkfeline on Tuesday June 02 2020, @10:46PM

        by darkfeline (1030) on Tuesday June 02 2020, @10:46PM (#1002462) Homepage

        I can't speak for Windows (and it shows in their track record), but both Android and iOS do extensive testing. Think datacenter rooms filled with test devices all having new builds of Android/iOS constantly being installed on them and running integration tests. It helps guarantee a baseline of stability.

        If you had racks of servers constantly being reimaged and integration tested with new versions of software, you'd feel pretty good about updates too.

        --
        Join the SDF Public Access UNIX System today!
  • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @01:19PM (4 children)

    by Anonymous Coward on Tuesday June 02 2020, @01:19PM (#1002160)

    I don't get it.

    • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @03:42PM (1 child)

      by Anonymous Coward on Tuesday June 02 2020, @03:42PM (#1002195)

For the same reason anyone does anything potentially risky: because they've looked at the risks, thought 'fuck it, we know the issues, we're deeming the potential risks to be acceptable...'

      humans, eh?

      • (Score: 2) by DannyB on Tuesday June 02 2020, @06:13PM

        by DannyB (5839) Subscriber Badge on Tuesday June 02 2020, @06:13PM (#1002275) Journal

        We can just wear masks and practice social distancing from MD5.

        --
        People today are educated enough to repeat what they are taught but not to question what they are taught.
    • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @10:53PM

      by Anonymous Coward on Tuesday June 02 2020, @10:53PM (#1002474)

      Here is one to make you shudder: I know legacy systems in use today that use MD4.

    • (Score: 0) by Anonymous Coward on Friday June 05 2020, @09:07AM

      by Anonymous Coward on Friday June 05 2020, @09:07AM (#1003643)

md5 has the benefit of hashing really fast even on old hardware. That makes it great for going through large checksum listings really fast, and the odds of duplicate hashes are pretty low (unlike md4, sha0 and earlier, where you could actually find duplicate checksums in listings of files on your own system back in the 1-4GB hard disk days).

Using old checksums by itself isn't bad. It's this assumption that we should ONLY use one checksum that is. If you did interwoven checksumming with multiple checksums iterating over the same block of data at the same time (so it's in the cache), then checksum performance would be negligible compared to the duplicated I/O access (see the sketch below). And if you use different checksum models on the same data then it becomes mathematically unlikely that the same checksum groups will show up for both the original and modified files, especially when you include file size with them. The point at which they do is when the file size exceeds the effective maximum unique size of that checksum algorithm for the number of characters it outputs. At that point you MUST move to a larger checksum format if you still want non-duplicate hashes in all circumstances.

I still don't get why more organizations don't do this. It helps ensure that even if one hash algorithm is compromised the data remains verifiable and secure, and it reduces the likelihood that even an effective collision attack would go undetected.
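
For what it's worth, you can already get the single-pass, multiple-digest behaviour from bash (the file name is a placeholder):

        # one read of the file feeds several hashers at once via process substitution
        tee < big.img >(md5sum) >(sha1sum) >(sha256sum) > /dev/null

tee hands each block to every hasher as it reads, so the disk is only touched once and the extra checksums cost roughly nothing next to the I/O; each sum prints its own line as it finishes.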

  • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @04:07PM (2 children)

    by Anonymous Coward on Tuesday June 02 2020, @04:07PM (#1002204)

    S/t.

    • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @04:19PM

      by Anonymous Coward on Tuesday June 02 2020, @04:19PM (#1002209)

      SHA2 or SHA3?

    • (Score: 1) by DECbot on Tuesday June 02 2020, @04:55PM

      by DECbot (832) on Tuesday June 02 2020, @04:55PM (#1002224) Journal

      Obviously we should leave this up to Congress and law enforcement to decide. ROT13 should be perfectly acceptable for everyone. You can even do it a dozen times if you really want to ensure it is encrypted to FBI standards for individual persons (while maintaining the appropriate LE backdoors), i.e. ROT13 Home Edition. Though for your corporate systems, the standard is to perform a baker's dozen to get that extra level of professional encryption, ROT13 Pro.

      --
      cats~$ sudo chown -R us /home/base
  • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @04:58PM (1 child)

    by Anonymous Coward on Tuesday June 02 2020, @04:58PM (#1002227)

    What sort of security researcher would advocate using power tools around helpless babies?

    • (Score: 0) by Anonymous Coward on Tuesday June 02 2020, @10:01PM

      by Anonymous Coward on Tuesday June 02 2020, @10:01PM (#1002412)

      Maybe the other kind of nursery, where they grow trees and plants?
