
posted by CoolHand on Thursday April 30 2015, @11:12PM   Printer-friendly
from the we're-not-really-just-procrastinating-honest! dept.

The Register covers the difficulty of putting the SHA-1 crypto algorithm to bed:

The road towards phasing out the ageing SHA-1 crypto hash function is likely to be littered with potholes, security experts warn.

SHA-1 is a hashing (one-way) function that converts information into a shortened "message digest", from which it is computationally infeasible to recover the original information. This hashing technique is used in digital signatures, in verifying that the contents of software downloads have not been tampered with, and in many other cryptographic applications.
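
A minimal sketch of the message-digest idea, using Python's standard hashlib module (the input strings are made up for illustration): the digest is short and fixed-size, and any change to the input yields a different digest, which is how download verification works.

    import hashlib

    # The publisher computes and publishes the digest of the download.
    data = b"contents of a software download"
    published = hashlib.sha1(data).hexdigest()  # SHA-1 produces a 160-bit digest
    print(published)

    # The recipient recomputes it; any change to the input changes the digest.
    received = b"contents of a software downloaD"  # one byte altered
    print(hashlib.sha1(received).hexdigest() == published)  # False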

The SHA-1 protocol – published in 1995 – is showing its age and is no longer safe from collision attacks, a situation where two different blocks of input data throw up the same output hash. This is terminal for a hashing protocol, because it paves the way for hackers to offer manipulated content that carries the same hash value as pukka packets of data.

Certificate bodies and others are beginning to move on from SHA-1 to its replacement, SHA-2. Microsoft announced its intent to deprecate SHA-1 in November 2013. More recently, Google joined the push with a decision to make changes in the latest version of its browser, Chrome version 42, so that SHA-1 certificates are flagged up as potentially insecure.

Just updating to SHA-2 is not as simple as it might seem, because of compatibility issues with Android and Windows XP. More specifically, Android before 2.3 and XP before SP3 are incompatible with the change (a fuller compatibility matrix maintained by digital certificate firm GlobalSign can be found here).
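
As a rough illustration of what the migration involves on the auditing side, here is a sketch (assuming a recent version of the third-party Python 'cryptography' package; the file name is hypothetical) that checks whether a certificate is still signed with SHA-1:

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes

    with open("server.pem", "rb") as f:  # hypothetical certificate file
        cert = x509.load_pem_x509_certificate(f.read())

    algo = cert.signature_hash_algorithm
    if isinstance(algo, hashes.SHA1):
        print("still signed with SHA-1 -- reissue with a SHA-2 certificate")
    else:
        print("signature hash:", algo.name if algo else "n/a")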

 
  • (Score: 2) by sjames (2882) on Friday May 01 2015, @09:21AM (#177411) Journal

    DNSSEC will be stuck for quite a while because it's an overly complex abomination to maintain. It made sense in 1995 (or at least it was the only practical way to do it) but in an era where a cellphone has more than enough CPU power to be a DNS server, it really doesn't make sense to jump through all of those hoops to avoid the server having to do encryption and hashing.

    IPv6 is slowly rolling out.

  • (Score: 1) by Mike (823) on Friday May 01 2015, @03:39PM (#177489)

    DNSSEC will be stuck for quite a while because it's an overly complex abomination to maintain. It made sense in 1995 (or at least it was the only practical way to do it) but in an era where a cellphone has more than enough CPU power to be a DNS server, it really doesn't make sense to jump through all of those hoops to avoid the server having to do encryption and hashing.

    Um, what? I get the complexity issue, although I don't know a simpler solution. But what does DNSSEC have to do with encryption? (other than to authenticate keys for other protocols) And how are (what I'll assume are DNSSEC-supporting) 'servers' avoiding hashing now?

    • (Score: 2) by sjames (2882) on Friday May 01 2015, @06:29PM (#177556) Journal

      Secure hashing and signing are a natural subset of encryption. A 'signature' is produced by hashing the signed record and encrypting the hash with a secret key, such that only the public key can decrypt the hash so it can be verified against the record.
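
      A toy sketch of that sign-then-verify flow (assuming the Python 'cryptography' package and RSA with PKCS#1 v1.5; the record is made up, and this is not DNSSEC's actual wire format):

      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      record = b"www.example.com. 3600 IN A 192.0.2.1"  # made-up record

      secret_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      public_key = secret_key.public_key()

      # Sign: hash the record, then transform the hash with the secret key.
      signature = secret_key.sign(record, padding.PKCS1v15(), hashes.SHA256())

      # Verify: anyone with the public key can check; raises InvalidSignature
      # if the record or the signature was altered.
      public_key.verify(signature, record, padding.PKCS1v15(), hashes.SHA256())
      print("signature verifies against the record")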

      Under DNSSEC, the zone files are processed to pre-sign them using the secret key matching the public key. Clients DO decrypt and verify the signature, but on the server side one uses a bunch of opaque tools, at least in BIND.

      A little thought could probably have made it dirt simple. Have a look at Howto forge [howtoforge.com].

      Now, consider the actual problem. How do you make sure that the server that responds is the legitimate server? Why do all of the above when all that is actually needed is to issue a challenge encrypted with the registered public key and have it returned decrypted with your answer? Only the legitimate server could have decrypted your challenge. It sure makes things easier than having 3 keys PER ZONE on a server that may host thousands of zones.
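
      A minimal sketch of the proposed challenge-response (again assuming the Python 'cryptography' package; all names and values are illustrative):

      import os
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import rsa, padding

      oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                          algorithm=hashes.SHA256(), label=None)

      server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
      registered_public_key = server_key.public_key()  # known from registration

      # Client: encrypt a fresh random challenge to the registered public key.
      challenge = os.urandom(32)
      ciphertext = registered_public_key.encrypt(challenge, oaep)

      # Server: prove identity by decrypting and echoing the challenge.
      echoed = server_key.decrypt(ciphertext, oaep)

      # Client: only the holder of the matching secret key could recover it.
      assert echoed == challenge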

      The other half of the equation is on the client side. The usual host lookup functions have no way to distinguish validated vs. unvalidated server or server failure (timed out) vs. failed validation. Caching servers likewise have no way to present the difference.

      • (Score: 1) by Mike (823) on Friday May 01 2015, @10:03PM (#177638)

        Secure hashing and signing are a natural subset of encryption. A 'signature' is produced by hashing the signed record and encrypting the hash with a secret key, such that only the public key can decrypt the hash so it can be verified against the record.

        Ah, I see why you used that term. In my experience, signing and hashing has been called authentication, not encryption. Encryption usually refers to changing data so that it is unreadable by someone who cannot decrypt it. Authentication indicates the data is authentic and no one has changed it. Anyone can read the signed/hashed data in transit, but with the key (in this case the public key) you can be sure that the data you are receiving is the data you were meant to receive by the private key's owner.

        Under DNSSEC, the zone files are processed to pre-sign them using the secret key matching the public key. Clients DO decrypt and verify the signature, but on the server side one uses a bunch of opaque tools, at least in BIND.

        A little thought could probably have made it dirt simple. Have a look at Howto forge.

        Now, consider the actual problem. How do you make sure that the server that responds is the legitimate server? Why do all of the above when all that is actually needed is to issue a challenge encrypted with the registered public key and have it returned decrypted with your answer? Only the legitimate server could have decrypted your challenge. It sure makes things easier than having 3 keys PER ZONE on a server that may host thousands of zones.

        As a counter-example: with a man-in-the-middle attack, the MITM would just change the answer returned alongside the decrypted challenge, and the original requester would receive the wrong data.
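
        To make the objection concrete, a standalone toy (hypothetical values): the echoed challenge still checks out, but nothing binds the answer to it.

        # Toy values only: the echoed challenge authenticates the server,
        # but the answer riding alongside it is not bound to anything.
        challenge = b"random-nonce"
        reply_from_server = (challenge, b"192.0.2.1")               # (echo, answer)
        reply_after_mitm = (reply_from_server[0], b"203.0.113.66")  # answer swapped

        echoed, answer = reply_after_mitm
        print(echoed == challenge)  # True -- the server check still passes
        print(answer)               # b'203.0.113.66' -- attacker-chosen data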

        You'd also need some way of looking up the 'registered' public key. It would probably end up looking very much like DNSSEC. I.e., you'd have to start with one or more basic certs, say a root certificate (or several). And then you'd have to figure out which server you need to request DNS info from, say by looking for NS records from the root to the TLD to the whatever-domain-level. Then you need to get the key for that server. And you need to do the lookup and retrieve the key in a secure way, i.e. sign everything, because you don't want to get pointed to the wrong server and get the wrong key. And then you'd have to request the data. And then you'd have to actually have the server sign the data, because otherwise someone could change the response on its way back to the requester (see above). And we're pretty much back at DNSSEC.

        The other half of the equation is on the client side. The usual host lookup functions have no way to distinguish validated vs. unvalidated server or server failure (timed out) vs. failed validation. Caching servers likewise have no way to present the difference.

        It is definitely a problem to add information (e.g. validation states) to APIs that were not designed for it. Unfortunately, the only solution is to add new APIs that support the additional information. There are libraries that support DNSSEC information, but it is a lot of pain to change applications to use them.
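
        For example, a sketch using the third-party dnspython package (assuming version 2.x): it requests DNSSEC processing and surfaces the AD ('authenticated data') flag, the validation state that the classic lookup APIs cannot express.

        import dns.flags
        import dns.resolver

        resolver = dns.resolver.Resolver()
        resolver.use_edns(0, dns.flags.DO, 4096)  # set the DO bit: ask for DNSSEC

        answer = resolver.resolve("example.com", "A")
        # AD is set by a validating resolver when validation succeeded; it is
        # only meaningful if the path to that resolver is itself trusted.
        validated = bool(answer.response.flags & dns.flags.AD)
        print("DNSSEC-validated:", validated)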

        • (Score: 2) by sjames (2882) on Saturday May 02 2015, @12:44AM (#177698) Journal

          MITM

          The NS keys for a domain would need to be registered along with the IP address on the root server, but only for the name server, not for every domain it serves. The challenge hashed into the salt assures you that you got the correct NS key.

          It is not uncommon for a single NS to serve MANY domains (zones); typically you go from root (via hints) to gTLD to the domain's NS. Wouldn't it be handy to have the keys for . and com. (for example) already cached? Likewise, if example.com and example.org both have NS ns1.example.net, then you just have to have the one key for the three domains to authenticate ns1. Also, when adding entries to the example.com zone file, no extra steps are required.

  • (Score: 2) by gnuman (5013) on Friday May 01 2015, @09:23PM (#177620)

    DNSSEC will be stuck for quite a while because it's an overly complex abomination to maintain. It made sense in 1995 (or at least it was the only practical way to do it) but in an era where a cellphone has more than enough CPU power to be a DNS server, it really doesn't make sense to jump through all of those hoops to avoid the server having to do encryption and hashing.

    I think you are missing how it works.

    1. The KSK is registered with the parent zone (a digest of it becomes the DS record).
    2. The ZSK is signed by the KSK and used to sign the zone, so you can easily replace it without adding new records to the parent.
    3. Done?

    There is no reason why this can't be done dynamically, at least for the ZSK signing the zone. That is not even the problem; dynamic zone signing is part of BIND. The problem is the lack of DNSSEC-enabled resolvers.
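
    A toy sketch of that two-tier chain (Python 'cryptography' package, RSA; simplified stand-ins for DS/DNSKEY/RRSIG, not real DNSSEC record formats):

    import hashlib
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    ksk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    zsk = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    ksk_pub = ksk.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)
    zsk_pub = zsk.public_key().public_bytes(Encoding.DER, PublicFormat.SubjectPublicKeyInfo)

    # 1. The parent zone holds only a digest of the KSK (DS-record stand-in).
    ds_record = hashlib.sha256(ksk_pub).hexdigest()

    # 2. The KSK signs the ZSK's public key (DNSKEY-signature stand-in).
    zsk_sig = ksk.sign(zsk_pub, padding.PKCS1v15(), hashes.SHA256())

    # 3. The ZSK signs the actual zone data (RRSIG stand-in).
    record = b"www.example.com. 3600 IN A 192.0.2.1"
    record_sig = zsk.sign(record, padding.PKCS1v15(), hashes.SHA256())

    # A validator walks the chain: DS digest -> KSK -> ZSK -> record.
    assert hashlib.sha256(ksk_pub).hexdigest() == ds_record
    ksk.public_key().verify(zsk_sig, zsk_pub, padding.PKCS1v15(), hashes.SHA256())
    zsk.public_key().verify(record_sig, record, padding.PKCS1v15(), hashes.SHA256())

    # Rolling the ZSK repeats steps 2-3 only; ds_record at the parent is untouched.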

    The point of having external keys for zone signing is that I can sign a zone on a machine that is not remotely accessible and then push it out to the DNS servers. This makes DNS safer, as a compromised DNS server cannot compromise the zone.

    • (Score: 2) by sjames (2882) on Saturday May 02 2015, @12:46AM (#177699) Journal

      As opposed to one key for the DNS server that supports an arbitrary number of zones.