
posted by Fnord666 on Thursday January 19 2017, @08:09AM   Printer-friendly
from the don't-let-the-door-hit-ya dept.

Arthur T Knackerbracket has found the following story:

For the past couple of years, browser makers have raced to migrate from SHA-1 to SHA-2 as researchers have intensified warnings about collision attacks moving from theoretical to practical. In just weeks, the transition deadline set by Google, Mozilla, and Microsoft for the deprecation of SHA-1 will be up.

Starting on Jan. 24, Mozilla's Firefox browser will be the first major browser to display a warning to users who run into a site whose TLS certificate is not signed with the SHA-2 hashing algorithm. The move protects users from collision attacks, where two or more inputs generate the same hash value.
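To make "collision" concrete, here is a toy Python sketch (illustrative only, not an attack on real SHA-1): truncating the digest to 32 bits shrinks the search space enough that a simple birthday search finds two distinct inputs with the same hash value in seconds.

import hashlib
import itertools

def trunc_sha1(data):
    # First 32 bits of SHA-1: weak enough to collide almost instantly.
    return hashlib.sha1(data).digest()[:4]

seen = {}
for n in itertools.count():
    msg = str(n).encode()
    digest = trunc_sha1(msg)
    if digest in seen:
        # Two different inputs, one hash value: a collision.
        print(seen[digest], msg, digest.hex())
        break
    seen[digest] = msg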

In 2012, Bruce Schneier projected that a collision attack on SHA-1 would cost $700,000 to perform by 2015 and $143,000 by 2018. In 2015, researchers said tweaks to existing attacks and new understanding of the algorithm could accelerate attacks and make a full-on collision attack feasible for somewhere between $75,000 and $125,000.

Experts warn the move [to] SHA-2 comes with a wide range of side effects: unsupported applications, new hardware headaches tied to misconfigured equipment, and cases of crippled credit card processing gear unable to communicate with backend servers. They say the entire process has been confusing and unwieldy for businesses dependent on a growing number of digital certificates used not only for their websites, but for data centers, cloud services, and mobile apps.

[Continues...]

"SHA-1 deprecation in the context of the browser has been an unmitigated success. But it's just the tip of the SHA-2 migration iceberg. Most people are not seeing the whole problem," said Kevin Bocek, VP of security strategy and threat intelligence for Venafi, "SHA-1 isn't just a problem to solve by February, there are thousands more private certificates that will also need migrating."

Nevertheless, it's browsers that have been at the front lines of the SHA-1 to SHA-2 migration. And starting next month, public websites not supporting SHA-2 will generate various versions of ominous warnings cautioning users that the site they are visiting is insecure.

[...] "The biggest excuse among web server operators was the need to support Internet Explorer on Windows XP (pre-SP3), which does not support SHA-2. However, websites with this requirement (including www.mozilla.org) have developed techniques that allow them to serve SHA-2 certificate to modern browsers while still providing a SHA-1 certificate to IE/XP clients," said J.C. Jones, cryptographic engineering manager at Mozilla.

Workarounds work for browsers, but different SHA-2 transition challenges persist within the mobile app space.

When a browser rejects a SHA-1 certificate, the warning message is easy to spot. That's not the case with apps. While Google's Android and Apple's iOS operating systems have supported SHA-2 for more than a year, most apps still do not.

[...] SHA-1 as used by apps is a far cry from no protection. Still, the absence of SHA-2 introduces the risk that someone could mint a forged SHA-1 certificate to connect with an app that accepts SHA-1 certificates. An attacker spoofing the DNS of a public Wi-Fi connection could launch a man-in-the-middle attack, and, unlike with a browser, the use of untrusted TLS certificates would go unnoticed, Bocek said.
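App developers can at least detect the problem before users hit it. A minimal sketch using Python's ssl module and the pyca/cryptography package (the host name is a placeholder, and this inspects only the leaf certificate, not the full chain):

import ssl
from cryptography import x509

pem = ssl.get_server_certificate(("example.com", 443))
cert = x509.load_pem_x509_certificate(pem.encode())
# "sha1" here means the certificate is signed with the deprecated hash.
print(cert.signature_hash_algorithm.name)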

[...] "If your app relies on SHA-1 based certificate verification, then people may encounter broken experiences in your app if you fail to update it," said Adam Gross, a production engineer at Facebook.

Enterprises are also not under the same immediate pressure to update the internal PKI used for their hardware, software, and cloud applications. But security experts warn that doesn't make them immune to major certificate headaches. One of those hassles is the fact that the number of certificates has ballooned to an average of more than 10,000 per company, which makes the switch from SHA-1 to SHA-2 a logistical nightmare, according to Venafi.

-- submitted from IRC


Original Submission

 
  • (Score: 3, Informative) by driverless on Thursday January 19 2017, @01:05PM

    by driverless (4770) on Thursday January 19 2017, @01:05PM (#456023)

    In theory it's not much more secure than the weaker of the two algorithms. In practice you need to select some pretty pathologically bad hashing algorithms for that to be the case. However, in cryptography, theory trumps practice, so it's not done.

    SSLv3 actually did this: it used a dual hash for everything. It was removed in TLS and replaced with just SHA-1 by itself because, in theory... well, see above.

  • (Score: 2) by termigator on Thursday January 19 2017, @03:39PM

    by termigator (4271) on Thursday January 19 2017, @03:39PM (#456076)

    > In theory it's not much more secure than the weaker of the two algorithms

    Could you elaborate on this statement? I am far from understanding the math and details of cryptography, but if I hash a file with MD5, SHA-1, and SHA-256, how does the weakness of MD5 lessen the security of the other two algorithms? If an attacker puts in the effort to compromise the MD5 hash, how are the other two only providing "a little more security"?

    Does the effort in compromising the weaker assist in compromising the others?

    • (Score: 1, Informative) by Anonymous Coward on Thursday January 19 2017, @04:11PM

      by Anonymous Coward on Thursday January 19 2017, @04:11PM (#456087)

      If you hash sequentially, remember that the output of a function is the same for the same input, no matter what (that is what being a function means). So if you find a collision in one function, the ones applied after it will also "collide", because they receive the same input. When you use them in parallel and keep multiple fields with hashes, it is much stronger, because you have to find collisions in all of them simultaneously. Of course, we can't keep multiple fields for hashes, because we live in the '60s and we don't have the space, RAM, and CPU for that. We could make the first type of multiple hash function and the NSA would approve it.
      How do we know that we live in the '60s? Well, because credit card companies have never heard of asymmetric cryptography.
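      The parent's two arrangements in a minimal Python sketch (the hash choices are just for illustration): in a cascade, the outer hash only ever sees the inner hash's output, so an inner collision propagates for free; in parallel, an attacker must collide both outputs at once.

      import hashlib

      # Sequential ("dependent"): SHA-256 only ever sees MD5's output, so
      # if md5(a) == md5(b) for a != b, then cascade(a) == cascade(b).
      def cascade(data):
          return hashlib.sha256(hashlib.md5(data).digest()).hexdigest()

      # Parallel ("independent"): keep both digests; a collision must hold
      # under MD5 and SHA-256 simultaneously.
      def parallel(data):
          return (hashlib.md5(data).hexdigest(),
                  hashlib.sha256(data).hexdigest())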

    • (Score: 3, Informative) by Anonymous Coward on Thursday January 19 2017, @05:49PM

      by Anonymous Coward on Thursday January 19 2017, @05:49PM (#456122)

      There are INDEPENDENT (hash = (MD5(data), SHA(data))) hashes and DEPENDENT (hash = MD5(SHA(data))) hashes. For independent hashes, the security is, at worst, the security of the strongest hashing algorithm. For dependent hashes, the security is, at best, the security level of the weakest hashing algorithm. I'll give you an example using two hashing algorithms: RCo1, which is a 2-bit counter of the number of 1s that have been seen in the input, and L3B, which is the last three bits fed in. The weaker of the two is RCo1.

      Now with independent hashes, it may appear that multiple hashes increase security, with the caveat that you actually have to check both hashes. However, because the search space for the stronger hash is so much larger than for the weaker one, by the time you find a collision in the stronger hash, you have found multiple collisions in the weaker one. The effort to break the stronger one is so far and away larger that the weaker one barely registers. Basically, you would use the RCo1 hash, take all the collisions you find, and then double-check them with L3B. The chance of a collision in that curated data is the same as the chance with the stronger hashing algorithm alone, and hence the security is, at worst, the security of the stronger algorithm. That is a gross over-simplification, to the point of being wrong in the details, but there are multiple papers on the subject: http://link.springer.com/chapter/10.1007%2F978-3-540-28628-8_19 [springer.com] is one of the first to propose the idea.

      With dependent hashes, the security is, at best, the security of the weakest algorithm. The reason is threefold: each hash reduces the search space, the outputs interact, and collisions cascade. With L3B(RCo1(data)), there are, at best, 2 bits of entropy, because the fact that RCo1 has a 2-bit output means there are only 2^2 possible inputs into the stronger hash, and therefore, by definition, only 2^2 possible outputs. Similarly, with RCo1(L3B(data)) there are, at best, 2 bits of entropy out because of the size of the final output. And as you can see, collisions cascade: in the former, as soon as you find a collision in RCo1, you have one for L3B, which means any weakness in one weakens the rest. Plus there is the interaction problem: if you actually check, the output of L3B in the second example ends up biasing RCo1 such that certain values are more common than others, which means the security is actually worse in the latter example than the weaker hash used alone. So either cascade of hashes is weaker than the weakest hash alone.

      Long story short: using multiple dependent hashes not only doesn't help, but can make things worse due to the way they interact; and using multiple independent hashes doesn't help, because the extra hashes don't really affect the search for a collision in the strongest one.
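      A minimal sketch of the toy hashes defined above (the bit-string conventions are invented for illustration), showing how the cascade's output space collapses to the weaker hash's 2^2 values while the independent pair keeps many more outputs distinct:

      def rco1(bits):
          # RCo1: 2-bit counter of the 1s seen in the input (popcount mod 4).
          return format(bits.count("1") % 4, "02b")

      def l3b(bits):
          # L3B: the last three bits fed in (zero-padded if shorter).
          return bits[-3:].rjust(3, "0")

      msgs = [format(n, "05b") for n in range(32)]   # every 5-bit input

      # Cascade L3B(RCo1(data)): RCo1's 2-bit output leaves at most
      # 2^2 = 4 possible results, even though L3B alone can produce 2^3.
      print(len({l3b(rco1(m)) for m in msgs}))       # prints 4

      # Independent pair (RCo1(data), L3B(data)): a collision must match
      # both coordinates at once, so far more outputs stay distinct.
      print(len({(rco1(m), l3b(m)) for m in msgs}))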

      • (Score: 0) by Anonymous Coward on Thursday January 19 2017, @07:17PM

        by Anonymous Coward on Thursday January 19 2017, @07:17PM (#456163)

        Long story short: using multiple dependent hashes not only doesn't help, but can make things worse due to the way they interact; and using multiple independent hashes doesn't help, because the extra hashes don't really affect the search for a collision in the strongest one.

        Of course it's weaker if you do the completely retarded thing of hashing the hash results (dependent hashes).

        But for independent hashes, how do you really know for sure which is the strongest hash? How does it not help if it turns out the strongest hash has a flaw? The flaw might not be a complete break, but if it makes that hash as weak as, or weaker than, the other supposedly weaker hashes, then it's a good thing you also have those weaker hashes, right?

        [tinfoil]So who is paying you to subtly mislead people?[/tinfoil] ;)

        • (Score: 0) by Anonymous Coward on Thursday January 19 2017, @09:17PM

          by Anonymous Coward on Thursday January 19 2017, @09:17PM (#456235)

          I'm not misleading people; I'm stating the facts. Some people just assume SHA-2 or whatever is secure enough and know that adding more doesn't help. Some people are paranoid (like Gentoo) and add more hashes in the hope of improving security. But it is still a mistake to say that multiple hashes are more secure than the baseline components in a theoretical sense.

          Really, whether multiple hashes would make a practical difference depends on which attacker you are protecting against and what resources they have. For example, if someone gets a SHA-2 break and that is the only hash you use, you are screwed if they can MITM your download as well. But most hashes are fetched over TLS, so someone with a SHA-2 break could most likely MITM your connection by replacing the certificate, and swap the hashes used for verification with anything anyway.

          • (Score: 0) by Anonymous Coward on Saturday January 21 2017, @05:26PM

            by Anonymous Coward on Saturday January 21 2017, @05:26PM (#457021)
            Not as easy if the TLS is protected by multiple hashes.
      • (Score: 2) by driverless on Thursday January 19 2017, @11:15PM

        by driverless (4770) on Thursday January 19 2017, @11:15PM (#456285)

        Thanks for typing all that up; that's what I was referring to in my post, but (a) I didn't want to type a small essay and (b) I was too lazy to go dig up all the refs. As I mentioned, in theory it's an issue, but you have to use pathologically broken hash algorithms in order to demonstrate it. In practice no-one has even come close to demonstrating multicollisions on combinations of even the weakest, most broken cryptographic hash functions.

        To prove my point, here's a counterexample challenge: someone demonstrate a multicollision on two badly broken hash functions, MD4 + MD5.

        • (Score: 0) by Anonymous Coward on Friday January 20 2017, @12:02AM

          by Anonymous Coward on Friday January 20 2017, @12:02AM (#456303)

          First note: "multicollision" is a term of art in the field of cryptography and means something different from your use here.

          But to your real point: you have to use badly broken algorithms to demonstrate it, because people don't like reading complex math and won't take a person's word for it. Plus, people who do the research know that combining hashes doesn't protect you more, so why waste the effort of finding a collision in multiple algorithms? Although, given that we know the Merkle–Damgård construction makes hashes highly susceptible to multicollisions (in the technical sense), herding, and length extension, it might take you only a few weeks to find the necessary inputs to collide in multiple algorithms.

          Finally, it is not worth the time in the practical sense: if you are targeting someone and have a break on the algorithm used in the certificate (which uses only one), you can just replace the published hashes with whatever values you want as they are downloaded.
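          For the curious, a toy sketch of Joux's multicollision construction on a deliberately tiny 8-bit Merkle–Damgård hash (every name and parameter here is invented for illustration; a real attack needs a real collision finder for each block):

          import hashlib
          import itertools

          # Toy Merkle-Damgard hash: 1-byte chaining state, 1-byte blocks,
          # compression function borrowed from SHA-256 truncated to 8 bits.
          def compress(state, block):
              return hashlib.sha256(bytes([state, block])).digest()[0]

          def md_hash(msg, iv=0):
              state = iv
              for block in msg:
                  state = compress(state, block)
              return state

          # Brute-force one single-block collision from a chaining state.
          def block_collision(state):
              seen = {}
              for block in range(256):
                  out = compress(state, block)
                  if out in seen:
                      return seen[out], block, out
                  seen[out] = block
              raise ValueError("toy compression happened to be injective")

          # Joux's trick: t successive collision pairs yield 2^t colliding
          # messages for only t times the cost of one collision search.
          state, pairs = 0, []
          for _ in range(3):                     # t = 3
              b0, b1, state = block_collision(state)
              pairs.append((b0, b1))

          msgs = [bytes(choice) for choice in itertools.product(*pairs)]
          assert len(msgs) == 8
          assert len({md_hash(m) for m in msgs}) == 1   # all eight collide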

          • (Score: 2) by driverless on Friday January 20 2017, @04:49AM

            by driverless (4770) on Friday January 20 2017, @04:49AM (#456394)

            Ah, yeah, I mangled the usage of the term from Joux's Crypto '04 paper, where he used multicollisions to analyse the security of concatenated hash functions; it's the analysis tool that was applied, not the attack itself.

            The second point you raise, though, is exactly the one I was making in my example of theory vs. practice. The XSL attack was one example of this: if you recast AES as BES then you can attack that, but the attack doesn't extend to AES. Similarly, if you have to use pathologically bad algorithms to demonstrate that an attack works, you can't really claim it threatens a non-pathologically-bad one. Even in Joux's paper, he ended his proof that concatenated hash functions aren't that much more secure by admitting that none of the real-world cases of concatenated hash functions he could find were actually vulnerable to attack.