The Register covers the difficulty of putting the SHA-1 crypto algorithm to bed:
The road towards phasing out the ageing SHA-1 crypto hash function is likely to be littered with potholes, security experts warn.
SHA-1 is a hashing (one-way) function that converts information into a shortened "message digest", from which it is impossible to recover the original information. This hashing technique is used in digital signatures, verifying that the contents of software downloads have not been tampered with, and many other cryptographic applications.
The ageing SHA-1 function – published in 1995 – is no longer safe from collision attacks, a situation where two different blocks of input data throw up the same output hash. This is terminal for a hash function, because it paves the way for hackers to offer manipulated content that carries the same hash value as pukka packets of data.
Certificate bodies and others are beginning to move on from SHA-1 to its replacement, SHA-2. Microsoft announced its intent to deprecate SHA-1 in Nov 2013. More recently, Google joined the push with a decision to make changes in the latest version of its browser, Chrome version 42, so that SHA-1 certificates are flagged up as potentially insecure.
Just updating to SHA-2 is not as simple as it might seem, because of compatibility issues with Android and Windows XP. More specifically, Android before 2.3 and XP before SP3 are incompatible with the change (a fuller compatibility matrix maintained by digital certificate firm GlobalSign can be found here).
(Score: 2) by kaszz on Thursday April 30 2015, @11:26PM
What is the problem? Just have a field saying which protocol is used? If it's some retarded system design.. well, suffer!
(Score: 5, Informative) by anubi on Friday May 01 2015, @03:04AM
The problem boils down to this: the SHA-1 hash was supposed to be a number (SHA-1 produces a 160-bit (20-byte) hash value, typically rendered as a 40-digit hexadecimal number) calculated from the body of the object whose integrity it was supposed to verify. Anyone subsequently running SHA-1 on that same block of data (think installation package or big dataset) would get the same number. If anything at all was monkeyed with, the hash would not come out the same, and it would be obvious that something in that package wasn't right.
If someone posted the dataset and its hash in public, you could run that dataset through your own hash generator and see the same number show up. You then had pretty good assurance there were no insertions/deletions in your download - and that you were indeed getting what the sender was intending to send.
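The integrity check described above can be sketched with Python's standard hashlib module; the payload and digest here are made up for illustration:

```python
# A minimal sketch of hash-based integrity checking. The "published"
# digest stands in for the hash a sender posts alongside a download.
import hashlib

original = b"installation package contents"
published_digest = hashlib.sha1(original).hexdigest()  # 40 hex digits

# The downloader recomputes the hash and compares it to the published one.
download = b"installation package contents"
assert hashlib.sha1(download).hexdigest() == published_digest

# Any change to the data, however small, produces a different digest.
tampered = b"installation package contents!"
assert hashlib.sha1(tampered).hexdigest() != published_digest
```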
Problem was, someone figured out how to make the hash come out to whatever number he wanted (a collision). Now two different things have the same hash.
Really bad news.
Now, somebody can get the package, monkey with it, fix it so it hashes to the same number, put it up, and it looks like the real thing to everyone else who downloads it.
For highly secure use, it immediately became kinda useless. You still do not know if you got the real thing or one that's been monkeyed with.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
(Score: 2) by kaszz on Friday May 01 2015, @03:49PM
So someone can fake the hash value. That won't prevent properly designed metadata from accommodating a SHA-2 value. That older systems lack SHA-2 capability is something the administrators of those systems have to weigh: do they prefer security, or to keep running?
(Score: 3, Informative) by draconx on Friday May 01 2015, @06:27PM
Strictly speaking, what you describe in the first sentence is called finding a preimage, which is different from finding a collision. A collision is simply two different inputs that hash to the same value (importantly: it does not matter what the colliding hash value is).
We want cryptographic hash functions to have three desirable properties:
- Preimage resistance: given a hash value, it is infeasible to find any input that hashes to it.
- Second preimage resistance: given an input, it is infeasible to find a *different* input with the same hash.
- Collision resistance: it is infeasible to find any two different inputs with the same hash.
Note that collision resistance is strictly a stronger condition than second preimage resistance: if your hash is collision resistant, then it is also second preimage resistant. But the reverse is not necessarily true.
By "infeasible" we mean "you should not be able to do better than guessing randomly", which for the first two (preimage and second preimage) means we expect to test about as many inputs as there are hash values (2**n for an n-bit hash). For collisions, due to the birthday "paradox", we expect to test about as many inputs as the square root of the number of hash values (2**(n/2) for an n-bit hash).
Not all uses of hash functions require collision resistance (for example, the security of HMAC does not depend on collision resistance). The main application where collision resistance is important is digital signatures.
We consider SHA-1 to no longer be collision resistant because there are known techniques to produce a collision in significantly less than 2**80 steps. However, as far as I know nobody has actually found two colliding inputs for SHA-1 yet (and told us about them).
I don't think there are any known preimage or second preimage attacks on SHA-1.
(Score: 3, Informative) by sjames on Friday May 01 2015, @09:15AM
That's not the problem. In many cases, that version field is already there in some form or another. The problem is that the older releases can't do anything with a SHA2 hash. They don't support SHA2. Newer releases can deal with SHA-1 and SHA-2 by looking at the version field to decide which to use. That means maximum compatibility says use SHA-1, but security concerns say use SHA-2.
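The version-field approach described above can be sketched as follows; the metadata keys and function names here are invented for illustration, not from any real package format:

```python
# Hypothetical sketch: metadata names the hash algorithm, and the
# verifier dispatches on that field. An old client that only knows
# SHA-1 would hit the "unsupported" branch when handed a SHA-2 digest.
import hashlib

HASHES = {"sha1": hashlib.sha1, "sha256": hashlib.sha256}

def verify(data: bytes, metadata: dict) -> bool:
    algo = metadata["hash_algo"]  # the "version field"
    if algo not in HASHES:
        raise ValueError(f"unsupported hash algorithm: {algo}")
    return HASHES[algo](data).hexdigest() == metadata["digest"]

payload = b"example payload"
meta = {"hash_algo": "sha256",
        "digest": hashlib.sha256(payload).hexdigest()}
print(verify(payload, meta))  # True
```

The tension the comment describes lives in the publisher's choice of which entry to put in the metadata: "sha1" works everywhere but is weakened, "sha256" is secure but old clients raise the unsupported-algorithm error.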
To make matters more difficult, SHA-1 isn't broken, it's weakened. That is, in *SOME* cases there are ways easier than brute force to alter an object without changing its SHA-1 hash. Only some of those cases are useful.