Papas Fritas writes:
"Last October, Bruce Schneier speculated that the three characteristics of a good backdoor are a low chance of discovery, high deniability if discovered, and minimal conspiracy to implement. He now says that the critical iOS and OS X vulnerability that Apple patched last week meets these criteria, and could be an example of a deliberate change by a bad actor:
Look at the code. What caused the vulnerability is a single line of code: a second "goto fail;" statement. Since that statement isn't a conditional, it causes the whole procedure to terminate ... Was this done on purpose? I have no idea. But if I wanted to do something like this on purpose, this is exactly how I would do it.
He later added that 'if the Apple auditing system is any good, they will be able to trace this errant goto line to the specific login that made the change.'
Steve Bellovin, professor of Computer Science at Columbia University and Chief Technologist of the Federal Trade Commission, has another take on the vulnerability: 'It may have been an accident; if it was enemy action, it was fairly clumsy.'"
(Score: 5, Interesting) by AudioGuy on Sunday March 02 2014, @12:41AM
    if ((err = SSLHashSHA1.update(&hashCtx, &serverRandom)) != 0)
        goto fail;
    if ((err = SSLHashSHA1.update(&hashCtx, &signedParams)) != 0)
        goto fail;
        goto fail;
    if ((err = SSLHashSHA1.final(&hashCtx, &hashOut)) != 0)
        goto fail;
I wish *my* C bugs were as obvious as this one. ;-)
This means that not only was the bug missed, but that the function, a vital security function, was never even tested.
The comments on Bruce's blog are worth reading as well. https://www.schneier.com/blog/archives/2014/02/was_the_ios_ssl.html [schneier.com]
It is very hard to see how this one could have been accidental.
(Score: 5, Interesting) by FatPhil on Sunday March 02 2014, @12:50AM
Let's pour some kerosene on that...
http://daringfireball.net/2014/02/apple_prism
"""
Jeffrey Grossman, on Twitter:
> I have confirmed that the SSL vulnerability was introduced in iOS 6.0. It is not present in 5.1.1 and is in 6.0.
iOS 6.0 shipped on 24 September 2012.
According to slide 6 in the leaked PowerPoint deck on NSA's PRISM program, Apple was "added" in October 2012.
These three facts prove nothing; it’s purely circumstantial. But the shoe fits.
"""
Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
(Score: 1) by forsythe on Sunday March 02 2014, @02:30AM
If anything speaks to it being "enemy action", it's this. Anybody could make this error accidentally. Perhaps the chances are astronomically low of somebody making this error and not noticing it, but perhaps the codebase is astronomically large. I can even see something like this being ignored in a diff-only peer review, though that's a bigger stretch.
Somehow engineering circumstances to have this not tested, however, pretty much has to be intentional.
(Score: 2) by frojack on Sunday March 02 2014, @04:22AM
But, as I understand the bug, that line of code was meant to detect a bad cert, or a man-in-the-middle attack.
Even in wide-scale testing, you are not likely to encounter that in the real world. And in this case, it would just allow the site to load as normal. You'd be owned, but none the wiser. The code would pass the test.
Adam Langley, on his blog [imperialviolet.org], coded up a cute, harmless little demonstrator for this bug.
This is the direct URL https://www.imperialviolet.org:1266/ [imperialviolet.org]
Chrome just says No Way.
Firefox spits confusing jargon that translates to No Way.
Even crusty old Kong catches this.
So unless you had a deliberately created bad web site to test with, you would never see this bug. It seems accidental that it was found at all.
No, you are mistaken. I've always had this sig.
(Score: 4, Insightful) by forsythe on Sunday March 02 2014, @05:07AM
Sure, in the real world this bug would be hard to detect. But I find it hard to believe that anyone at Apple would approve a function for detecting bad certs that didn't even have a test record including data that [should have] failed sslRawVerify (which, as I understand it, is the key step that the goto skips). That's the sort of thing big, professional software companies are supposed to do, isn't it? That leaves a few possibilities: either the test record was doctored, the test cases were carefully constructed not to expose this bug, or there simply weren't any tests intended to cover this case.
Hanlon's Razor says the third case is most likely, but I'm not so sure I should trust it in this case.
(Score: 4, Interesting) by chr1sb on Sunday March 02 2014, @04:36AM
To reduce the likelihood of these issues arising, the code can be structured in a different way, with no need for gotos, more protection against such merge issues, and with structural flaws being more obvious:

static OSStatus
SSLVerifySignedServerKeyExchange(SSLContext *ctx, bool isRsa, SSLBuffer signedParams,
                                 uint8_t *signature, UInt16 signatureLen)
{
    OSStatus err;
    ...
    if (   (0 != (err = SSLHashSHA1.update(&hashCtx, &serverRandom)))
        || (0 != (err = SSLHashSHA1.update(&hashCtx, &signedParams)))
        || (0 != (err = SSLHashSHA1.final(&hashCtx, &hashOut))))
    {
        SSLFreeBuffer(&signedHashes);
        SSLFreeBuffer(&hashCtx);
        return err;
    }
    ...
}

A duplicated line inside the chained condition would be a syntax error rather than silently dead code, and all failures share a single cleanup-and-return path.
(Score: 2) by maxwell demon on Sunday March 02 2014, @01:15PM
Dijkstra was obviously right.
The Tao of math: The numbers you can count are not the real numbers.