
posted by martyb on Sunday April 19 2015, @08:47AM
from the the-bugs-you-know-versus-those-you-don't dept.

Dan Geer at the CIA-funded In-Q-Tel looks at approaches for estimating the number of vulnerabilities in software. PDF: http://geer.tinho.net/fgm/fgm.geer.1504.pdf

The motivation is this article by Bruce Schneier on whether the NSA should patch or exploit vulnerabilities. Quoting from the Geer article:

In a May 2014 article in The Atlantic [3], Bruce Schneier asked a cogent, first-principles question: “Are vulnerabilities in software dense or sparse?” If they are sparse, then every vulnerability you find and fix meaningfully lowers the number of vulnerabilities that are extant. If they are dense, then finding and fixing one more is essentially irrelevant to security and a waste of the resources spent finding it. Six-take-away-one is a 15% improvement. Six-thousand-take-away-one has no detectable value.
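To make the arithmetic concrete, here is a minimal sketch (ours, not Geer's; the pool sizes are illustrative) of how the marginal value of one fix scales with the size of the vulnerability pool:

    # Minimal sketch (not from the Geer article): the marginal value of
    # fixing one bug, as the fraction of the undiscovered pool it removes.
    def marginal_improvement(pool_size: int, fixed: int = 1) -> float:
        return fixed / pool_size

    for pool in (6, 6000):
        print(f"pool of {pool}: one fix removes {marginal_improvement(pool):.2%}")
    # pool of 6: one fix removes 16.67%
    # pool of 6000: one fix removes 0.02%

The exact fraction for six-take-away-one is closer to 17% than 15%; the point is the orders-of-magnitude gap between the two cases, not the rounding.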

In Schneier's words:

There is no way to simultaneously defend U.S. networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, “every offensive weapon is a (potential) chink in our defense—and vice versa.” ...

If vulnerabilities are plentiful—and this seems to be true—the ones the U.S. finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won’t make it appreciably harder for criminals to find the next one. We don’t really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.
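One way to see why the two sides' finds would "largely be different" is a toy overlap model (our illustration, not Schneier's): if each side independently draws its finds uniformly at random from a shared pool of n bugs, the expected number found by both sides is the hypergeometric mean, k·m/n, which shrinks as the pool grows.

    # Toy model (ours, not Schneier's): two parties independently find
    # bugs uniformly at random from a shared pool; the expected number
    # found by BOTH sides is hypergeometric: finds_a * finds_b / pool_size.
    def expected_overlap(pool_size: int, finds_a: int, finds_b: int) -> float:
        return finds_a * finds_b / pool_size

    # Sparse pool: heavy overlap, so patching also disarms the adversary.
    print(expected_overlap(100, 50, 50))       # 25.0
    # Dense pool: negligible overlap, so patching barely affects them.
    print(expected_overlap(100_000, 50, 50))   # 0.025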

The Geer article has some interesting references, especially this well-titled analysis of OpenBSD's code base: "Milk or Wine: Does Software Security Improve with Age?" (PDF)

Over a period of 7.5 years and fifteen releases, 62% of the 140 vulnerabilities reported in OpenBSD were foundational: present in the code at the beginning of the study. It took more than two and a half years for the first half of these foundational vulnerabilities to be reported. We found that 61% of the source code in the final version studied is foundational: it remains unaltered from the initial version released 7.5 years earlier. The rate of reporting of foundational vulnerabilities in OpenBSD is thus likely to continue to greatly influence the overall rate of vulnerability reporting.
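A rough recomputation from the quoted figures (the exponential-lifetime assumption is ours, not the paper's):

    import math

    # Figures quoted from the study.
    total_reported = 140
    foundational_share = 0.62
    median_years_to_report = 2.5   # "more than two and a half years"

    print(round(total_reported * foundational_share), "foundational vulns")  # 87

    # Assumption (ours, not the paper's): if time-to-report were
    # exponentially distributed, a 2.5-year median would imply a decay
    # rate of ln(2)/2.5 per year, i.e. foundational code keeps yielding
    # new vulnerability reports for many years.
    print(f"implied decay rate: {math.log(2) / median_years_to_report:.3f}/year")  # 0.277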

Schneier poses some interesting questions at the end. What do you Soylentils think?

Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won't know about. But if it doesn't, it's deliberately leaving the U.S. vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we're nowhere near that point today.

 
  • (Score: 4, Insightful) by Anonymous Coward on Sunday April 19 2015, @08:55AM (#172773)

    Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack?

    They should fund thousands of extra PUBLIC security researchers looking at code full time for vulnerabilities, to increase the pace of bugfixing. Then they should strategically leak the vulnerability to one of the researchers. This way they are actually doing something useful/defensive for the tech community, and the security researcher can claim responsibility for finding the bug, giving the NSA cover.

    Until the NSA is working to increase security, its response to these hypothetical scenarios doesn't matter. The tech community should not trust the NSA and should fix as many vulnerabilities as it can.

  • (Score: 4, Insightful) by Anonymous Coward on Sunday April 19 2015, @09:20AM (#172776)

    They should fund thousands of extra PUBLIC security researchers looking at code full time for vulnerabilities, to increase the pace of bugfixing.

    This will never happen, because:
    - these researchers will stumble upon, and report, some of the vulnerabilities currently exploited by the TLAs (that is, unless all findings must first be reported to the NSA's newly formed subsidiary, the Software Exploitation Authority);
    - we'll never know whether the SEA would forward reported vulnerabilities to the appropriate software companies or exploit them itself;
    - you can never trust the NSA.

  • (Score: 2) by FatPhil (863) <pc-soylentNO@SPAMasdf.fi> on Sunday April 19 2015, @08:22PM (#172920)
    No. Funding people to find bugs is indirectly funding other people to introduce bugs. Compare bounty schemes for snakes, rats, etc.; Google "Cobra Effect".
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves