posted by Fnord666 on Tuesday March 21 2017, @05:53AM
from the keep-it-to-yourself dept.

The RAND Corporation recently received rare access to study a couple hundred 0-day vulnerabilities and their exploits.

It turns out that 0-day vulnerabilities survive for about 6.9 years on average after discovery, and that the collections held by a pair of serious opponents (typically nation-state governments) overlap by only a few percent. This means that releasing discoveries to the public provides very little defensive value while obviously destroying offensive capability.

The report (summary and full text [PDF]) covers quite a bit more about the industry, including some estimates of pricing and headcount.


Original Submission

 
  • (Score: 2) by RamiK (1813) on Tuesday March 21 2017, @11:51AM (#482057)

    There are some good arguments for open source buried in the appendix: in "Additional Figures and Tables", the figures for "Frequencies of Exploit-Level Characteristics Among 127 Identified Exploits" (p. 89) stand at 50 open source, 70 closed source, 1 mixed, and 6 unknown. And in "More Information About the Data", under "Data Frequency Counts" (p. 101), the ratio is repeated with 123 closed and 74 open.

    That is to say, open source is more secure even at the nation-state level.
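
    As a quick illustration (hypothetical Python that just re-expresses the report's tallies as shares of the 127-exploit sample; nothing here is from RAND's own analysis):

        # Exploit counts from "Frequencies of Exploit-Level Characteristics
        # Among 127 Identified Exploits" (p. 89 of the RAND report).
        counts = {"open source": 50, "closed source": 70, "mixed": 1, "unknown": 6}

        total = sum(counts.values())  # 127 identified exploits
        for kind, n in counts.items():
            print(f"{kind}: {n} exploits, {n / total:.0%} of the sample")

        # open source: 50 exploits, 39% of the sample
        # closed source: 70 exploits, 55% of the sample
        # mixed: 1 exploits, 1% of the sample
        # unknown: 6 exploits, 5% of the sample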

    --
    compiling...

  • (Score: 0) by Anonymous Coward on Tuesday March 21 2017, @02:10PM (#482133)

    There may be 50 open source and 70 closed source, but that alone doesn't tell you how the sample was drawn; it's just an attribute of the sample set.

    For example, if they had 3 people finding bugs in open source and 37 equally skilled people finding bugs in closed source, the per-researcher numbers would lead us to conclude that finding bugs in open source is easier, not that open source is more secure (see the sketch below).
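
    A minimal sketch in Python of that per-researcher comparison (the headcounts are the made-up ones above; only the 50/70 exploit counts come from the report):

        # Hypothetical researcher headcounts from the example above,
        # paired with the report's raw exploit counts.
        researchers = {"open source": 3, "closed source": 37}
        exploits = {"open source": 50, "closed source": 70}

        for kind in researchers:
            per_head = exploits[kind] / researchers[kind]
            print(f"{kind}: {per_head:.1f} exploits per researcher")

        # open source: 16.7 exploits per researcher
        # closed source: 1.9 exploits per researcher
        # Fewer open-source exploits in total, yet far more per researcher,
        # which would point to "easier to find", not "more secure".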

    As they say, "future analyses may want to examine Linux versus other platform types, the similarity of open and closed source code, and exploit class type".

    For now, they haven't done that. It may be that the data was insufficient to do so, or that RAND was less interested, or that RAND was in a rush to publish, or that RAND wants to milk this for as many reports as possible.