
posted by Fnord666 on Thursday February 28 2019, @02:55PM
from the hello-entropy dept.

The National Vulnerability Database (NVD) is a US government-funded resource that does exactly what the name implies: it acts as a database of vulnerabilities in software. It operates as a superset of the Common Vulnerabilities and Exposures (CVE) system, which is run by the non-profit Mitre Corporation with additional government funding. For years it has been good enough; while any organization or process has room to be made more efficient, curating a database of software vulnerabilities reported through crowdsourcing is a challenging undertaking.

Risk Based Security, the private operator of competing database VulnDB, aired their grievances with the public CVE/NVD system in their 2018 Vulnerability Trends report, released Wednesday, with charged conclusions including "there is fertile grounds for attorneys and regulators to argue negligence if CVE/NVD is the only source of vulnerability intelligence being used by your organization," and "organizations are getting late and at times unreliable vulnerability information from these two sources, along with significant gaps in coverage." This criticism is neither imaginative nor unexpected from a privately-owned competitor attempting to justify their product.

In fairness to Risk Based Security, there is a known time delay in CVSS scoring, though they overstate the severity of the problem: an empirical research report finds that "there is no reason to suspect that information for severe vulnerabilities would tend to arrive later (or earlier) than information for mundane vulnerabilities."

https://www.techrepublic.com/article/software-vulnerabilities-are-becoming-more-numerous-less-understood/


Original Submission

 
  • (Score: 0) by Anonymous Coward on Friday March 01 2019, @02:28AM (1 child)

    by Anonymous Coward on Friday March 01 2019, @02:28AM (#808523)

    Jesus tap dancing Christ, is someone STILL complaining about C strings being a problem?
    You do realize that pretty much all string operations in C are just function calls and that you can WRITE YOUR OWN string handling libraries to replace the ancient 1970s ones?
    It's such a standard approach that you can download such a library ready to go or write your own in half a day if you are willing to omit Unicode fancy stuff.

    Everyone has pretty much decided on a string representation as a struct with one field holding the string data in UTF-8, null terminated (UTF-8 was specifically designed to allow for C string backward compatibility), and another field holding the length of the string in bytes.

    We had in-house-written counted-length strings and string functions (in C) at my first job almost 25 years ago!

    Sure "one standard" for C strings using counted length would be best, as would arrays that had a field for array size, but it is possible to add these via a library. The problem is solvable/solved.

  • (Score: 2) by DannyB on Friday March 01 2019, @03:31PM

    by DannyB (5839) Subscriber Badge on Friday March 01 2019, @03:31PM (#808715) Journal

    You do realize . . .

    I very much realize that I was able to write some C++ classes in the 1990s that realized Pascal-like strings. Nothing in C prevents that.

    Problem: anybody's non-standard strings are not the real strings that everyone uses in C.

    As a bonus, my string classes did lazy copy-on-write. That is, if you assigned or copied a string to another string, they shared the common character buffer. As soon as one string was about to be modified, it would do a copy on write: if the buffer was shared, it cloned the character buffer and then made the modifications. Obviously this relied on reference counting for garbage collection, but there could be no data structure cycles. (A rough sketch of the idea follows below.)

    Of course, this was before I knew about the advantages of immutable data structures, and before hardware was as cheap as it is today.

    Then I realized I needed to build a Unicode variant of these. And I had other library classes I had built up, file I/O, etc. And everything used my own strings.

    Problem: my strings weren't the STANDARD strings. So I had to have adapters to work with other libraries.

    You do realize that maybe even in the 90's I knew more than you give me credit for.

    By the late 1990s, every good C++ compiler still implemented a (different) subset of the proposed standard. I finally gave up on C++ and started looking at Java. Cross-platform support was also one of my major goals. I soon realized that despite the platform's runtime costs, it solved ALL of my checklist problems and requirements. But it is not perfect for everyone nor for all uses.
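
    The classes described above were C++, but the lazy copy-on-write idea itself is small enough to sketch in plain C; the names here (cow_buf, cow_str, cow_make_unique, and so on) are invented for illustration, and error handling is omitted:

    #include <stdlib.h>
    #include <string.h>

    /* One shared allocation holds the reference count, the byte length,
       and the NUL-terminated character data. */
    typedef struct {
        size_t refs;    /* number of cow_str handles sharing this buffer */
        size_t len;     /* bytes, excluding the terminator */
        char   data[];  /* flexible array member holding the characters */
    } cow_buf;

    typedef struct { cow_buf *buf; } cow_str;

    cow_str cow_from_cstr(const char *s) {
        size_t len = strlen(s);
        cow_buf *b = malloc(sizeof *b + len + 1);
        b->refs = 1;
        b->len  = len;
        memcpy(b->data, s, len + 1);
        return (cow_str){ b };
    }

    /* "Assignment" is cheap: both handles now share one buffer. */
    cow_str cow_copy(cow_str s) {
        s.buf->refs++;
        return s;
    }

    /* Called before any mutation: clone the buffer only if it is shared. */
    static void cow_make_unique(cow_str *s) {
        if (s->buf->refs > 1) {
            cow_buf *clone = malloc(sizeof *clone + s->buf->len + 1);
            clone->refs = 1;
            clone->len  = s->buf->len;
            memcpy(clone->data, s->buf->data, s->buf->len + 1);
            s->buf->refs--;      /* detach from the shared buffer */
            s->buf = clone;
        }
    }

    void cow_set_byte(cow_str *s, size_t i, char c) {
        cow_make_unique(s);      /* the copy happens here, and only if needed */
        if (i < s->buf->len)
            s->buf->data[i] = c;
    }

    void cow_release(cow_str *s) {
        if (--s->buf->refs == 0)
            free(s->buf);
        s->buf = NULL;
    }

    Since each handle only ever points at one flat buffer, the reference counts can form no cycles, so plain counting is a complete garbage collector here, exactly as noted above.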

    --
    The lower I set my standards the more accomplishments I have.