
posted by Fnord666 on Thursday February 28 2019, @02:55PM
from the hello-entropy dept.

The National Vulnerability Database (NVD) is a US government-funded resource that does exactly what the name implies: it acts as a database of vulnerabilities in software. It operates as a superset of the Common Vulnerabilities and Exposures (CVE) system, which is operated by the non-profit Mitre Corporation with additional government funding. For years it has been good enough; while any organization or process has room to be made more efficient, curating a database of software vulnerabilities reported through crowdsourcing is a challenging undertaking.

Risk Based Security, the private operator of the competing database VulnDB, aired its grievances with the public CVE/NVD system in its 2018 Vulnerability Trends report, released Wednesday, with charged conclusions including "there is fertile grounds for attorneys and regulators to argue negligence if CVE/NVD is the only source of vulnerability intelligence being used by your organization," and "organizations are getting late and at times unreliable vulnerability information from these two sources, along with significant gaps in coverage." This criticism is neither imaginative nor unexpected from a privately-owned competitor attempting to justify its product.

In fairness to Risk Based Security, there is a known time delay in CVSS scoring, though they overstate the severity of the problem, as an empirical research report finds that "there is no reason to suspect that information for severe vulnerabilities would tend to arrive later (or earlier) than information for mundane vulnerabilities."

https://www.techrepublic.com/article/software-vulnerabilities-are-becoming-more-numerous-less-understood/


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by DannyB on Friday March 01 2019, @03:34PM (2 children)

    by DannyB (5839) Subscriber Badge on Friday March 01 2019, @03:34PM (#808716) Journal

    The earliest GC research I personally read about was from the '80s; I'm sure it goes further back. But once you move back to the '70s, memory was so precious and so limited that it is hard to imagine what you could have done to make things better.

    I have a decent-sized hardcover textbook on GC that I bought in the '90s. It covered amazing developments in GC that I found quite informative. From what I see in the various Java GCs, that textbook is clearly obsolete now.

    --
    To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.
  • (Score: 2) by hendrikboom on Saturday March 02 2019, @12:00AM (1 child)

    by hendrikboom (1125) Subscriber Badge on Saturday March 02 2019, @12:00AM (#809005) Homepage Journal

    The original garbage collector was part of the original Lisp implementation.

    The advance I remembered from the '60s was a technique for garbage-collecting Lisp memory without needing extra storage for a stack. It reversed pointers so as to leave a trail back, then reversed them again while backing out, using the cells themselves to hold the stack. It used two marking bits in every cell: one to mark what had been visited, the other to remember which pointers had been reversed.

    This was invented precisely because memory was so precious and limited!
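
    For illustration, here is a minimal C sketch of that pointer-reversal marking technique (Deutsch-Schorr-Waite). The Cell layout and field names are hypothetical; a real Lisp system would pack the two bits into the cells themselves rather than use bitfields.

        #include <stddef.h>

        /* Deutsch-Schorr-Waite marking: traverse a graph of two-pointer
           cells with no auxiliary stack. The trail back to the root is
           kept in the reversed pointers; two bits per cell record what
           has been visited and which pointer currently holds the trail. */
        typedef struct Cell {
            struct Cell *car, *cdr;
            unsigned marked  : 1;    /* cell has been visited         */
            unsigned flagged : 1;    /* cdr (not car) holds the trail */
        } Cell;

        void sw_mark(Cell *root)
        {
            Cell *curr = root, *prev = NULL;

            for (;;) {
                /* Advance: descend car pointers, reversing each one so
                   the cell itself records the way back. */
                while (curr != NULL && !curr->marked) {
                    curr->marked = 1;
                    Cell *next = curr->car;
                    curr->car = prev;       /* reversed pointer = trail */
                    prev = curr;
                    curr = next;
                }

                /* Retreat: pop cells whose cdr side is also finished,
                   restoring their cdr pointers on the way out. */
                while (prev != NULL && prev->flagged) {
                    prev->flagged = 0;
                    Cell *next = prev->cdr;
                    prev->cdr = curr;       /* un-reverse cdr */
                    curr = prev;
                    prev = next;
                }

                if (prev == NULL)
                    return;                 /* back at the root: done */

                /* Switch: car side finished; move the trail from car to
                   cdr and descend the cdr side. */
                prev->flagged = 1;
                Cell *next = prev->cdr;
                prev->cdr = prev->car;      /* trail now lives in cdr  */
                prev->car = curr;           /* restore the car pointer */
                curr = next;
            }
        }

    When the loop finishes, every reachable cell is marked and every reversed pointer has been restored, at the cost of only those two bits per cell.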

    • (Score: 2) by DannyB on Monday March 04 2019, @02:56PM

      by DannyB (5839) Subscriber Badge on Monday March 04 2019, @02:56PM (#809779) Journal

      I realize GC goes back to the original Lisp. I'm sure the early implementations were simple mark/sweep.

      An excellent GC hardcover textbook I purchased in the early '90s did cover the technique you describe of reversing the pointers and then reversing them back. Another technique it introduced me to applies when the system has virtual memory: keep two separate address spaces within virtual memory, Old and New. To GC, you copy objects from Old space to New space. As live objects are visited, each is copied to New space and the original in Old space is replaced with a tagged forwarding address. When you later encounter a pointer to an Old-space object that already has a forwarding address, you simply update the pointer, since the object has already been moved. Once every live object has been visited, everything live is in New space, and Old space can be discarded or paged out; none of its pages ever need to be swapped back into RAM. As a bonus, New space is compacted, with all blocks consecutively allocated.
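
      A condensed C sketch of that two-space copying scheme (essentially Cheney's algorithm) might look like the following. The object layout, helper names, and the dedicated forwarding field are illustrative assumptions; real collectors overwrite the old object's header with a tagged forwarding pointer, and alignment is elided here for brevity.

          #include <stddef.h>
          #include <string.h>

          typedef struct Obj {
              struct Obj *forward;   /* set once the object has moved */
              size_t      nfields;
              struct Obj *fields[];  /* pointers to other objects     */
          } Obj;

          static char *free_ptr;     /* bump allocator into New space */

          /* Copy one object into New space, or return its forwarding
             address if it has already been moved. */
          static Obj *evacuate(Obj *old)
          {
              if (old == NULL) return NULL;
              if (old->forward) return old->forward;  /* already moved */

              size_t size = sizeof(Obj) + old->nfields * sizeof(Obj *);
              Obj *copy = (Obj *)free_ptr;
              free_ptr += size;             /* alignment elided        */
              memcpy(copy, old, size);
              old->forward = copy;          /* leave forwarding address */
              return copy;
          }

          /* Evacuate the roots, then scan New space left to right,
             evacuating whatever the copied objects point at. When scan
             catches up with free_ptr, all live data is in New space and
             Old space can be discarded without ever paging it in. */
          void collect(Obj **roots, size_t nroots, char *new_space)
          {
              free_ptr = new_space;
              for (size_t i = 0; i < nroots; i++)
                  roots[i] = evacuate(roots[i]);

              char *scan = new_space;
              while (scan < free_ptr) {
                  Obj *obj = (Obj *)scan;
                  for (size_t i = 0; i < obj->nfields; i++)
                      obj->fields[i] = evacuate(obj->fields[i]);
                  scan += sizeof(Obj) + obj->nfields * sizeof(Obj *);
              }
          }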

      Then there were treadmill approaches and coloring approaches, and discussion of how to make a GC run concurrently with the program (the "mutator"). Having read about the various GCs available in Java, I am impressed by the amount of research done in the last couple of decades. The cost of GC is now astonishingly low. Stop-the-world pauses are very short: typically just long enough to mark the root set, after which the program continues while the GC collects alongside it. GC can now run on multiple CPU threads, and things are tuned well enough that full-GC operations, which would cause long pauses, never happen under most workloads. The design of modern GC seems to be to perpetually avoid ever doing a full GC; it just keeps up with the mutator.
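
      The coloring idea reduces to a small invariant that is worth seeing in code. Below is a minimal, illustrative C sketch of tri-color marking with a Dijkstra-style insertion write barrier; the Node layout, the fixed-size gray stack, and all names are assumptions, and a production collector would run this concurrently across threads.

          #include <stddef.h>

          typedef enum { WHITE, GRAY, BLACK } Color;  /* white = unseen */

          typedef struct Node {
              Color        color;
              size_t       nkids;
              struct Node *kids[4];
          } Node;

          static Node  *gray_stack[1024];   /* bounded for brevity */
          static size_t gray_top;

          /* white -> gray: schedule a node for scanning */
          static void shade(Node *n)
          {
              if (n && n->color == WHITE) {
                  n->color = GRAY;
                  gray_stack[gray_top++] = n;
              }
          }

          /* The only stop-the-world step: shade the roots. The mutator
             resumes while mark_step() drains the gray set. */
          void mark_roots(Node **roots, size_t nroots)
          {
              for (size_t i = 0; i < nroots; i++)
                  shade(roots[i]);
          }

          /* One increment of marking: blacken one gray node. Returns 0
             when the gray set is empty and marking is complete. */
          int mark_step(void)
          {
              if (gray_top == 0) return 0;
              Node *n = gray_stack[--gray_top];
              for (size_t i = 0; i < n->nkids; i++)
                  shade(n->kids[i]);
              n->color = BLACK;
              return 1;
          }

          /* Write barrier: if the mutator stores a white node into a
             black one, re-shade it so the collector cannot lose it. */
          void write_barrier(Node *holder, size_t slot, Node *value)
          {
              holder->kids[slot] = value;
              if (holder->color == BLACK)
                  shade(value);
          }

      The write barrier is what lets the mutator run while marking proceeds: anything still white when the gray set empties is provably unreachable and can be reclaimed.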

      --
      To transfer files: right-click on file, pick Copy. Unplug mouse, plug mouse into other computer. Right-click, paste.