
posted by cmn32480 on Tuesday April 18 2017, @01:43AM
from the we-should-demand-it dept.

Seventy years into the computer age, Moshe Y. Vardi at ACM wants to know why we still do not seem to know how to build secure information systems:

Cyber insecurity seems to be the normal state of affairs these days. In June 2015, the U.S. Office of Personnel Management announced it had been the target of a data breach targeting the records of as many as 18 million people. In late 2016, we learned about two data breaches at Yahoo! Inc., which compromised over one billion accounts. Lastly, during 2016, close to 20,000 email messages from the U.S. Democratic National Committee were leaked via WikiLeaks. U.S. intelligence agencies argued that the Russian government directed the breaches in an attempt to interfere with the U.S. election process. Furthermore, cyber insecurity goes way beyond data breaches. In October 2016, for example, emergency centers in at least 12 U.S. states had been hit by a deluge of fake emergency calls. What cyber disaster is going to happen next?

[...] The basic problem, I believe, is that security never gets a high-enough priority. We build a computing system for certain functionality, and functionality sells. Then we discover security vulnerabilities and fix them, and security of the system does improve. Microsoft Windows 10 is much, much better security-wise than Windows XP. The question is whether we are eliminating old vulnerabilities faster than we are creating new ones. Judging by the number of publicized security breaches and attacks, the answer to that question seems to be negative.

This raises some very fundamental questions about our field. Are we investing enough in cybersecurity research? Has the research yielded solid scientific foundations as well as useful solutions? Has industry failed to adopt these solutions due to cost/benefit? More fundamentally, how do we change the trajectory in a fundamental way, so the cybersecurity derivative goes from being negative to being positive?

Previously:
It's 2015. Why do we Still Write Insecure Software?
Report Details Cyber Insecurity Incidents at Nuclear Facilities


Original Submission

 
  • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @07:00PM (#495967) (2 children)

    If you want to be called an "engineer," you should be legally liable for your product collapsing. Structural engineers don't stamp plans that are unsafe. Software "engineers" shouldn't be allowed to sign off on code that could even possibly be unsafe. If they do, they need to pay.

    It's not a fair comparison.

    Any bridge will fail when a dedicated attacker with unbounded time and resources is determined to bring it down by any means necessary. The engineer who signed off on the bridge design should not be held responsible for failing to make it impervious to thermonuclear explosions.

    But with software vulnerabilities, this is the kind of attacker we are talking about.

  • (Score: 0) by Anonymous Coward on Wednesday April 19 2017, @07:39AM (#496176)

    Because script kiddies have nukes.

  • (Score: 1) by Scruffy Beard 2 (6030) on Wednesday April 19 2017, @08:59PM (#496545)

    For the most part, commercial software lacks formal validation.

    Vendors do ad-hoc debugging until it seems to work.

    It is not reasonable to expect a small VoIP system to withstand a 1 Gbps DDoS, but executing arbitrary code due to invalid input should not happen.
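
    To make that last point concrete, here is a minimal C sketch of the kind of invalid-input bug being described, next to a bounds-checked version. The packet format and function names are hypothetical, invented purely for illustration:

        #include <stdint.h>
        #include <stdio.h>
        #include <string.h>

        #define MAX_NAME 64

        /* Vulnerable: trusts the attacker-supplied length field. A claimed
         * length larger than MAX_NAME overflows 'name' on the stack, which
         * is how "invalid input" becomes arbitrary code execution. */
        void parse_packet_unsafe(const uint8_t *pkt, size_t pkt_len) {
            char name[MAX_NAME];
            if (pkt_len < 1) return;
            uint8_t claimed_len = pkt[0];         /* attacker-controlled */
            memcpy(name, pkt + 1, claimed_len);   /* no bounds check! */
            name[claimed_len] = '\0';
            printf("hello, %s\n", name);
        }

        /* Safe: validates the claimed length against both the destination
         * buffer size and the bytes actually received before copying. */
        void parse_packet_safe(const uint8_t *pkt, size_t pkt_len) {
            char name[MAX_NAME];
            if (pkt_len < 1) return;
            size_t claimed_len = pkt[0];
            if (claimed_len >= MAX_NAME || claimed_len > pkt_len - 1) {
                fprintf(stderr, "rejecting malformed packet\n");
                return;
            }
            memcpy(name, pkt + 1, claimed_len);
            name[claimed_len] = '\0';
            printf("hello, %s\n", name);
        }

        int main(void) {
            /* A malformed packet: claims 200 payload bytes, carries 3. */
            uint8_t evil[4] = { 200, 'h', 'i', '!' };
            parse_packet_safe(evil, sizeof evil);      /* rejected cleanly */
            /* parse_packet_unsafe(evil, sizeof evil);  would corrupt memory */
            return 0;
        }

    The unsafe version trusts an attacker-controlled length field, so a single malformed packet can overwrite the stack; the safe version rejects any length that exceeds either the destination buffer or the bytes actually received. Formal validation, in the sense the comment uses it, would mean proving such bounds hold for every possible input rather than testing until it seems to work.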