
SoylentNews is people

posted by cmn32480 on Tuesday April 18 2017, @01:43AM   Printer-friendly
from the we-should-demand-it dept.

Seventy years into the computer age, Moshe Y. Vardi at ACM wants to know why we still do not seem to know how to build secure information systems:

Cyber insecurity seems to be the normal state of affairs these days. In June 2015, the U.S. Office of Personnel Management announced it had been the target of a data breach targeting the records of as many as 18 million people. In late 2016, we learned about two data breaches at Yahoo! Inc., which compromised over one billion accounts. Lastly, during 2016, close to 20,000 email messages from the U.S. Democratic National Committee were leaked via WikiLeaks. U.S. intelligence agencies argued that the Russian government directed the breaches in an attempt to interfere with the U.S. election process. Furthermore, cyber insecurity goes way beyond data breaches. In October 2016, for example, emergency centers in at least 12 U.S. states had been hit by a deluge of fake emergency calls. What cyber disaster is going to happen next?

[...] The basic problem, I believe, is that security never gets a high-enough priority. We build a computing system for certain functionality, and functionality sells. Then we discover security vulnerabilities and fix them, and security of the system does improve. Microsoft Windows 10 is much, much better security-wise than Windows XP. The question is whether we are eliminating old vulnerabilities faster than we are creating new ones. Judging by the number of publicized security breaches and attacks, the answer to that question seems to be negative.

This raises some very fundamental questions about our field. Are we investing enough in cybersecurity research? Has the research yielded solid scientific foundations as well as useful solutions? Has industry failed to adopt these solutions due to cost/benefit? More fundamentally, how do we change the trajectory in a fundamental way, so the cybersecurity derivative goes from being negative to being positive?

Previously:
It's 2015. Why do we Still Write Insecure Software?
Report Details Cyber Insecurity Incidents at Nuclear Facilities


Original Submission

 
  • (Score: 3, Insightful) by NotSanguine on Tuesday April 18 2017, @06:27AM (5 children)

    We do know how to build secure systems. They are usually locked up, locked down, and usable only by small numbers of trained people. It would seem that the large insecurity problem comes from the general goods and consumer market, where things are supposed to be cheap and easy to use. You strip away security to make them, for lack of a better word, user-friendly and available. Which in turn seems to spread like a digital cancer into the secure world too.

    An excellent point. In the enterprise world, the level of security is only worth the value of the assets being protected. This is generally a pretty simple cost/benefit analysis, with resources being allotted to security proportionately. It doesn't always happen like that, since some organizations (fewer and fewer these days) are unconcerned about IT security (and often physical security as well).

    The consumer world is a vastly different animal, however. Some folks want the new shiny, others want convenience, everyone wants it cheap, and most don't give a thought to the potential security risks of web accessible garage door openers and Google Home/Alexa devices, etc., etc., etc.

    Vendors know that and expend resources on security only when they've gotten burned (bad publicity, class-action lawsuits, etc.), and not always then. This is nothing new, nor is it unique to technology. Vendors do cost/benefit analyses too, to determine whether safety or security issues should be addressed up-front, or if the potential legal liabilities are less costly.

    That sort of thinking is much, much worse when it comes to pharmaceuticals, children's toys, airbags, and any number of other things that can have far more dire consequences than poorly secured technology.

    Case in point, major pharmaceutical companies sold blood products they knew were tainted with AIDS [mercola.com]. More detail about this can be found here [wikipedia.org].

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 1) by pTamok on Tuesday April 18 2017, @09:38AM (2 children)

    by pTamok (3042) on Tuesday April 18 2017, @09:38AM (#495766)

    I really, really wouldn't use mercola.com as a reference. It actively detracts from the credibility of what you are saying (and the contaminated blood-products story is worth saying).

    https://sciencebasedmedicine.org/joe-mercola-quackery-pays/ [sciencebasedmedicine.org]
    http://www.quackwatch.com/11Ind/mercola.html [quackwatch.com]
    http://scienceblogs.com/insolence/2012/08/03/15-years-of-promoting-quackery/ [scienceblogs.com]

    • (Score: 4, Interesting) by NotSanguine on Tuesday April 18 2017, @04:46PM (1 child)

      I really, really wouldn't use mercola.com as a reference. It actively detracts from the credibility of what you are saying (and the contaminated blood-products story is worth saying).

      Thanks for the heads-up. I was unaware that Mercola (I hadn't seen the site before) was a quackery website; I only used it because it was high in the search results. :(

      However, that particular story is true. What's more, back in the 1970s, Bayer and other vendors of clotting Factor VIII knew that there was a risk their products were tainted (with hepatitis and other blood-borne diseases, as well as what would later be called HIV) and didn't test the blood used for their products.

      Given that Factor VIII concentrate is made from human blood plasma [cdc.gov] (recombinant Factor VIII is not, and wasn't approved for use in the US until 1992; hmm, I wonder why research into it really took off after the mass infection and death of hemophiliacs around the world?), there was always that risk. Bayer, et al. decided that paying off settlements was cheaper than testing the plasma used to make Factor VIII.

      Large numbers of hemophiliacs (who require clotting factor), including my brother-in-law, in the US and elsewhere were infected with HIV. Since protease inhibitors [wikipedia.org] were unavailable until late 1996, most of those folks died slow, painful deaths.

      In any case, that's just one example of corporations deciding that safety was too expensive. And it caused many thousands to die slowly, painfully and unnecessarily.

      --
      No, no, you're not thinking; you're just being logical. --Niels Bohr
      • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @09:32PM

        by Anonymous Coward on Tuesday April 18 2017, @09:32PM (#496029)

        Which would have been trivially solved if every individual was able to negotiate their own contracts with...

        Oh who am I kidding, how would that possibly have stopped this problem?

        "I'm sorry but you didn't sign up for the premium plan which costs only 5000% more so we're not liable for tainted products. Good luck with your AIDS, we do offer a treatment program that costs more per year than you make in a lifetime, but good news it comes with an indentured service clause. If you miss a payment you simply get enrolled in our work-to-live program and we keep your treatments going for the duration of the contract." Queue poor suckers (what you don't wanna die???) being shipped to a mining colony and living the rest of their life in abject misery.

  • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @06:45PM (1 child)

    by Anonymous Coward on Tuesday April 18 2017, @06:45PM (#495958)

    An excellent point. In the enterprise world, the level of security is only worth the value of the assets being protected. This is generally a pretty simple cost/benefit analysis, with resources being allotted to security proportionately.

    Not simply the value of the asset: security is worth the cost of recovering from a breach times the likelihood of that breach occurring.

    Very simple example: I need a snow shovel to clear my walkway, and I am considering what will happen if my shovel is stolen.

    If my shovel is stolen, it will cost me $20 plus a trip to the store to buy a new one. This will probably take about 15 minutes, so if I value my time at $20/hr the total cost of recovery is $25.

    Now, I can't know the exact probability of my shovel being stolen. Where I live it is probably not 0 but should be close to it because I have never heard of snow shovels walking away on their own. So I will have to make up a number, say 0.01 (estimating that one out of every 100 shovels will be stolen).

    With those two estimates, I can conclude that securing my shovel is worth about $0.25. Since I valued my time at $20/hr, this means I am wasting my time securing my shovel if it takes more than 45 seconds over the entire lifetime of the shovel.

    Pretty much any recurring inconvenience will add up to more than 45 seconds over the shovel's lifetime. Therefore, I should maximize availability by leaving the shovel unsecured, close to where I will need it.
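    The expected-loss arithmetic above can be sketched in a few lines of Python. The helper function and all the numbers are just the comment's own illustrative estimates, not real data:

```python
# A sketch of the expected-loss reasoning above.
# All figures are the comment's illustrative estimates, not real data.

def expected_loss(recovery_cost, breach_probability):
    """Worth of securing an asset: recovery cost times breach likelihood."""
    return recovery_cost * breach_probability

hourly_rate = 20.0                           # $/hr the commenter values their time at
recovery = 20.0 + (15 / 60) * hourly_rate    # $20 shovel + 15 minutes of time = $25
worth = expected_loss(recovery, 0.01)        # 1-in-100 theft estimate -> $0.25
effort_seconds = worth / hourly_rate * 3600  # $0.25 at $20/hr -> 45 seconds

print(f"Securing the shovel is worth ${worth:.2f}, "
      f"or about {effort_seconds:.0f} seconds of effort")
```

    The same two inputs (recovery cost, breach probability) drive the enterprise cost/benefit analysis described upthread; only the magnitudes change.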

    • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @09:35PM

      by Anonymous Coward on Tuesday April 18 2017, @09:35PM (#496031)

      A clear example of why we need publicly funded police (to discourage crime) and legislation to require software security. If it were left up to spreadsheets, we would kill off the vast majority of human beings instead of trying to fix our systemic issues. Bean counters are the worst; they should be sent to the back seat instead of running Wall Street and screwing over everyone else just to get a few more beans.