
posted by cmn32480 on Tuesday April 18 2017, @01:43AM   Printer-friendly
from the we-should-demand-it dept.

Seventy years into the computer age, Moshe Y. Vardi at ACM wants to know why we still do not seem to know how to build secure information systems:

Cyber insecurity seems to be the normal state of affairs these days. In June 2015, the U.S. Office of Personnel Management announced it had been the target of a data breach targeting the records of as many as 18 million people. In late 2016, we learned about two data breaches at Yahoo! Inc., which compromised over one billion accounts. Lastly, during 2016, close to 20,000 email messages from the U.S. Democratic National Committee were leaked via WikiLeaks. U.S. intelligence agencies argued that the Russian government directed the breaches in an attempt to interfere with the U.S. election process. Furthermore, cyber insecurity goes way beyond data breaches. In October 2016, for example, emergency centers in at least 12 U.S. states had been hit by a deluge of fake emergency calls. What cyber disaster is going to happen next?

[...] The basic problem, I believe, is that security never gets a high-enough priority. We build a computing system for certain functionality, and functionality sells. Then we discover security vulnerabilities and fix them, and security of the system does improve. Microsoft Windows 10 is much, much better security-wise than Windows XP. The question is whether we are eliminating old vulnerabilities faster than we are creating new ones. Judging by the number of publicized security breaches and attacks, the answer to that question seems to be negative.

This raises some very fundamental questions about our field. Are we investing enough in cybersecurity research? Has the research yielded solid scientific foundations as well as useful solutions? Has industry failed to adopt these solutions due to cost/benefit? More fundamentally, how do we change the trajectory in a fundamental way, so the cybersecurity derivative goes from being negative to being positive?

Previously:
It's 2015. Why do we Still Write Insecure Software?
Report Details Cyber Insecurity Incidents at Nuclear Facilities


Original Submission

Related Stories

Report Details Cyber Insecurity Incidents at Nuclear Facilities 3 comments

from the mission-critical-systems-pwned dept.

El Reg reports

The nuclear industry is ignorant of its cybersecurity shortcomings, claimed a report released [October 5] and, despite understanding the consequences of an interruption to power generation and the related issues, cyber efforts to prevent such incidents are lacking.

The report adds that search engines can "readily identify critical infrastructure components with" VPNs, some of which are power plants. It also adds that facility operators are "sometimes unaware of" them.

Nuclear plants don't understand their cyber vulnerability, stated the Chatham House report, which found industrial, cultural, and technical challenges affecting facilities worldwide. It specifically pointed to a "lack of executive-level awareness".

[...] Among [the 18-month study's] more frightening discoveries is that the notion "nuclear facilities are 'air gapped'" is a "myth", as "the commercial benefits of internet connectivity mean[s] that nuclear facilities" are increasingly networked.

[More after the break.]

It's 2015. Why do we Still Write Insecure Software? 78 comments

Software developer Jeremy Bowers has an interesting article about why it's so hard to write secure software. In summary (and I quote):

Let's talk about why it's so hard. My thesis is simple: We write insecure software because our coding environment makes it easier to write insecure software than secure software.

But exploring what it fully means can lead some surprising places. Please join me on a journey as I try to show you why that is not trivially true, but in fact, profoundly true. We do not occasionally pick up insecure tools, like a broken encryption routine or misusing a web framework; we are fish swimming in an ocean of insecurity, oblivious to how steeped in it we are.


What say you, Soylentils? Do you find that your software development environment and/or tools make it difficult to write secure software? What frustrations have you encountered? How have you worked around them?

Original Submission

  • (Score: 2) by Snotnose on Tuesday April 18 2017, @02:11AM (21 children)

    by Snotnose (1623) on Tuesday April 18 2017, @02:11AM (#495649)

    You have a backdoor into my router that gets hijacked? I should be able to sue you for 7 figures. Likewise any other intrusion.

    Until that happens we are all vulnerable.

    / yeah, I understand folks like the NSA exist
    // but for fuck's sake, give us tech types a fighting chance
    /// We're all screwed in the short term, at least by the time I die (I'm 58)

    --
    Why shouldn't we judge a book by its cover? It's got the author, title, and a summary of what the book's about.
    • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @04:30AM (1 child)

      by Anonymous Coward on Tuesday April 18 2017, @04:30AM (#495680)

      A functioning market requires that there be consequences, whether negative or positive, for any given interaction.

      Society has become very poorly defined; nobody really understands who's liable anymore—and that includes a lack of appreciation for personal responsibility.

      • (Score: 2) by Immerman on Tuesday April 18 2017, @01:13PM

        by Immerman (3985) on Tuesday April 18 2017, @01:13PM (#495815)

        Unfortunately someone decided corporations were a good idea - a legal construct designed specifically to shield everyone involved from any personal consequences except the accumulation of money.

    • (Score: 4, Insightful) by julian on Tuesday April 18 2017, @04:40AM (17 children)

      by julian (6003) Subscriber Badge on Tuesday April 18 2017, @04:40AM (#495683)

      If you want to be called an "engineer" you should be legally liable for your product collapsing. Structural engineers don't stamp plans that are unsafe. Software "engineers" shouldn't be allowed to sign off on code that could even possibly be unsafe. If they do, they need to pay.

      Software is math. Math can be done more perfectly, rigorously, and unerringly, than any other discipline undertaken by we thinking apes. Thus software "engineers" have even less excuse than structural engineers.

      ...and we don't give structural engineers very much room for excuse.

      • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @04:49AM (8 children)

        by Anonymous Coward on Tuesday April 18 2017, @04:49AM (#495686)

        The obligations of the engineer or the client should not be one-size-fits-all; this "There should be a law!!111" cry is what makes life so abhorrently miserable for all of us.

        What you are identifying is the fact that our society is very poorly defined; nobody really has any clear idea what's going on, or how one individual is obliged to another. What there should be, then, is a more robust way for individuals to negotiate and enforce well-defined contracts, and to resolve disputes thereby.

        • (Score: 2) by julian on Tuesday April 18 2017, @05:19AM (1 child)

          by julian (6003) Subscriber Badge on Tuesday April 18 2017, @05:19AM (#495690)

          And then the autistic screeching about coercion starts. Enjoy, everyone!

          Meanwhile, the rest of us need a society that works, balancing physical reality with human psychology.

          • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @05:31AM

            by Anonymous Coward on Tuesday April 18 2017, @05:31AM (#495693)

            Whatevs, Julianbo.

        • (Score: 3, Interesting) by NotSanguine on Tuesday April 18 2017, @06:07AM (4 children)

          What there should be, then, is a more robust way for individuals to negotiate and enforce well-defined contracts, and to resolve disputes thereby.

          Oh, there are lots of well defined [cisco.com] contracts already. In addition to the one linked above, here are a few more:
          http://www.linksys.com/ua/end-user-license-agreement/ [linksys.com]
          http://www.samsung.com/us/support/HQ_index_EULA_popup.html [samsung.com]
          https://www.apple.com/legal/internet-services/itunes/appstore/dev/stdeula/ [apple.com]
          https://www.xfinity.com/Corporate/Customers/Policies/SubscriberAgreement.html [xfinity.com]
          Microsoft's contracts are so well defined that they have customised them by point of purchase and product [microsoft.com]

          There are hundreds more I could list, but you get the idea. The issue isn't a lack of well defined contracts; it's that the contracts we do have are designed specifically to dump all risk on the end user and to ensure that, whether or not a corporation and/or its products do what they're supposed to, the corporation is held harmless in any event.

          Even website click-through (clickwrap) contracts have been routinely found to be legally binding, although simple-access (browsewrap) contracts have not [americanbar.org].

          Go ahead and read through all those well defined contracts, then explain to me, given the mechanisms for purchase and access, how an end user can "negotiate" fair terms, even with a "well defined contract"?

          tl;dr: your ideas are interesting and I would like to subscribe to your newsletter.

          --
          No, no, you're not thinking; you're just being logical. --Niels Bohr
          • (Score: 1, Insightful) by Anonymous Coward on Tuesday April 18 2017, @06:34AM (3 children)

            by Anonymous Coward on Tuesday April 18 2017, @06:34AM (#495712)
            • There does not exist a robust means by which individuals can negotiate and enforce well defined contracts, or resolve disputes thereby.

            • Users gladly agree to whatever. Thus, are they not to blame?

            • (Score: 2) by NotSanguine on Tuesday April 18 2017, @06:39AM (1 child)

              There does not exist a robust means by which individuals can negotiate and enforce well defined contracts, or resolve disputes thereby.

              Users gladly agree to whatever. Thus, are they not to blame?

              Your ideas are interesting and I would like to subscribe to your newsletter. Podcast? Crayons on construction paper? Wall writing with your feces?

              --
              No, no, you're not thinking; you're just being logical. --Niels Bohr
              • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @06:52AM

                by Anonymous Coward on Tuesday April 18 2017, @06:52AM (#495723)

                Your urine allows very little in the way of communication.

            • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @01:17PM

              by Anonymous Coward on Tuesday April 18 2017, @01:17PM (#495817)

              Thus, are they not to blame?

              Ultimately, yes.

              To blame for what exactly? Sure, we can blame them and go neener neener, but what we've almost gotten at comes back to the thing about men and angels.

              There is no well-defined contract between, say, a cloud service provider, the user as a customer of this cloud, the user as an actor that routes traffic over the internet, and the botnet operator. If a user's network participates in a DDOS against a cloud that user has never even heard of, how could we possibly enforce a contract against them? If we could do that, the user might be able to cascade the responsibility to the botnet operator or equipment manufacturer, etc., by way of some other contract (or maybe the buck stops at the user due to various contracts).

              (It is within my capability to imagine some entity operating on the free market [which does not necessarily need to be an ISP] that requires all users of, say, a competing logical region of the internet to hold contracts with the botnet operator, as another user or actor who routes traffic, which would make enforcement possible in the abstract.)

              What we have here is a lack of any kind of system to deal with these things. We don't have an egalitarian contract system where everybody is an equal (and I continue to think such a thing is unworkable). Additionally, the warlord whose violent imposition the user is subject to is also derelict in his duty.

              The least we could hope for is for our violently imposed warlord (government) to bring his resources to bear on the (malicious) botnet operator; the (negligent) user, equipment manufacturer, ISP, OS vendor, etc; or some combination.

              The only reason we accept this warlord is because he protects us from other warlords. What good is a warlord who won't do that? He's derelict in his duty because he's simply allowing another warlord (the botnet operator) to infringe on our quiet enjoyment (to borrow a term from the implied rental contract with our warlord) without even so much as trying to do anything about it.

        • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @09:15PM

          by Anonymous Coward on Tuesday April 18 2017, @09:15PM (#496019)

          Nice! I'm happy to see you changing it up a bit, and so far no downmod!

          There are times when society should create laws to protect against malfeasance. Public roads / bridges / other infrastructure should be held to a high safety standard, and that is only possible by creating a law with clear requirements. The biggest problem I see with eliminating government and moving to individual contracts is that no one is an expert on everything. This would make it very easy for a company to screw over its customers just to increase their profit margin.

          I imagine you might respond with "if they screw them over then no one would use that company and so the market would correct itself" but let me use an example.

          A small town needs a new bridge, so it puts out a proposal and collects bids. They select a company that seems good, but the company ends up using cheaper materials and the bridge collapses in half the expected lifetime, say 10-20 years. Turns out this company had a history of doing this, so they get sued into oblivion, but due to their corporate structure the owners were able to pocket the profits while tanking the company. Then they create a new company and start this process all over.

          More examples could be brought up, such as electronics devices not polluting the air waves and causing aircraft guidance problems, along with a million other circumstances which could easily be overlooked or actively ignored by private companies. Without a large agency to oversee these various issues there is no way to prevent such problems from happening. Whether you call it government, or the majority of people use a couple of trusted "oversight" companies, it doesn't matter. You effectively have the same concepts and humanity pays for it either through taxes or higher service / product costs.

          Personally I would choose the option of government for safety oversight. Private businesses have financial motivation for doing a bad job. Yes such corruption happens in government, but there is accountability. With private businesses there is much less accountability and history currently indicates that private businesses can and will make bad decisions in the pursuit of profit. Now we can have discussions about any particular piece of regulation, but you'll never convince me to hand the reins over to profit motivated businesses.

          We have a big enough problem with corporate interests buying legislation, in your reality they would simply do as they please and the biggest corporations would become vertically integrated. In a very short time period you would have corporate states since who would be there to break up such monopolies? That would be 100X worse than government paired with private business.

      • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @07:00PM (2 children)

        by Anonymous Coward on Tuesday April 18 2017, @07:00PM (#495967)

        If you want to be called an "engineer" you should be legally liable for your product collapsing. Structural engineers don't stamp plans that are unsafe. Software "engineers" shouldn't be allowed to sign off on code that could even possibly be unsafe. If they do, they need to pay.

        It's not a fair comparison.

        Any bridge will fail when a dedicated attacker with unbounded time and resources is determined to bring the bridge down by any means necessary. The engineer who signed off on the bridge design should not be held responsible because they failed to make the bridge impervious to thermonuclear explosions.

        But with software vulnerabilities, this is the kind of attacker we are talking about.

        • (Score: 0) by Anonymous Coward on Wednesday April 19 2017, @07:39AM

          by Anonymous Coward on Wednesday April 19 2017, @07:39AM (#496176)

          Because script kids have nukes.

        • (Score: 1) by Scruffy Beard 2 on Wednesday April 19 2017, @08:59PM

          by Scruffy Beard 2 (6030) on Wednesday April 19 2017, @08:59PM (#496545)

          For the most part, commercial software lacks formal validation.

          They do ad-hoc debugging until it seems to work.

          It is not reasonable to expect a small VOIP system to withstand a 1Gbps DDOS, but executing arbitrary code due to invalid input should not happen.

      • (Score: 3, Insightful) by tibman on Tuesday April 18 2017, @09:18PM (1 child)

        by tibman (134) Subscriber Badge on Tuesday April 18 2017, @09:18PM (#496021)

        Internet connected software is like building a bridge where people are driving over it in a completely lawless fashion. They can ignore every warning on the bridge. Weight limits, speed limits, you name it. They can decide to do "bridge research" by jamming 1,000,000 cars into the entrance at once just to see what happens. They can build a car one km long and send it over the bridge. They can take jackhammers to the bridge all night and day trying to break it. Software engineers can certify software perfectly fine if users used it the way it was intended. Just like a structural engineer and a bridge. It's like complaining to structural engineers that their bridge isn't secure when a foreign army drops 500lb bombs all over it. You never asked us to make it resistant to 500lb bombs! (certainly didn't pay us for that, lol)

        I can probably rant about software not being math too, but i have to leave : P

        --
        SN won't survive on lurkers alone. Write comments.
        • (Score: 1, Informative) by Anonymous Coward on Wednesday April 19 2017, @07:43AM

          by Anonymous Coward on Wednesday April 19 2017, @07:43AM (#496177)

          I can probably rant about software not being math too, but i have to leave : P

          When you come back, let's do this, because I believe it's the crux of this whole thing. That is, you can prove with maths that a bridge will not take a certain amount of weight, and similarly you can mathematically prove some algo is bulletproof.

          Yes it's expensive and yes we still should do it.

      • (Score: 2) by Wootery on Wednesday April 19 2017, @11:40AM (2 children)

        by Wootery (2341) on Wednesday April 19 2017, @11:40AM (#496247)

        Math can be done more perfectly, rigorously, and unerringly, than any other discipline undertaken by we thinking apes. Thus software "engineers" have even less excuse than structural engineers.

        I agree that the quality standards of modern software are embarrassing, but this isn't a sensible argument. Truly perfect software can be created... if you have an endless supply of time and money, and a concretely defined spec. Which almost never happens.

        In practice, software isn't really like doing math, and isn't really like bridge engineering. A better comparison is hardware. Modern CPUs are monstrously complex, but they still work reliably (though imperfectly). That said, most software projects are subject to a less stable spec than a CPU.

        Of course, the stakes and budgets are also salient. It's acceptable for a video-game to crash once every 100 hours, but it's not acceptable for a car's brakes to be hackable. Hackable routers are somewhere in the middle, but I agree it's unacceptable that it's normal for them to be laughably insecure.

        • (Score: 2) by Scruffy Beard 2 on Wednesday April 19 2017, @09:02PM (1 child)

          by Scruffy Beard 2 (6030) on Wednesday April 19 2017, @09:02PM (#496549)

          The sad part is that even safety-critical systems are built to the "good enough" rather than "proven correct" standard.

          • (Score: 2) by Wootery on Friday April 21 2017, @09:30AM

            by Wootery (2341) on Friday April 21 2017, @09:30AM (#497316)

            I wouldn't say that's our big issue. It's fairly rare that we have serious issues with software that's acknowledged to be safety-critical. Avionics software isn't required to be formally verified, but they test it to hell and back, and it rarely misbehaves.

            The issue is with the far-too-low standards that are applied to all other software, and with the failure to acknowledge that software really is safety-critical. Idiot car companies failing to isolate their cars' control systems from their entertainment systems, for instance.

    • (Score: 3, Insightful) by kaszz on Tuesday April 18 2017, @01:03PM

      by kaszz (4211) on Tuesday April 18 2017, @01:03PM (#495811) Journal

      I should be able to sue you for 7 figures.

      Just ask the medical doctors how that turned out. They do what they have to in order to make a living and avoid lawsuits. Not the best care; optimization for bread and law. And of course insurance companies and lawyers laugh all the way to the bank at how the people allowed this from the start.

      It's the same with DNS, once it went from practicality, technical merit and good judgement to money and law. Sanity went out too.

      So don't invite these law trolls to yet another area to screw around with. They only benefit politicians, corporate media, bankers, lawyers and insurers. But almost never you or the party they were intended to protect. If it's regulated by law, then these types of actors will invite themselves to crash your party.

      Game theory!

  • (Score: 2) by Thexalon on Tuesday April 18 2017, @03:26AM (10 children)

    by Thexalon (636) on Tuesday April 18 2017, @03:26AM (#495661)

    We absolutely know how to build secure computer systems. We do for some very critical stuff, like the space shuttle. The thing is, we usually don't, because it's much cheaper, faster, and easier to build an insecure system and swear up and down that it's secure than it is to build a secure system. And by the time the problem shows itself, the people responsible for the decision are long gone.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @03:30AM (1 child)

      by Anonymous Coward on Tuesday April 18 2017, @03:30AM (#495664)

      A secure system can only be secure from the ground up, and only as far as its users respect that security. The space shuttle had the benefit of being secured by the entire US military-industrial complex, and the astronauts were selected to be the very best of the very best of the very best :P

      There's still no such thing as "absolutely secure".

    • (Score: 2) by frojack on Tuesday April 18 2017, @04:43AM (2 children)

      by frojack (1554) on Tuesday April 18 2017, @04:43AM (#495685) Journal

      Well, as TFS says "The basic problem is that security never gets a high-enough priority."

      I also think software in general suffers from the German Disease. If one layer of abstraction is good, three, four, or five layers must be better. Pointless complexity expands the number of attack surfaces.

      The unix-like systems had a great deal more security built in, but even these suffer from weaknesses built into every single system component, partly because the language they gravitated toward for coding didn't protect against buffer overruns and data type misuse.

       

      --
      No, you are mistaken. I've always had this sig.
      • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @07:22AM

        by Anonymous Coward on Tuesday April 18 2017, @07:22AM (#495731)

        Pfft if I wanted protection from buffer overrun and data type misuse, I'd use managed VM languages like Java or C#.

      • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @09:25PM

        by Anonymous Coward on Tuesday April 18 2017, @09:25PM (#496027)

        Yeah, I'm seeing more and more projects go the route of massive dependency injections. Updating becomes a hellscape of version control, and packages that worked before no longer work well with some other required package, etc. Short term productivity gains are preferred over long term stability and management. Node projects horrify me, even a simple theming framework can require hundreds of MB of libraries with package issues getting worse over time. OSX horrifies me, some people have computers a few years old and have to buy a new license just to install the latest web browsers. What kind of madness is that???

        I think the larger issue is that the average software needs of most users has become a "solved problem" so now the software companies are coming up with products with short shelf lives. To run the latest widget you now need an expensive update, but if you ask the actual software engineers you find out that it would be trivially easy and inexpensive for the company to patch older systems.

        Software should be designed for long term stability and security.

    • (Score: 2) by AnonTechie on Tuesday April 18 2017, @07:30AM (4 children)

      by AnonTechie (2275) on Tuesday April 18 2017, @07:30AM (#495733) Journal

      We absolutely know how to build secure computer systems. We do for some very critical stuff, like the space shuttle.

      Some additional information related to this:

      During November 1972, the Shuttle program "discovered" the IBM AP-101, a variant of the same 4Pi computer that had flown on Skylab. This 32-bit machine had 32 kilowords of memory and a floating-point instruction set, consumed 370 watts of power, weighed slightly less than 50 pounds, and cost $87,000 each.

      NASA forced one of the most stringent test and verification processes ever undertaken on IBM for the primary avionics system software. The result achieved by the 300 IBM programmers, analysts, engineers, and subcontractors was impressive. An analysis accomplished after the Challenger accident showed that the IBM-developed PASS (Primary Avionics Software System) software had a latent defect rate of just 0.11 errors per 1,000 lines of code; for all intents and purposes, it was considered error-free. But this remarkable achievement did not come easily or cheap. In an industry where the average line of code cost the government (at the time of the report) approximately $50 (written, documented, and tested), the Primary Avionics System Software cost NASA slightly over $1,000 per line. A total of $500 million was paid to IBM for the initial development and support of PASS.

      Advanced Vehicle Automation and Computers Aboard the Shuttle [nasa.gov]

      --
      Albert Einstein - "Only two things are infinite, the universe and human stupidity, and I'm not sure about the former."
      • (Score: 2) by MostCynical on Tuesday April 18 2017, @10:33AM (3 children)

        by MostCynical (2589) on Tuesday April 18 2017, @10:33AM (#495775) Journal

        I expect most modern software is far less than $50 per line, and is unlikely to be tested beyond simple functional testing (did not break/did not kill anyone), and is unlikely to be documented *at all*.

        IoT stuff is likely being coded for $0.50 per line.

        A "good" router, to NASA spec, would cost $10,000,000.

        Alas, everyone goes and buys the $200 model.

        --
        "I guess once you start doubting, there's no end to it." -Batou, Ghost in the Shell: Stand Alone Complex
        • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @06:09PM (1 child)

          by Anonymous Coward on Tuesday April 18 2017, @06:09PM (#495937)

          A "good" router, to NASA spec, would cost $10,000,000.

          Alas, everyone goes and buys the $200 model.

          Which makes perfect sense, because the security of your router against attackers is simply not worth $9,999,800. For most people I would say it's not even worth $10. These products have essentially no margin, so any extra development cost will mean a higher price tag on the shelf.

          A rational person acting in their own best interest should, all else being equal, choose the cheaper router over the "more secure" one. So it's no wonder that manufacturers don't bother with security, because it represents added cost for little, if any, benefit.

          • (Score: 0) by Anonymous Coward on Wednesday April 19 2017, @07:45AM

            by Anonymous Coward on Wednesday April 19 2017, @07:45AM (#496179)

            A rational person acting in their own best interest should, all else being equal, choose the cheaper router over the "more secure" one. So it's no wonder that manufacturers don't bother with security, because it represents added cost for little, if any, benefit.

            You don't understand what that phrase means, do you?

        • (Score: 2) by urza9814 on Wednesday April 19 2017, @01:23PM

          by urza9814 (3954) on Wednesday April 19 2017, @01:23PM (#496279) Journal

          I expect most modern software is far less than $50 per line, and is unlikely to be tested beyond simple functional testing (did not break/did not kill anyone), and is unlikely to be documented *at all*.

          IoT stuff is likely being coded for $0.50 per line.

          A "good" router, to NASA spec, would cost $10,000,000.

          Alas, everyone goes and buys the $200 model.

          I suspect it's still $50/line or more, but with $40 of every $50 going to management, $5 for time the coder spent updating Excel sheets of their progress, $4.75 for time the coder spent idle/browsing the web, and $0.25 for the time spent actually writing that line of code.

          Documentation does exist IME, but usually the requirements are written after the software is completed so that they match the actual behavior, and then the code is changed six more times before release without any updates to the docs. And testing begins six hours before the code gets deployed to production.

  • (Score: 3, Insightful) by Azuma Hazuki on Tuesday April 18 2017, @04:36AM (1 child)

    by Azuma Hazuki (5086) on Tuesday April 18 2017, @04:36AM (#495682) Journal

    I'm glad that this has been pointed out even in the article title. Basically, if a business is selling something, profit trumps all else. You wanna see security? Make it more expensive to leave it out.

    --
    I am "that girl" your mother warned you about...
    • (Score: 2) by kaszz on Tuesday April 18 2017, @01:08PM

      by kaszz (4211) on Tuesday April 18 2017, @01:08PM (#495812) Journal

      There is no declaration of security problems on the package. Nor can you return it if you discover it has a vulnerable version. And as an individual consumer, your bargaining power is minuscule.

      Top it of with closed source code and undocumented hardware.

  • (Score: 3, Insightful) by looorg on Tuesday April 18 2017, @05:36AM (6 children)

    by looorg (578) on Tuesday April 18 2017, @05:36AM (#495694)

    We do know how to build secure systems. They are usually locked up, locked down, and usable only by small numbers of trained people. It would seem that the large insecurity problem comes from the general goods and consumer market, where things are supposed to be cheap and easy to use. You are stripping away security to make them, for lack of a better word, user friendly and available. Which in turn seems to spread like a digital cancer into the secure world too.

    • (Score: 3, Insightful) by NotSanguine on Tuesday April 18 2017, @06:27AM (5 children)

      We do know how to build secure systems. They are usually locked up, locked down, and usable only by small numbers of trained people. It would seem that the large insecurity problem comes from the general goods and consumer market, where things are supposed to be cheap and easy to use. You are stripping away security to make them, for lack of a better word, user friendly and available. Which in turn seems to spread like a digital cancer into the secure world too.

      An excellent point. In the enterprise world, the level of security is only worth the value of the assets being protected. This is generally a pretty simple cost/benefit analysis, with resources being allotted to security proportionately. It doesn't always happen like that, since some organizations (fewer and fewer these days) are unconcerned about IT security (and often physical security as well).

      The consumer world is a vastly different animal, however. Some folks want the new shiny, others want convenience, everyone wants it cheap, and most don't give a thought to the potential security risks of web accessible garage door openers and Google Home/Alexa devices, etc., etc., etc.

      Vendors know that and expend resources on security only when they've gotten burned (bad publicity, class-action lawsuits, etc.), and not always then. This is nothing new, nor is it unique to technology. Vendors do cost/benefit analyses too, to determine whether safety or security issues should be addressed up-front, or if the potential legal liabilities are less costly.

      That sort of thinking is much, much worse when it comes to pharmaceuticals, children's toys, airbags, and any number of other things which can have far more dire consequences than poorly secured technology.

      Case in point, major pharmaceutical companies sold blood products they knew were tainted with AIDS [mercola.com]. More detail about this can be found here [wikipedia.org].

      --
      No, no, you're not thinking; you're just being logical. --Niels Bohr
      • (Score: 1) by pTamok on Tuesday April 18 2017, @09:38AM (2 children)

        by pTamok (3042) on Tuesday April 18 2017, @09:38AM (#495766)

        I really, really wouldn't use mercola.com as a reference. It actively detracts from the credibility of what you are saying (and the contaminated blood-products story is worth saying).

        https://sciencebasedmedicine.org/joe-mercola-quackery-pays/ [sciencebasedmedicine.org]
        http://www.quackwatch.com/11Ind/mercola.html [quackwatch.com]
        http://scienceblogs.com/insolence/2012/08/03/15-years-of-promoting-quackery/ [scienceblogs.com]

        • (Score: 4, Interesting) by NotSanguine on Tuesday April 18 2017, @04:46PM (1 child)

          I really, really wouldn't use mercola.com as a reference. It actively detracts from the credibility of what you are saying (and the contaminated blood-products story is worth saying).

          Thanks for the heads up. I was unaware that Mercola (I hadn't seen it before) was a quackery website. I only used it as it was high up in the search results. :(

          However, that particular story is true. What's more, back in the 1970s, Bayer and other vendors of clotting Factor VIII knew that there was a risk that their products were tainted (with hepatitis and other blood-borne diseases as well as what would later be called HIV) and didn't test the blood used for their products.

          Given that Factor VIII concentrate is made from human blood plasma [cdc.gov] (recombinant Factor VIII is not, and wasn't approved for use in the US until 1992; hmm, I wonder why research into that really took off after the mass infection and death of hemophiliacs around the world?), there was always that risk. Bayer, et al. decided that paying off settlements was cheaper than testing the plasma used to make Factor VIII.

          Large numbers of hemophiliacs (who require clotting factor), including my brother-in-law, in the US and elsewhere were infected with HIV. Since protease inhibitors [wikipedia.org] were unavailable until late in 1996, most of those folks died slow, painful deaths.

          In any case, that's just one example of corporations deciding that safety was too expensive. And it caused many thousands to die slowly, painfully and unnecessarily.

          --
          No, no, you're not thinking; you're just being logical. --Niels Bohr
          • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @09:32PM

            by Anonymous Coward on Tuesday April 18 2017, @09:32PM (#496029)

            Which would have been trivially solved if every individual was able to negotiate their own contracts with...

            Oh who am I kidding, how would that possibly have stopped this problem?

          "I'm sorry but you didn't sign up for the premium plan which costs only 5000% more so we're not liable for tainted products. Good luck with your AIDS, we do offer a treatment program that costs more per year than you make in a lifetime, but good news it comes with an indentured service clause. If you miss a payment you simply get enrolled in our work-to-live program and we keep your treatments going for the duration of the contract." Cue poor suckers (what, you don't wanna die???) being shipped to a mining colony and living the rest of their life in abject misery.

      • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @06:45PM (1 child)

        by Anonymous Coward on Tuesday April 18 2017, @06:45PM (#495958)

        An excellent point. In the enterprise world, the level of security is only worth the value of the assets being protected. This is generally a pretty simple cost/benefit analysis, with resources being allotted to security proportionately.

        Not simply the value of the asset: security is worth the cost of recovery due to a breach times the likelihood of such a breach occurring.

        Very simple example: I need a snow shovel to clear my walkway, and I am considering what will happen if my shovel is stolen.

        If my shovel is stolen, it will cost me $20 plus a trip to the store to buy a new one. This will probably take about 15 minutes, so if I value my time at $20/hr the total cost of recovery is $25.

        Now, I can't know the exact probability of my shovel being stolen. Where I live it is probably not 0 but should be close to it because I have never heard of snow shovels walking away on their own. So I will have to make up a number, say 0.01 (estimating that one out of every 100 shovels will be stolen).

        With those two estimates, I can conclude that securing my shovel is worth about $0.25. Since I valued my time at $20/hr, this means I am wasting my time securing my shovel if it takes more than 45 seconds over the entire lifetime of the shovel.

        Pretty much any recurring inconvenience will add up to more than 45 seconds over the shovel's lifetime. Therefore, I should maximize availability by leaving the shovel unsecured, close to where I will need it.
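
        The arithmetic above works out as a tiny expected-loss calculation. A minimal sketch (all dollar figures and the theft probability are the made-up estimates from this comment, not real data):

        ```python
        # Expected-loss ("is securing it worth it?") estimate for the snow shovel.
        # Every number here is a hypothetical estimate from the comment above.

        replacement_cost = 20.0   # dollars for a new shovel
        recovery_minutes = 15     # trip to the store
        hourly_rate = 20.0        # value of my time, $/hr

        # Total cost if the shovel is stolen: replacement plus my time.
        recovery_cost = replacement_cost + hourly_rate * recovery_minutes / 60

        p_theft = 0.01            # guessed chance the shovel ever gets stolen

        # Expected loss = cost of recovery * likelihood of the event.
        expected_loss = recovery_cost * p_theft

        # Break-even effort: total time worth spending on securing the shovel
        # over its whole lifetime, valued at my hourly rate.
        break_even_seconds = expected_loss / hourly_rate * 3600

        print(f"recovery cost: ${recovery_cost:.2f}")          # $25.00
        print(f"expected loss: ${expected_loss:.2f}")          # $0.25
        print(f"break-even effort: {break_even_seconds:.0f}s") # 45s
        ```

        The same expected-loss shape (recovery cost times breach likelihood) is what the enterprise cost/benefit analysis runs on, just with bigger numbers and fuzzier probabilities.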

        • (Score: 0) by Anonymous Coward on Tuesday April 18 2017, @09:35PM

          by Anonymous Coward on Tuesday April 18 2017, @09:35PM (#496031)

          A clear example of why we need publicly funded police (to discourage crime) and legislation to require software security. If it were left up to spreadsheets, we should kill off the vast majority of human beings instead of trying to fix our systemic issues. Bean counters are the worst and should be sent to the back seat instead of running Wall Street and screwing over everyone else just to get some more beans.

  • (Score: 2) by MichaelDavidCrawford on Wednesday April 19 2017, @12:57AM (1 child)

    by MichaelDavidCrawford (2339) Subscriber Badge <mdcrawford@gmail.com> on Wednesday April 19 2017, @12:57AM (#496086) Homepage Journal

    I need passwords that I can easily remember.

    I used to use strong passwords. I kept them in an encrypted document, but then I forgot its master password. I'm not always able to recover my passwords - for example, I had to start a second Facebook account because I don't remember the name of the street I lived on as a child.

    Now that I'm old and cranky, I just want to use the computer, I don't want to cater to its needs.

    --
    Yes I Have No Bananas. [gofundme.com]
    • (Score: 2) by Scruffy Beard 2 on Wednesday April 19 2017, @09:17PM

      by Scruffy Beard 2 (6030) on Wednesday April 19 2017, @09:17PM (#496557)

      If you still have the document, you may be able to use modern CPU/GPU time to crack it.

      I have some Bitcoin locked up because I managed to mis-type the password twice. Or, alternatively, Mike Hearn's lack of attention to detail bit me in the ass for using a foreign character set.
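
      When you roughly remember the password (as with a double mis-type), cracking your own file often comes down to generating near-miss variants of the guess and trying each one. A minimal sketch of the candidate-generation side; `try_password` here is a hypothetical callback standing in for whatever decryption check your document or wallet format actually needs:

      ```python
      def typo_variants(guess: str) -> set[str]:
          """Generate single-edit near-misses of a remembered password:
          adjacent-character swaps, single deletions, and case flips."""
          variants = {guess}
          # Swap each adjacent pair (classic fat-finger transposition).
          for i in range(len(guess) - 1):
              s = list(guess)
              s[i], s[i + 1] = s[i + 1], s[i]
              variants.add("".join(s))
          # Drop each single character (missed keystroke).
          for i in range(len(guess)):
              variants.add(guess[:i] + guess[i + 1:])
          # Flip the case of each single character (shift-key slip).
          for i in range(len(guess)):
              variants.add(guess[:i] + guess[i].swapcase() + guess[i + 1:])
          return variants

      def recover(guess: str, try_password) -> str | None:
          """Try every variant; try_password(candidate) should return True
          when a candidate actually decrypts the target."""
          for candidate in sorted(typo_variants(guess)):
              if try_password(candidate):
                  return candidate
          return None
      ```

      This only covers single-edit typos; for anything deeper (or for GPU speed against a real wallet), rule-based candidate generation in a tool like hashcat is the usual route.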

(1)