
posted by cmn32480 on Thursday October 29 2015, @03:21PM
from the need-a-penalty-box dept.

Bruce Schneier's blog discusses the recent hack of CIA director John O. Brennan's AOL account (among others) and says of such social-engineering attacks:

The problem is a system that makes this possible, and companies that don't care because they don't suffer the losses. It's a classic market failure, and government intervention is how we have to fix the problem.

It's only when the costs of insecurity exceed the costs of doing it right that companies will invest properly in our security. Companies need to be responsible for the personal information they store about us. They need to secure it better, and they need to suffer penalties if they improperly release it. This means regulatory security standards.

Schneier goes on to suggest that the government should establish minimum standards for results and let the market figure out the best way to meet them. He also partly blames consumers, who demand that any security solution be easy to use, ending with:

It doesn't have to be this way. We should demand better and more usable security from the companies we do business with and whose services we use online. But because we don't have any real visibility into those companies' security, we should demand our government start regulating the security of these companies as a matter of public safety.

Related: WikiLeaks Publishes CIA Chief's Personal Info


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 4, Insightful) by skater on Thursday October 29 2015, @04:43PM

    by skater (4342) on Thursday October 29 2015, @04:43PM (#256099) Journal

    I have a slightly different idea.

    If my info is stolen from your company, you owe me $100. If that info includes SS#, it goes up to $500. Every single person affected by the breach gets this.

    If my stolen info is then used fraudulently, you owe me $10,000 plus all of the costs of cleaning up the mess.

    Make it too expensive to have lax security.
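
    For illustration, a minimal sketch of that schedule in Python (the dollar figures come from this comment; the breach numbers in the example are made up):

        BASE_PENALTY = 100       # per person, for any breach
        SSN_PENALTY = 500        # per person, if an SSN was exposed
        FRAUD_PENALTY = 10_000   # per person whose stolen info was used fraudulently

        def breach_liability(people, ssn_exposed=False, fraud_victims=0, cleanup_costs=0):
            """Total owed by the breached company under the proposed schedule."""
            per_person = SSN_PENALTY if ssn_exposed else BASE_PENALTY
            return people * per_person + fraud_victims * FRAUD_PENALTY + cleanup_costs

        # Example: 1 million exposed SSNs, 1% later used fraudulently
        total = breach_liability(1_000_000, ssn_exposed=True,
                                 fraud_victims=10_000, cleanup_costs=2_500_000)
        print(f"${total:,}")   # $602,500,000 -- lax security gets expensive fast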

  • (Score: 0) by Anonymous Coward on Thursday October 29 2015, @04:46PM

    by Anonymous Coward on Thursday October 29 2015, @04:46PM (#256102)

    How about making them liable for the damages their leak causes? If an information leak costs me $1,000, then they reimburse me that $1,000.

    • (Score: 3, Insightful) by Anonymous Coward on Thursday October 29 2015, @05:13PM

      by Anonymous Coward on Thursday October 29 2015, @05:13PM (#256115)

      Good luck proving and collecting. Ever have a car accident? You never get the cost of your time back, and you never get compensated for the loss of value on your car.

  • (Score: 3, Insightful) by edIII on Thursday October 29 2015, @06:52PM

    by edIII (791) on Thursday October 29 2015, @06:52PM (#256167)

    HUGE problem with that. Ummm, so.... how do you secure anything again? You can't just create blanket penalties like that or everyone will leave the market screaming and your packets will need to go to another country for service. I most certainly would have nothing to do with your system, as the risks are completely insane for a business. Penalties at that level are death sentences. The minimum penalty of $100 would already wipe out a full month's revenue for a corporation averaging $100 in monthly revenue per customer. The $10,000 is akin to 100 months of revenue, or just plain fuck you and go bankrupt.

    You almost act like computers and networks can be secured at all ;)

    About the only way your idea can work is through something like PCI compliance. Corporations would need a liability waiver *if*, and *only if*, they had current certifications. None of those fines can stick as long as the corporation was doing what this "security group" recommended. Otherwise, can you see any possible way I could guarantee the security of your data? I can't. Neither can I afford the hundreds of millions of dollars, at a minimum, that it would take for me to hire all the *expert* programmers to audit ALL OF THE CODE EVERYWHERE. Security starts at the very lowest level, and those levels are currently binaries/blobs locked up by Intel. Purism is going to free it, but until you have free and open computing that can be *openly* audited, you can never actually be sure of anything. Even then, being somewhat sure will probably cost upwards of a billion. That's all I get... a feeling of confidence that I won't wake up one day with hundreds of millions of dollars in fines and my corporation gone (unless it's Too-Big-To-Fail and I got those Senators the blow and hookers they wanted).

    Considering that, you really really need a method for the corporation to instantly pass liability on to its own providers. Meaning, you sue me, and I immediately pass the suit to the group responsible for LibreSSL, as they were responsible for the bug that allowed the attackers to weaken our SSH security and penetrate our networks. I have to be able to shift legal standing in front of a judge and just walk out without penalties. How will that work when LibreSSL has licensing that forces me to absorb it? Back to square one where I'm screwed for their mistakes. Nobody would ever accept free software anymore, as it most certainly would not be free anymore.

    Even if you succeed in getting the massive security group together.... yay! We now have a regulatory agency that can easily punish us if we *refuse* to patch our software, or *refuse* a software update... just like PCI compliance can do now. Congratulations! Innovation just died a grisly death. I'm not going to innovate shit under that regulatory agency when my innovations are not authorized and approved by them. I'm risking $10,000 per customer ID in my database by thinking I can do anything cool, correct, and possibly profitable. I would probably just need to go work for the security group to be safe.

    All of the innovation will begin to rest with a few software corporations that have the *right* relationships with this new regulatory agency and its powers to dictate software on private systems. Which sounds much like DNS and its current fuckery.

    Sorry, but security is so horribly bad right now that it's simply unfair to punish anyone specifically. To me it's a much smarter idea for the government to create a budget of billions of dollars to do precisely what I said: security-audit our fucking software! Including the proprietary platforms, because the auditors would be the government and secure access could be *demanded*.

    It would behoove the government to "unfuck" this situation, as many of the greatest hacks and security fallouts arise from *state-sponsored* attacks on networks and data. Do you really want to destroy my corporation because the U.S. government secured a 0-day for its intelligence networks, but didn't secure itself, so criminals are now using government tools against me?

    We're past the point where simple fines solve this problem by forcing me to do something. You need to tell me what I can do first, dude! :)

    --
    Technically, lunchtime is at any moment. It's just a wave function.
    • (Score: 1, Interesting) by Anonymous Coward on Thursday October 29 2015, @10:24PM

      by Anonymous Coward on Thursday October 29 2015, @10:24PM (#256254)

      Perfect is the enemy of good. How about punishing companies that have been shown to be absolutely incompetent when it comes to security, like Sony? Don't demand that people have perfect security; demand that they're not totally incompetent.

      • (Score: 2) by Tramii on Thursday October 29 2015, @11:35PM

        by Tramii (920) on Thursday October 29 2015, @11:35PM (#256271)

        How about punishing companies that have been shown to be absolutely incompetent when it comes to security, like Sony?

        How do you make that work without making it a completely arbitrary process? We prosecute Sony but let Target off the hook? Who gets to decide what counts as "incompetence"? Does incompetence mean that it was the programmer's first time writing secure software? Or does it mean that we expect better security from "big" companies than from smaller ones?

    • (Score: 3, Funny) by Snotnose on Thursday October 29 2015, @11:16PM

      by Snotnose (1623) on Thursday October 29 2015, @11:16PM (#256264)

      Not cool, dude, not cool at all. I was all YAY!!! Sue the lazy bastids!!! Now you go give a cogent reason why that won't work.

      Not cool, dude.

      --
      When the dust settled America realized it was saved by a porn star.
    • (Score: 4, Insightful) by Tramii on Thursday October 29 2015, @11:30PM

      by Tramii (920) on Thursday October 29 2015, @11:30PM (#256270)

      You can't just create blanket penalties like that or everyone will leave the market screaming and your packets will need to go to another country for service.

      First of all, not everyone will leave the market. It will definitely discourage companies from collecting any information they absolutely do not require. This is a feature, not a bug. Lots of companies will stop asking for your personal data. Most companies will stop writing custom software to handle things like charging credit cards, and a few huge companies with the expertise and know-how will end up handling most of our sensitive data. This all seems like a good thing to me.

      Second, I do not accept the excuse that it is impossible to secure information. (I mean, it literally is impossible to 100% guarantee that no one can steal something from you. But we've lived with this possibility for a long time and society hasn't collapsed yet.) It is certainly possible to eliminate 99% of the information leaks that have happened in the last few years. With proper security practices, you could get some hardened systems running in a few years. Let the companies buy insurance or whatever they need to mitigate the risk, but you don't let people off the hook because something is "hard".

      • (Score: 2) by mojo chan on Friday October 30 2015, @11:02AM

        by mojo chan (266) on Friday October 30 2015, @11:02AM (#256410)

        Companies will just buy insurance to cover the cost. The question business will always ask is: "given a rate of x hacks/year, is it cheaper to pay the fines, buy insurance to cover the fines, or improve security?"

        I'd prefer a system where profits are garnished. If a company is hacked, there is an investigation. If they failed to encrypt the data properly, they are fined; if people lost money as a result, the fine is higher. The fine is always a multiple of yearly profits, so it scales with the business. Profits are garnished until the fine is paid off.
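
        For illustration, a rough sketch of how that calculus and the garnished fine might look (the multipliers and rates are my assumptions; this comment only specifies that fines scale with yearly profit and rise for unencrypted data or actual losses):

            def breach_fine(yearly_profit, unencrypted=False, customers_lost_money=False):
                """Fine as a multiple of yearly profit, per the proposed scheme."""
                multiple = 0.5                   # assumed baseline: half a year's profit
                if unencrypted:
                    multiple += 0.5              # assumed surcharge: data not encrypted
                if customers_lost_money:
                    multiple += 1.0              # assumed surcharge: real losses occurred
                return yearly_profit * multiple

            def years_to_pay_off(fine, yearly_profit, garnish_rate=0.25):
                """Years of garnishing a fixed share of profit until the fine is paid."""
                return fine / (yearly_profit * garnish_rate)

            # The business calculus from above: expected fines vs. insurance vs. security
            profit = 10_000_000
            fine = breach_fine(profit, unencrypted=True, customers_lost_money=True)
            hacks_per_year = 0.2                          # assumed breach rate
            print(fine)                                   # 20,000,000
            print(years_to_pay_off(fine, profit))         # 8.0 years of garnishment
            print(hacks_per_year * fine)                  # 4,000,000/yr expected cost, to weigh against premiums or security spend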

        --
        const int one = 65536; (Silvermoon, Texture.cs)
    • (Score: 2) by darkfeline on Friday October 30 2015, @12:35AM

      by darkfeline (1030) on Friday October 30 2015, @12:35AM (#256292) Homepage

      Yes, I am utterly devastated by the lack of E. coli innovation in my spinach caused by compliance with government regulations. Think of all the cool stuff I could be cultivating in my body!

      Viva innovation!

      --
      Join the SDF Public Access UNIX System today!
    • (Score: 2) by iamjacksusername on Friday October 30 2015, @06:59AM

      by iamjacksusername (1479) on Friday October 30 2015, @06:59AM (#256364)

      I personally have been in favor of per-person statutory damages awarded on a strict liability basis. That is, the basis of the award is that a breach occurred, not whether the company is at fault. On a practical basis, it would be a death knell for the company. Going forward, companies would stop storing people's personal information. The reason we have companies able to exploit massive amounts of data is that there is very little capital cost to holding it. The cost for Facebook to track me is close enough to zero to be almost non-existent.

      There is a tremendous social cost, though; we are seeing it in the mass invasion of privacy, automated tracking of everybody, facial recognition, etc. There is a famous scene in Enemy of the State where Will Smith is being tracked in real time via cameras. This exists now: tow truck drivers all over the country use an automated license plate recognition system tied to a camera in their cars. It automatically checks every license plate it sees against a central repo database, logging time and location. Police departments can buy access to this database. Facial recognition, I am certain, is not far behind. We are heading full steam ahead into a surveillance society and nobody wants to stop it, because these costs are being externalized. By creating statutory fines, we make companies (and people!) incur these costs.
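
      To make the externalized cost concrete, here is a minimal sketch of the scanning loop just described (the database contents and all names are hypothetical):

          import time

          # Hypothetical central database: plate -> repossession status
          REPO_DATABASE = {"ABC1234": "repo ordered"}
          sighting_log = []   # every lookup is recorded, hit or miss

          def check_plate(plate, location):
              """Log the sighting, then check the plate against the repo list."""
              sighting_log.append((plate, location, time.time()))  # time + place, kept forever
              return REPO_DATABASE.get(plate)

          # A tow truck's camera feed reduces to calls like this, all day long:
          if check_plate("ABC1234", "5th & Main"):
              print("repo hit")

      The privacy problem is the sighting_log, which grows whether or not any plate ever matches.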

      What will happen? A lot of companies will no longer store your data. Companies that still do will look long and hard at what they store. The more egregious, privacy-invading data-mining businesses will no longer be viable, since they rely on a near-zero cost for data acquisition. And that's not a bad thing.

    • (Score: 2) by skater on Friday October 30 2015, @02:58PM

      by skater (4342) on Friday October 30 2015, @02:58PM (#256486) Journal

      I see your point, but as others have said: the point is to push companies toward reducing what they store and to make sure a breach actually costs them something. Until security breaches start seriously affecting profit, nothing will change.

      Sorry, but security is so horribly bad right now that it's simply unfair to punish anyone specifically. To me it's a much smarter idea for the government to create a budget of billions of dollars to do precisely what I said: security-audit our fucking software! Including the proprietary platforms, because the auditors would be the government and secure access could be *demanded*.

      Would that be the same government that had 25 million records stolen recently? They're one of the worst offenders.

      • (Score: 2) by edIII on Friday October 30 2015, @09:28PM

        by edIII (791) on Friday October 30 2015, @09:28PM (#256678)

        I'm not asking the government to audit the code. That would be like Little Red Riding Hood asking the Big Bad Wolf to make her dinner. Yes, he makes dinner, but she won't like it :)

        I very much want companies to change too; I'm only trying to be reasonable and practical about what they could change. It's a lot to ask, and quite frankly there is just too much work for the market to take care of at this point. We painted ourselves into a corner in a room so vast that it's insurmountable for any single entity. I take you very seriously, and I work with databases and systems like these. When it comes to CC processing and the like, there is a very, very big reason why nearly every single payment gateway has something like a "customer vault", where I only have a random identifier for the customer file and can order pre-approved payment plans and packages against it. I deal with CC numbers exactly once, they're encrypted in the browser, and nobody ever sees them again. No BINs, last four, CVV2, magstripe info; nothing is stored on my end. Doing so largely eliminates our own liability and shifts PCI compliance to the payment gateway. We're still *technically* required to have PCI compliance on our end, but a failure on our part discloses no financial information about my customers in the event of a breach.
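
        For illustration, a minimal sketch of that customer-vault pattern (the gateway class and its methods are hypothetical stand-ins, not any real gateway's API):

            import uuid

            class FakeGateway:
                """Hypothetical stand-in for a payment gateway's customer vault."""
                def __init__(self):
                    self._vault = {}   # card data lives only on the gateway's side

                def create_vault_customer(self, encrypted_card_blob):
                    token = str(uuid.uuid4())          # opaque, random identifier
                    self._vault[token] = encrypted_card_blob
                    return token

                def charge(self, vault_token, amount_cents):
                    assert vault_token in self._vault  # gateway resolves token to card
                    return {"status": "approved", "amount": amount_cents}

            # The merchant stores ONLY the token: no PAN, no BIN, no last four,
            # no CVV2, no magstripe. A breach here leaks tokens, not card numbers.
            gateway = FakeGateway()
            token = gateway.create_vault_customer("<card data encrypted in the browser>")
            merchant_db = {"customer-42": token}       # the only thing stored locally
            print(gateway.charge(merchant_db["customer-42"], 999))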

        That's why I mention PCI compliance, since they are specifically in the business of auditing code and processes to engender security in financial transactions. That program *could* vastly expand, but I *loathe* Big Gubbermint. If it's executed ineptly, as government has a penchant for doing, we won't be any better off at all.

        No, the government would only *pay* recognized experts in the field to operate with complete transparency and begin auditing all of the code bases in use. If Adobe, as an example, wants to throw a shit fit about it, you throw their executives in prison for non-compliance. Security through obscurity is a complete failure, and I only support operating with them in an NDA-like fashion. We'll keep their proprietary secrets for them, but we will still be looking at the code and advising them. On that note, Adobe could *never* sue a security researcher again, or have their corrupt friends in the FBI arrest one. Security researchers would receive immunity while working for the government.

        That would never happen for two reasons:

        1) It would be extremely effective. Billions could hire a crew of tens of thousands of experts worldwide, and I believe we would see very impressive results even within 12-24 months. FOSS would become very secure, and proprietary platforms could at least be trusted as far as security was concerned.

        2) Nobody in power wants effectiveness at all. That's bad for the insurance industry and the intelligence community, and it doesn't exactly provide the kind of storm-trooper control the government likes. You make more money in wartime than you do in peacetime.

        --
        Technically, lunchtime is at any moment. It's just a wave function.