Security Design: Stop Trying to Fix the User

posted by cmn32480 on Monday October 03 2016, @07:29PM
from the inherently-broken dept.

Arthur T Knackerbracket has found the following story from Bruce Schneier's blog:

Every few years, a researcher replicates a security study by littering USB sticks around an organization's grounds and waiting to see how many people pick them up and plug them in, causing the autorun function to install innocuous malware on their computers. These studies are great for making security professionals feel superior. The researchers get to demonstrate their security expertise and use the results as "teachable moments" for others. "If only everyone was more security aware and had more security training," they say, "the Internet would be a much safer place."

Enough of that. The problem isn't the users: it's that we've designed our computer systems' security so badly that we demand the user do all of these counterintuitive things. Why can't users choose easy-to-remember passwords? Why can't they click on links in emails with wild abandon? Why can't they plug a USB stick into a computer without facing a myriad of viruses? Why are we trying to fix the user instead of solving the underlying security problem?

Traditionally, we've thought about security and usability as a trade-off: a more secure system is less functional and more annoying, and a more capable, flexible, and powerful system is less secure. This "either/or" thinking results in systems that are neither usable nor secure.

[...] We must stop trying to fix the user to achieve security. We'll never get there, and research toward those goals just obscures the real problems. Usable security does not mean "getting people to do what we want." It means creating security that works, given (or despite) what people do. It means security solutions that deliver on users' security goals without -- as the 19th-century Dutch cryptographer Auguste Kerckhoffs aptly put it -- "stress of mind, or knowledge of a long series of rules."

[...] "Blame the victim" thinking is older than the Internet, of course. But that doesn't make it right. We owe it to our users to make the Information Age a safe place for everyone -- ­not just those with "security awareness."


Original Submission

 
  • (Score: 2) by tangomargarine on Tuesday October 04 2016, @02:28PM

    by tangomargarine (667) on Tuesday October 04 2016, @02:28PM (#410004)

    unpatched exploit (zero day)

    Technically incorrect usage. A zero-day is an exploit for which a patch *doesn't yet exist*, not just one you yourself haven't patched yet.

    But the whole term irritates me because by definition like 90% of exploits are "zero-day" when they appear.

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
  • (Score: 2) by vux984 on Tuesday October 04 2016, @04:37PM

    by vux984 (5045) on Tuesday October 04 2016, @04:37PM (#410064)

    Technically incorrect usage.

    It was intended as more of an either/or/"whatever"/"pick your poison" usage, but I'll concede that doesn't really come through.

    However, a zero-day isn't really an exploit for which a patch "doesn't yet exist" -- although not having a patch is implied, since there have been zero days to produce one. A zero-day is an exploit for a flaw that hasn't been *recorded* yet. As in: "the software flaw this exploit uses has been known about for zero days."

    The point is that there is a class of exploits for which a patch doesn't exist yet, but which aren't "zero-days" either. For example, suppose I report an exploitable flaw to Cisco and they don't release a patch for 60 days. A month after I report it, the flaw is not a "zero-day," even though a patch is still a month away.

    Additionally, it's not necessarily a zero-day the day I report it to Cisco, because "zero-day" is supposed to describe exploits that are active in the wild. A responsibly discovered and disclosed vulnerability that isn't actively being exploited is never technically a zero-day.
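
    Concretely, the taxonomy being argued here comes down to three bits of state -- known, patched, exploited. A minimal Python sketch of that classification (purely illustrative; the names and category labels are mine, not any official CVE terminology):

        from dataclasses import dataclass

        @dataclass
        class Flaw:
            publicly_known: bool     # vendor/public has been told about the flaw
            patch_available: bool    # a fix has shipped
            exploited_in_wild: bool  # attackers are actively using it

        def classify(flaw: Flaw) -> str:
            if flaw.exploited_in_wild and not flaw.patch_available:
                return "zero-day"  # exploited before defenders have had a single day
            if flaw.publicly_known and not flaw.patch_available:
                return "disclosed, unpatched"  # the Cisco scenario above
            if flaw.patch_available:
                return "patched (though perhaps not applied)"
            return "undisclosed"

        # The 60-day Cisco scenario: reported, no patch yet, no active attacks.
        print(classify(Flaw(publicly_known=True,
                            patch_available=False,
                            exploited_in_wild=False)))
        # -> disclosed, unpatched  (not a zero-day under the stricter definition)

    Under that reading, the Cisco flaw spends its whole 60 days in the "disclosed, unpatched" bucket and never becomes a zero-day unless attackers independently start exploiting it.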

    Of course, the media will call pretty much anything a zero-day, and language reflects usage, so technical debates about what is and is not a zero-day are just tilting at windmills.

    • (Score: 2) by tangomargarine on Tuesday October 04 2016, @05:13PM

      by tangomargarine (667) on Tuesday October 04 2016, @05:13PM (#410094)

      Of course, the media will call pretty much anything a zero-day, and language reflects usage, so technical debates about what is and is not a zero-day are just tilting at windmills.

      Call me Don "Prescriptivist, Dammit!" Quixote :)

      I'd think with the number of programmers we have around here, we'd have a few more prescriptivists. If you use a word wrong/the wrong word in programming, it just plain doesn't work.

      --
      "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"