posted by Fnord666 on Saturday November 16 2019, @11:38PM
from the poker-analogies dept.

Arthur T Knackerbracket has found the following story:

Common behaviors shared across all families of ransomware are helping security vendors better spot and isolate attacks.

This according to a report from British security shop Sophos, whose breakdown (PDF) of 11 different malware infections, including WannaCry, Ryuk, and GandCrab, found that, because ransomware attacks all share the same purpose of encrypting user files until a payment is made, they generally have to perform many of the same tasks.

"There are behavioral traits that ransomware routinely exhibits that security software can use to decide whether the program is malicious," explained Sophos director of engineering Mark Loman.

"Some traits – such as the successive encryption of documents – are hard for attackers to change, but others may be more malleable. Mixing it up, behaviorally speaking, can help ransomware to confuse some anti-ransomware protection."

Some of that behavior, says Loman, includes signing code with stolen or purchased certificates to let the ransomware slip past some security checks. In other cases, ransomware installers use elevation-of-privilege exploits (which often go unpatched because of their low risk scores) or optimize their code for multi-threaded CPUs in order to encrypt as many files as possible before getting spotted.
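
The "successive encryption of documents" trait is one of the easier ones to act on, because encrypted output is statistically distinguishable from typical documents. Below is a minimal, illustrative sketch of that idea in Python: it polls a directory and raises an alert when many files are rewritten with near-random (high-entropy) contents in a short window. The watched path, thresholds, and polling approach are assumptions for illustration, not how Sophos's products actually work.

```python
# Toy behavioral heuristic: flag a burst of high-entropy file rewrites.
# All thresholds and the watched path are illustrative assumptions;
# real anti-ransomware engines combine many more signals than this.
import math
import os
import time
from collections import deque

WATCH_DIR = "/home/user/Documents"   # assumption: directory to monitor
ENTROPY_THRESHOLD = 7.5              # bits/byte; encrypted data is close to 8.0
BURST_WINDOW = 60                    # seconds
BURST_COUNT = 20                     # high-entropy rewrites that trigger an alert

def shannon_entropy(data):
    """Entropy in bits per byte; ~8.0 for encrypted (or compressed) content."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts if c)

def scan_once(mtimes, suspicious):
    """One polling pass: note files whose new contents look encrypted."""
    for root, _dirs, files in os.walk(WATCH_DIR):
        for name in files:
            path = os.path.join(root, name)
            try:
                mtime = os.path.getmtime(path)
            except OSError:
                continue
            if mtimes.get(path) == mtime:
                continue                     # unchanged since the last pass
            mtimes[path] = mtime
            try:
                with open(path, "rb") as fh:
                    sample = fh.read(64 * 1024)
            except OSError:
                continue
            if shannon_entropy(sample) >= ENTROPY_THRESHOLD:
                suspicious.append(time.time())

def main():
    mtimes, suspicious = {}, deque()
    while True:
        scan_once(mtimes, suspicious)
        now = time.time()
        while suspicious and now - suspicious[0] > BURST_WINDOW:
            suspicious.popleft()
        if len(suspicious) >= BURST_COUNT:
            print("ALERT: burst of high-entropy rewrites -- possible ransomware")
            suspicious.clear()
        time.sleep(5)

if __name__ == "__main__":
    main()
```

Note the obvious weakness the report alludes to: anything that moves the behavior away from a rapid, sequential pattern (or that looks like legitimate compression) makes a heuristic like this less reliable.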


Original Submission

 
  • (Score: 4, Insightful) by edIII on Sunday November 17 2019, @12:03AM (5 children)

    by edIII (791) on Sunday November 17 2019, @12:03AM (#921121)

    Ransomware has almost no chance with a journaling file system combined with a proper backup policy. I have zero-knowledge backup with 30 days of versions kept. I can restore a document across 200+ points in time in some cases.

    With such systems ransomware needs to stay hidden, and keep its operations hidden, for weeks. Not saying that's impossible, but it's not likely either. If you kept offline copies every 90 days, incidents would be a heck of a lot less damaging and easier to recover from.

    You can treat ransomware as file corruption, and there are plenty of good methods to mitigate file corruption.
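
    A minimal sketch, in Python, of the kind of versioned backup being described. The store layout, paths, and 30-day retention are illustrative assumptions; the point is just that a scrambled file can be rolled back to any retained point in time, the same way you would recover from ordinary corruption:

```python
# Toy versioned backup: keep timestamped copies, prune after 30 days,
# and restore any file to the newest copy older than a chosen time.
# Paths and retention are illustrative assumptions.
import os
import shutil
import time

BACKUP_ROOT = "/backups/versions"    # assumption: where versions are kept
RETENTION_SECONDS = 30 * 24 * 3600   # keep 30 days of versions

def backup(path):
    """Copy `path` into the store under a timestamped name."""
    stamp = time.strftime("%Y%m%dT%H%M%S")
    dest_dir = os.path.join(BACKUP_ROOT, path.lstrip("/"))
    os.makedirs(dest_dir, exist_ok=True)
    dest = os.path.join(dest_dir, stamp)
    shutil.copyfile(path, dest)      # copy time becomes the version's mtime
    return dest

def prune():
    """Drop versions older than the retention window."""
    cutoff = time.time() - RETENTION_SECONDS
    for root, _dirs, files in os.walk(BACKUP_ROOT):
        for name in files:
            full = os.path.join(root, name)
            if os.path.getmtime(full) < cutoff:
                os.remove(full)

def restore(path, before):
    """Restore the newest version of `path` taken at or before `before`."""
    dest_dir = os.path.join(BACKUP_ROOT, path.lstrip("/"))
    candidates = [os.path.join(dest_dir, name) for name in os.listdir(dest_dir)
                  if os.path.getmtime(os.path.join(dest_dir, name)) <= before]
    if not candidates:
        raise FileNotFoundError("no version of %s before %s" % (path, before))
    shutil.copyfile(max(candidates, key=os.path.getmtime), path)

# Example: treat a ransomware hit like corruption and roll the file back
# to how it looked an hour ago.
# backup("/home/user/Documents/report.odt")
# restore("/home/user/Documents/report.odt", before=time.time() - 3600)
```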

    --
    Technically, lunchtime is at any moment. It's just a wave function.
  • (Score: 2) by barbara hudson on Sunday November 17 2019, @12:18AM (2 children)

    by barbara hudson (6443) <barbara.Jane.hudson@icloud.com> on Sunday November 17 2019, @12:18AM (#921126) Journal
    I doubt a journaling file system would help. Better to just keep frequent backups.

    Of course, keeping your computer off the net helps.

    --
    SoylentNews is social media. Says so right in the slogan. Soylentnews is people, not tech.
    • (Score: 2) by edIII on Sunday November 17 2019, @12:44AM (1 child)

      by edIII (791) on Sunday November 17 2019, @12:44AM (#921132)

      The journaling file system helps mitigate file corruption, which is what ransomware really is. At this point I wouldn't consider a file system that didn't perform journaling.

      Good point about backups though. They're definitely a major part of it, but it's not just frequency. Duration of the backups, or when you start to cycle backup media, is very important too. Frequent snapshots of important data let you effectively play back the changes, but as you alluded to, they should be airgapped.

      Now thinking about it, protecting file systems is a little more difficult than protecting databases. With the latter it's a lot easier to stream transactions to a backup that is regularly copied off site. Ransomware isn't even the reason database backup policies work so hard to mitigate corruption. I've seen situations in which corruption was creeping into large databases a little each day, and nobody noticed for 6 months. The only way I was able to recover anything was with backup copies from before and during the corruption.

      Fundamentally, I think that's what will protect us against ransomware: having many different versions of the data across time, which has plenty of benefits beyond ransomware protection.
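
      A minimal sketch of the transaction-streaming idea, in Python, using an append-only change log that can be replayed up to any point in time. The log format and the dict standing in for a database are made-up illustrations; real databases get the same effect from WAL shipping, binlog replication, and the like:

```python
# Toy append-only change log with point-in-time replay.
# The log format and the dict standing in for a database are
# illustrative; real systems use WAL shipping, binlogs, etc.
import json
import time

LOG_PATH = "/backups/changes.log"    # assumption: this file is shipped off site

def record_change(key, value):
    """Append one change to the log as it is applied to the database."""
    entry = {"ts": time.time(), "key": key, "value": value}
    with open(LOG_PATH, "a") as log:
        log.write(json.dumps(entry) + "\n")

def replay(until):
    """Rebuild the state as it existed at timestamp `until`."""
    state = {}
    with open(LOG_PATH) as log:
        for line in log:
            entry = json.loads(line)
            if entry["ts"] > until:
                break                 # stop before the corruption crept in
            state[entry["key"]] = entry["value"]
    return state

# Example: recover the state from just before a suspected corruption window.
# good_state = replay(until=time.time() - 180 * 24 * 3600)
```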

      --
      Technically, lunchtime is at any moment. It's just a wave function.
      • (Score: 1, Informative) by Anonymous Coward on Sunday November 17 2019, @07:18AM

        by Anonymous Coward on Sunday November 17 2019, @07:18AM (#921203)

        No, journaling file systems do nothing to prevent FILE corruption. They are designed to prevent FILE SYSTEM corruption. Specifically, they are designed to keep the different metadata structures of the disk in a consistent state. Ordered journals with barriers have the additional design benefit of keeping the metadata in a consistent state with the actual file content. Full journals have the additional design benefit of preventing some data writes from getting lost (which might incidentally cause corruption if interrupted) on replay of the journal. But do note that only full journaling prevents such incidental file corruption on overwrites.

        None of these do anything to prevent your data from being changed in place by raw writes, cosmic rays, or whatever. Nor do they let you recover from errors in the writes. They definitely don't prevent damage from otherwise acceptable commands issued by your software, especially those that complete successfully.

  • (Score: 0) by Anonymous Coward on Sunday November 17 2019, @01:30AM (1 child)

    by Anonymous Coward on Sunday November 17 2019, @01:30AM (#921139)

    A log-structured file system is what you are thinking of, I think. These are often continuously snapshotting, since no writes are done in place.

    • (Score: 0) by Anonymous Coward on Sunday November 17 2019, @06:43AM

      by Anonymous Coward on Sunday November 17 2019, @06:43AM (#921201)

      I think they are actually thinking of the more common copy-on-write (CoW) file systems like ZFS or btrfs. CoW file systems work, essentially, by storing a new copy of each file on write, and they have many characteristics in common with incremental backups. In fact, many CoW systems explicitly allow you to save the past X writes to allow such rollbacks. (If they weren't thinking of CoW, then it would seem they have a misunderstanding of the nature of ransomware attacks and/or how journaling systems work.)

      It is also worth noting that in addition to CoW, log-structured, soft-update, and journaling file systems can also be used to restore old versions, if you have enough space for the data and file structures. With a journaling file system it is much trickier: it depends on whether you have full journaling (not just logical journaling) enabled, and on whether the journal is large enough to still hold the data you need after that much writing. CoW systems would depend on having a high enough snapshot threshold. And, finally, log-structured and the more common soft-update file systems would depend on having enough free space on the drive to keep the system from reclaiming the space, as most don't have a snapshot threshold.
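
      For what it's worth, here is a minimal Python sketch of the copy-on-write behavior described above: writes never happen in place, old copies stick around for rollback, and a "snapshot threshold" decides how many copies are kept. The per-file granularity, paths, and threshold of 16 are illustrative assumptions; ZFS and btrfs do this at the block level with proper snapshots.

```python
# Toy copy-on-write store: every write creates a new numbered copy and
# old copies remain for rollback until the threshold reclaims them.
# Per-file granularity, path, and threshold value are illustrative only.
import os

STORE = "/data/cow-store"            # assumption: where versions live
KEEP_VERSIONS = 16                   # "snapshot threshold": copies kept per file

def _version_dir(name):
    d = os.path.join(STORE, name)
    os.makedirs(d, exist_ok=True)
    return d

def write(name, data):
    """Write by creating a new numbered copy; never modify old copies."""
    d = _version_dir(name)
    versions = sorted(int(v) for v in os.listdir(d))
    next_v = versions[-1] + 1 if versions else 0
    with open(os.path.join(d, str(next_v)), "wb") as fh:
        fh.write(data)
    # Reclaim space beyond the threshold, oldest copies first.
    excess = len(versions) + 1 - KEEP_VERSIONS
    for old in versions[:max(0, excess)]:
        os.remove(os.path.join(d, str(old)))

def read(name, versions_back=0):
    """Read the latest copy, or roll back `versions_back` writes."""
    d = _version_dir(name)
    versions = sorted(int(v) for v in os.listdir(d))
    with open(os.path.join(d, str(versions[-1 - versions_back])), "rb") as fh:
        return fh.read()

# Example: even after ransomware scrambles the newest copy, the older
# copies survive until enough further writes push them past the threshold.
# write("report.odt", b"original contents")
# write("report.odt", b"ENCRYPTED GARBAGE")
# assert read("report.odt", versions_back=1) == b"original contents"
```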