posted by n1 on Tuesday August 05 2014, @02:46PM   Printer-friendly
from the industry-leading-failings dept.

The Register and others are reporting that Synology NAS units are being hit by a ransomware package that encrypts users' files and demands payment to unlock them; affected users logging into the web interface see a message saying "All important files on this NAS have been encrypted using strong cryptography". No fix for the underlying vulnerability has been published yet.

More information can be found on this Synology forums thread; if you're affected, turn your Synology off now. If you expose your NAS to the outside world through UPnP or port forwarding, now would be a good time to disable those rules.

 
  • (Score: 5, Interesting) by Jaruzel (812) on Tuesday August 05 2014, @04:20PM (#77644)

    Off-the-shelf NASes sell convenience over data security. I know an IT-literate guy who 'just wanted something that worked' as storage for his tens of thousands of semi-pro photos. So he bought a well-known NAS brand with very large disks in it, set up a RAID array, and felt his data was safe and secure.

    Fast forward to just over a year later, and the motherboard in the NAS went bang. No problem, he thought: 'I'll just pop the array onto a Linux box to recover my data.'

    Nope. The NAS had formatted the RAID array with some bespoke, undocumented on-disk format, and no amount of fiddling in Linux could read it. Unable to just buy a replacement motherboard, he eventually had to buy a replacement NAS of the same model via eBay for ~$500 and swap his disks into it.

    ...

    Moral: Just build a bloody Linux/Windows server, put some disks in it, and run software RAID*.

    -Jar

    *This is what I do. I've survived 2 server failures without losing any data.
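
    For anyone wanting to try it, a minimal sketch with Linux mdadm (the RAID level, device names and mount point are all illustrative, so adjust for your own hardware):

        # build a 4-disk RAID5 array out of plain disks (assumed to be sdb-sde)
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.ext4 /dev/md0                               # put a filesystem on the new array
        mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record it so it assembles at boot (config path varies by distro)
        mkdir -p /mnt/storage && mount /dev/md0 /mnt/storage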

    --
    This is my opinion, there are many others, but this one is mine.
  • (Score: 3, Insightful) by datapharmer (2702) on Tuesday August 05 2014, @05:05PM (#77658)

    Shame on you for suggesting software RAID. The proper solution is hardware RAID, plus either a) a warranty that gets you replacement parts as quickly as you need them, or b) spare parts kept on hand.

    This is simply the cost of doing business. Software RAID can carry huge performance penalties, doesn't support hot-swapping drives, and can leave you with a corrupt mess in certain failure scenarios.

    • (Score: 3, Interesting) by MrNemesis (1582) on Tuesday August 05 2014, @05:32PM (#77672)

      I run software RAID on all my home servers, and it very definitely does support hot-swapping. Most bog-standard software RAID (common in every Linux-based NAS I've used) doesn't use any proprietary disc signatures either, so I'm curious whether the GP will name names as to his NAS vendor.

      I've had far worse luck with hardware RAID: when the card does go pop, you'll frequently need to buy a whole new RAID card of the same family, which often costs as much as a small server. With Linux softraid you can just plug the drives into your nearest Linux box (or any generic box booted from a USB Linux distro) and, provided you have enough SATA ports, mdadm will detect the discs, assemble the array and leave it for you to mount someplace, at which point you can splurge the files off onto another medium of your choice.
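
      In practice that recovery is only a couple of commands. A rough sketch, assuming the spare box has mdadm installed and can see all the member discs (array and mount-point names are illustrative):

          cat /proc/mdstat                     # check whether the kernel has already auto-assembled anything
          mdadm --assemble --scan              # scan all block devices for md superblocks and assemble the array(s)
          mkdir -p /mnt/recovery
          mount -o ro /dev/md0 /mnt/recovery   # mount read-only while you copy the data off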

      The performance penalty of software RAID vs. hardware RAID is, for me at least, unnoticeable. Most of the performance problems I've seen from people running softraid come down to the awful SATA controllers on some consumer motherboards - the Silicon Image ones back in the day, or the Marvell ones these days, are utterly abysmal at random IO. If you only need four spindles you can usually find motherboards with the excellent Intel SATA controllers or the much-better-than-people-think AMD controllers with enough ports; if you need more than that, you're generally better off buying a cheap LSI HBA (or better still, a cheap IBM M1015 HBA reflashed to the LSI 9211-8i firmware) and avoiding the chipset controllers altogether.

      IMHO hardware RAID brings only two unquestionable benefits. First, the cache memory is always, always ECC, whereas with softraid too many people run it on standard non-ECC platforms and expect that to be fine for their always-on file server that has 95% of its memory full of FS cache... but flip a bit in that cache, read it from a client and then write the flipped bit back to disc, and you'll silently corrupt data. This is bad enough for "normal" filesystems, but with ones like ZFS the busted parity is enough to ruin your whole pool. Second, RAID cards almost always come with a hookup to the backplane that'll helpfully shine a nice flashy light on your disc tray saying "this disc has failed"; with Linux and its propensity to swap disc IDs around on every single boot (the drive you put in as /dev/sdg last week might be /dev/sdj now - I've not found a way of coaxing udev into naming discs by port number), I find myself being very, very careful about which disc to remove. This can be mitigated to a degree by using an "active" backplane, which gives you a better idea of what the discs are doing, but it's still far from perfect. Even so, it still beats paying £350+ for a decent HW RAID card.

      My £0.02.

      --
      "To paraphrase Nietzsche, I have looked into the abyss and been sick in it."
      • (Score: 0) by Anonymous Coward on Wednesday August 06 2014, @01:35AM (#77856)

        Mount by UUID isn't helpful to you in this use case?

        • (Score: 2) by MrNemesis (1582) on Wednesday August 06 2014, @06:39AM (#77913)

          All discs in a softraid share the same UUID - or rather, it's the filesystem (or the array) that has a UUID, not the underlying block devices. If there's a way of telling udev that SCSI 0:1:8 should always be /dev/sdx, I haven't found it, so if a disc does go wrong you generally have to futz around with udev and /sys to find out which SCSI port the device is sitting on and then backtrack to a bay number from there. Annoying.
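
          That spelunking goes roughly like this (just a sketch, assuming a stock udev install and an array at /dev/md0; device names are illustrative):

              cat /proc/mdstat                    # list which /dev/sdX members each md array currently has
              mdadm --detail /dev/md0             # per-member state, including anything marked faulty
              ls -l /dev/disk/by-path/            # udev's by-path symlinks map each sdX back to its controller/port
              readlink -f /sys/block/sdg/device   # or walk sysfs to the SCSI host:bus:target:lun for a given disc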

          --
          "To paraphrase Nietzsche, I have looked into the abyss and been sick in it."
    • (Score: 0) by Anonymous Coward on Wednesday August 06 2014, @01:30AM (#77854)

      Ah hah hah hah hah ... so funny.

      'proper' ... Ah hah hah hah hah

      'simply the cost of doing business' ... Ah hah hah hah hah

      'doesn't support hot swapping drives' ... Ah hah hah hah hah

      Been away for a while, more than a decade or so, have you?

      I invite you to look into btrfs, lvm, hot swap bays, ...

      Software will always stay more current than hardware, and on a NAS you're limited by the network speed anyway.

      On a bet-your-business, revenue-generating installation, your points may be correct ... but then you wouldn't be using such a box. Which is all to say, you're not talking about the OP's use case.

      Yep, go 'software' (after all, a hardware solution doesn't have any software, does it?)

      JUST WHAT DO YOU THINK IS OPERATING THE HARDWARE RAID SOLUTIONS, ANYWAYS!!!

      Go see FreeNAS, worst case, and get on with your day.

      Oh ... and ... 'leave you with a corrupt mess if you get into certain failure scenarios' ... umm, that's not unique to any one solution; they all have their issues. They'll just be different ones.

    • (Score: 0) by Anonymous Coward on Wednesday August 06 2014, @01:37AM (#77857)

      And if your hardware raid card dies, you had better hope you have a spare.
      And you'll be back searching for hardware on ebay.
      Software raid will work on any hardware.

    • (Score: 2) by sjames (2882) on Wednesday August 06 2014, @05:57PM (#78127)

      Good luck with that. Sometimes the exact same HW with a different firmware revision won't recognize your old array. Even worse, it may 'helpfully' decide to reformat the RAID without even asking in order to make sure your data is good and dead.

      I refuse to use any RAID where the on-disk format isn't publicly documented.
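
      One nice side effect of a documented format like mdadm's is that any Linux box can read the metadata straight off the members. A quick sketch (/dev/sdb1 is an illustrative member device):

          mdadm --examine /dev/sdb1       # dump the md superblock: metadata version, array UUID, level, device role
          mdadm --examine --scan          # summarise every array whose member discs are currently visible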

  • (Score: 2) by cafebabe (894) on Wednesday August 06 2014, @07:42AM (#77930)

    A while back, it was possible for consumers to buy a four-disk RAID10 system. However, a system that small is dangerous - even ignoring the consequences of RAID management by people asking "what does this flashing red light mean?" The MTBF [wikipedia.org] of four disks from one batch is effectively the same as that of one disk. But it's the MTBF of the proprietary RAID system itself that makes it dangerous: if that fails first, data recovery could be dicey.

    I've just investigated Western Digital's My Cloud [wdc.com] and I can safely say it is a case of Do Not Want. Two-disk proprietary RAID is a really quick way to lose your data. If you use it for video, the only useful feature is Jumbo Frames. Yet it ships with Wake-On-LAN, Peer-To-Peer clients, IceCast, PHPBB, WordPress, Joomla, PHPMyAdmin and integration with at least two insecure public clouds.

    --
    1702845791×2