
posted by janrinok on Sunday November 30, @09:12AM

https://www.xda-developers.com/your-unpowered-ssd-is-slowly-losing-your-data/

SSDs have all but replaced hard drives when it comes to primary storage. They're orders of magnitude faster, more convenient, and consume less power than mechanical hard drives. That said, if you're also using SSDs for cold storage, expecting the drives lying in your drawer to work perfectly after years, you might want to rethink your strategy. Your reliable SSD could suffer from corrupted or lost data if left unpowered for extended periods. This is why many users don't consider SSDs a reliable long-term storage medium, and prefer using hard drives, magnetic tape, or M-Disc instead.

Unlike hard drives, which magnetize spinning platters to store data, SSDs modify the electrical charge in NAND flash cells to represent 0s and 1s. NAND flash retains data in its underlying transistors even when power is removed, like other forms of non-volatile memory. However, how long an SSD can retain data without power is the key here. Even the cheapest SSDs, say those with QLC NAND, can safely store data for about a year while completely unpowered. More expensive TLC NAND can retain data for up to 3 years, while MLC and SLC NAND are good for 5 years and 10 years of unpowered storage, respectively.

The problem is that most consumer SSDs use only TLC or QLC NAND, so users who leave their SSDs unpowered for over a year are risking the integrity of their data. The reliability of QLC NAND has improved over the years, so 2–3 years of unpowered storage is probably a reasonable upper bound. Without power, the charge stored in the NAND cells can leak away, resulting in missing data or completely unusable drives.

This data retention deficiency of consumer SSDs makes them an unreliable medium for long-term data storage, especially for creative professionals and researchers. HDDs can suffer from bit rot, too, due to wear and tear, but they're still more resistant to power loss. If you haven't checked your archives in a while, I'd recommend doing so at the earliest.


This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 4, Informative) by remote_worker on Monday December 01, @04:36AM (2 children)

    by remote_worker (18199) Subscriber Badge on Monday December 01, @04:36AM (#1425469)

    I haven't tried ZFS on Linux, but given that the licenses aren't compatible enough for ZFS to ship in the distributed kernel, I'd expect a few wrinkles with it. In particular, ZFS eats a lot of virtual memory (for its ARC), and if the kernel isn't properly tuned to handle that I would expect things to fall apart.

    Where ZFS shines is FreeBSD (where ZFS is integrated into the kernel) and ZFS on root. Upgrades are no-risk:
    1) make a zfs snapshot of your pre-upgrade system. Use bectl (bectl create ...) for the root (boot environment), and plain ZFS snapshots of any non-root filesystems that are going to be upgraded. This is fast. Leave the active boot environment alone, that's the one we want to upgrade.
    2) perform the upgrade, by whichever method turns your crank
    3) try the upgrade out.
    4) if the upgrade is good, delete the snapshots from (1) and get on with your day :).
    5) if the upgrade is bad, use the bootloader to boot the snapshot from (1), or if you can get to a shell in the upgrade, activate the root snapshot with "bectl activate ..." and reboot, then use plain ZFS to rollback to the snapshots from (1).
    6) Once you're back in the pre-upgrade system you can delete the snapshots from (1) or leave them while you try some other upgrade approach.
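    The steps above can be sketched as a shell session. This is a hedged illustration, not a definitive recipe: a FreeBSD system with ZFS-on-root is assumed, and the boot environment name `pre-upgrade`, the pool `tank`, and the dataset `tank/home` are placeholders for whatever your layout uses.

    ```shell
    # 1) Snapshot the current state before upgrading.
    bectl create pre-upgrade             # new boot environment cloned from the active one
    zfs snapshot tank/home@pre-upgrade   # plain snapshot of a non-root filesystem

    # 2) Perform the upgrade by whichever method you prefer, e.g.:
    freebsd-update fetch install

    # 3) Reboot and try the upgraded system out.

    # 4) Upgrade good: discard the safety copies and carry on.
    bectl destroy pre-upgrade
    zfs destroy tank/home@pre-upgrade

    # 5) Upgrade bad: reactivate the old boot environment, reboot,
    #    then roll the non-root filesystems back.
    bectl activate pre-upgrade
    shutdown -r now
    # ...after reboot into the old environment:
    zfs rollback tank/home@pre-upgrade
    ```

    The same effect can be had from the boot loader menu, which lets you pick a boot environment interactively if the upgraded system won't come up at all.
    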

    One other thing that ZFS involves is that you should do a periodic "scrub" of every pool. I do it weekly, via crontab, at a time I'm pretty sure I'm going to be sleeping :). I expect monthly would be OK as well, but weekly is easier to fit into my timetable. The "zpool scrub ..." command starts a background process that checks (and, where redundancy allows, repairs) a ZFS pool. My largest pool is a bit over 6 TB of spinning rust (mirrored drives), about 40% used, and scrub runs in 3.5 hours. The pool is perfectly usable while the scrub is running, but possibly a bit slower. A "zpool status ..." gives you the status of the scrub and lets you know if your drives are starting to show errors. If I paid more attention to the status I could probably have avoided the failures I talk about below :).
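    A weekly scrub along those lines can be scheduled from /etc/crontab; this is a sketch, and the pool name `tank` and the 04:00 Sunday slot are placeholders:

    ```shell
    # /etc/crontab entry: scrub the pool every Sunday at 04:00.
    # minute hour mday month wday user command
    0 4 * * 0 root /sbin/zpool scrub tank

    # Check scrub progress and per-device error counts afterwards:
    zpool status tank
    ```

    `zpool status` is also where early warnings show up: nonzero read/write/checksum counters against a device are a hint to order a replacement before the mirror degrades.
    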

    I have my ZFS root on a small drive or SSD, but all my user data on a separate mirrored ZFS pool, and do daily backups using borgbackup of both the root pool and the user data pool to another machine. I tend to prefer spinning rust for the user data, both because it's still cheaper, and because it doesn't have the decay issues of SSDs, but SSDs are nicer for booting.
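    A daily borgbackup job of that shape might look like the following. Everything here is an assumption for illustration: the repository host `backuphost`, the repo path, and the backed-up mountpoints are placeholders, and the prune policy is just one plausible choice.

    ```shell
    # Repository on another machine, reached over SSH.
    export BORG_REPO=ssh://backuphost/backups/myhost

    # Archive both the root pool and the user-data pool mountpoints;
    # {hostname} and {now} are borg's built-in archive-name placeholders.
    borg create --stats \
        --exclude /dev --exclude /tmp --exclude /var/run \
        ::{hostname}-{now} / /home

    # Keep a bounded history so the repo doesn't grow without limit.
    borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6
    ```

    Dropped into root's crontab, this gives the "restore from another machine" path described below; the caveat is the same one noted there, namely that a restored root is not automatically bootable.
    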

    I've had user data drives fail and root drives fail. I've never had ZFS fail. The failures in the mirrored pool were stressful because of the time it took to get replacement drives, but otherwise painless (not a single file lost, and the only downtime was the physical drive swap). I didn't lose any files in the root drive failures either, but things got slightly more complicated because a simple restore from borgbackup to a new drive doesn't make the restored data bootable. Making things bootable again depends on how you like to set things up, so I won't waste your time with details, but it isn't difficult.

    Really, if you can handle all the variations in the different Linux distributions, FreeBSD is easy. There may be more new stuff going from Linux to FreeBSD than from one Linux distro to another, but none of it is hard stuff, just different stuff :).

  • (Score: 2) by JoeMerchant on Wednesday December 03, @12:55AM (1 child)

    by JoeMerchant (3937) on Wednesday December 03, @12:55AM (#1425650)

    So, perhaps I didn't "scrub" my filesystem like I should have. I was doing an eval for a field deployment (thousands of machines in the field, nobody maintaining them), so I would have had to set up scripts / cron jobs / whatever to do that for me. It seems a bit absurd in this day and age that a filesystem needing that kind of maintenance doesn't ship with something developed and ready to go in the default configuration.

    --
    🌻🌻🌻 [google.com]
    • (Score: 1) by remote_worker on Monday December 08, @04:00AM

      by remote_worker (18199) Subscriber Badge on Monday December 08, @04:00AM (#1426092)

      It could be that it needed a scrub, or that the virtual memory needs got too big as the amount of data grew. A ZFS on Linux developer might know, but I don't.

      However, you're right about the missing default, it does seem that there should be something for a default scrub setup. I didn't find out about scrub for several months after I started using ZFS. I didn't get a crash, or even a performance slowdown, but I might have if I'd gone longer.

      A big field deployment is not the place you'd want to try out a different OS as well as a different FS :). I was living with a dual-boot setup on my desktop until I felt comfortable, so one machine and no travel or remote access issues.