https://www.xda-developers.com/your-unpowered-ssd-is-slowly-losing-your-data/
SSDs have all but replaced hard drives when it comes to primary storage. They're orders of magnitude faster and more convenient than mechanical hard drives, and they consume less power. That said, if you're also using SSDs for cold storage, expecting the drives lying in your drawer to work perfectly after years, you might want to rethink your strategy. Your reliable SSD could suffer from corrupted or lost data if left unpowered for extended periods. This is why many users don't consider SSDs a reliable long-term storage medium, and prefer using hard drives, magnetic tape, or M-Disc instead.
Unlike hard drives, which magnetize spinning platters to store data, SSDs modify the electrical charge in NAND flash cells to represent 0s and 1s. NAND flash retains data in its underlying transistors even when power is removed, like other forms of non-volatile memory. However, how long an SSD can retain data without power is the key here. Even the cheapest SSDs, say those with QLC NAND, can safely store data for about a year while completely unpowered. More expensive TLC NAND can retain data for up to 3 years, while MLC and SLC NAND are good for 5 and 10 years of unpowered storage, respectively.
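As a rough rule of thumb, the retention figures above can be captured in a small lookup. This is a sketch only: the year values are the article's ballpark estimates, not vendor guarantees or figures from any spec, and the function name is made up for illustration.

```python
# Approximate unpowered data-retention estimates by NAND type,
# using the article's ballpark figures (not vendor guarantees).
RETENTION_YEARS = {
    "QLC": 1,   # 4 bits/cell: cheapest, smallest margin between voltage levels
    "TLC": 3,   # 3 bits/cell: typical consumer SSDs
    "MLC": 5,   # 2 bits/cell
    "SLC": 10,  # 1 bit/cell: largest voltage margin, longest retention
}

def refresh_due(nand_type: str, years_unpowered: float) -> bool:
    """Return True if a drive of this NAND type should be powered up
    and its data verified, given how long it has sat unpowered."""
    return years_unpowered >= RETENTION_YEARS[nand_type]

print(refresh_due("TLC", 2))  # False: still within the ~3-year estimate
print(refresh_due("QLC", 2))  # True: well past the ~1-year estimate
```

In practice you would want a wide safety margin below these numbers, since retention also degrades with drive wear and storage temperature.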
The problem is that most consumer SSDs use only TLC or QLC NAND, so users who leave their SSDs unpowered for over a year are risking the integrity of their data. The reliability of QLC NAND has improved over the years, so treating 2–3 years of unpowered storage as the practical upper limit is a reasonable guardrail. Without power, the charge stored in the NAND cells gradually leaks away, resulting in missing data or a completely unreadable drive.
This data retention deficiency of consumer SSDs makes them an unreliable medium for long-term data storage, especially for creative professionals and researchers. HDDs can suffer from bit rot, too, due to wear and tear, but they're still more resistant to power loss. If you haven't checked your archives in a while, I'd recommend doing so sooner rather than later.
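One practical way to "check your archives" is to keep checksums alongside the files and re-verify them each time the drive is powered up. A minimal sketch in Python (the manifest format and function names are my own, not from any particular tool):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream a file through SHA-256 so large archives never need to fit in RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify(manifest: dict[str, str], root: Path) -> list[str]:
    """Compare each file against its recorded checksum; return the names that changed."""
    return [name for name, digest in manifest.items()
            if sha256_of(root / name) != digest]
```

The same job can be done from the shell with `sha256sum` to create a manifest and `sha256sum -c` to verify it later; the point is simply to have a baseline to compare against before silent corruption sets in.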
(Score: 5, Informative) by Unixnut on Monday December 01, @12:15PM (1 child)
Seconded on not trusting ZFS on Linux. I've used ZFS for years on FreeBSD and it has been a godsend for data integrity. I've not lost a byte of data in the last decade despite multiple power/system/drive array failures thanks to ZFS.
Having had such success, I tried bringing ZFS to Linux servers a few years ago, and it really was not a good idea. For one thing you get no support for the configuration (for example from Red Hat, if you are trying this in an enterprise Linux environment); secondly it feels clunky, like ZFS is not so much integrated into Linux as "bolted on" in places. This includes bypassing and hacking around the Linux VFS layer (as ZFS is not just a "filesystem" but a complete vertically integrated storage subsystem).
As a result, at best I found ZFS on Linux to be clunky and fragile, and at worst it breaks in some horrific way that makes it impossible to recover without data loss. Perhaps this is due to the licensing incompatibilities preventing proper kernel integration, or perhaps it is just something about the way the Linux kernel is designed that prevents ZFS from being reliably integrated.
Either way, for serious/production work on Linux you are better off with MDADM/LVM and ext4. If you want ZFS and all the goodies it brings then best use an OS that has it properly integrated and supported in the kernel.
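For reference, the MDADM/LVM/ext4 stack mentioned above goes together roughly like this. This is a sketch only: the device names, volume sizes, and mount point are placeholders for your own hardware, and `mdadm --create` will destroy any existing data on the listed disks.

```shell
# 1. Mirror two disks with mdadm (RAID1). /dev/sdb and /dev/sdc are placeholders.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# 2. Put LVM on top of the array for flexible volumes and snapshots.
pvcreate /dev/md0
vgcreate vg_data /dev/md0
lvcreate -L 100G -n lv_archive vg_data

# 3. Plain ext4 on the logical volume, then mount it.
mkfs.ext4 /dev/vg_data/lv_archive
mount /dev/vg_data/lv_archive /mnt/archive
```

You lose ZFS's end-to-end checksumming with this stack, but every layer is in-tree, supported by the distributions, and well understood by recovery tooling.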
(Score: 2) by aafcac on Tuesday December 02, @04:11PM
Having messed around with former ZFS based disks that I had decommissioned, one of the things that I came to realize is just how fragile they can be if they get damaged. It hasn't been an issue for me, as I've been generally using the disks as part of a zmirror or raidz configuration and haven't lost data due to that, but there's weird stuff that can happen to the label that makes it a right pain to work with if you don't already know how to do it and are prepared.
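For anyone poking at disks like this, the on-disk labels can at least be inspected with standard ZFS tooling before attempting anything destructive. A sketch (the pool name "tank" and the device paths are placeholders):

```shell
# ZFS keeps four copies of its label per device (two at the start, two
# at the end); zdb can dump them so you can see whether they agree.
zdb -l /dev/sdb1

# If the labels look salvageable, try importing the pool read-only
# first, scanning a directory of device nodes for pool members.
zpool import -o readonly=on -d /dev/disk/by-id tank
```

A read-only import is a cheap way to confirm the data is reachable before risking a writable import on a damaged label.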
That's really the biggest worry I have, as it can be relatively hard to push systems like this hard enough to really establish that they're reliable without putting data on them that would be annoying to have to restore. I'm curious about Btrfs, which does play a bit better with Linux.
Side rant: but really, the whole thing is the result of some rather poor decisions made decades ago without any real consideration of the consequences. In practice, the difference between the GPL and something more permissive like the BSD or MIT license is pretty much non-existent if you don't have attorneys, and even if you do, the difference is still rather small. Sure, the GPL theoretically prevents people from taking the code and distributing binaries without providing the source in the same place, but it also means that code can't be contributed as easily as it would be under a more permissive license.
I do like modern Linux in a lot of ways, but there's an awful lot of boneheaded decisions being made lately: so many different ways that programs can be packaged/managed, systemd everything, Wayland, and other things that seem to be more or less the same sort of nonsense that MS used to engage in, because everybody is special and we need to change things for the sake of changing them. Never mind that it's absolutely absurd to have to reload system services just to issue a mount -a after modifying the fstab, or that 17+ years for the nonsense that is Wayland is way too long when all the necessary bits of Linux to be a functional OS took a fraction of that time.