https://www.xda-developers.com/your-unpowered-ssd-is-slowly-losing-your-data/
SSDs have all but replaced hard drives as primary storage. They're orders of magnitude faster than mechanical hard drives, more convenient, and consume far less power. That said, if you're also using SSDs for cold storage, expecting the drives lying in your drawer to work perfectly after years, you might want to rethink your strategy. Your reliable SSD could suffer from corrupted or lost data if left unpowered for extended periods. This is why many users don't consider SSDs a reliable long-term storage medium and prefer hard drives, magnetic tape, or M-Disc instead.
Unlike hard drives that magnetize spinning discs to store data, SSDs modify the electrical charge in NAND flash cells to represent 0 and 1. NAND flash retains data in underlying transistors even when power is removed, similar to other forms of non-volatile memory. However, the duration for which your SSD can retain data without power is the key here. Even the cheapest SSDs, say those with QLC NAND, can safely store data for about a year of being completely unpowered. More expensive TLC NAND can retain data for up to 3 years, while MLC and SLC NAND are good for 5 years and 10 years of unpowered storage, respectively.
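The retention figures above can be summarized in a small lookup. This is a minimal sketch using the ballpark numbers the article cites; the function name and structure are my own, and real retention depends heavily on temperature, drive wear, and controller quality, so treat these as rough planning figures rather than guarantees.

```python
# Rough unpowered-retention estimates by NAND type, per the figures above.
# Ballpark values only: heat and heavy prior wear shorten retention.
RETENTION_YEARS = {
    "QLC": 1,   # cheapest consumer drives
    "TLC": 3,   # most consumer drives
    "MLC": 5,
    "SLC": 10,  # rare outside industrial/enterprise parts
}

def retention_estimate(nand_type: str) -> int:
    """Approximate unpowered data retention in years for a NAND type."""
    return RETENTION_YEARS[nand_type.upper()]

print(retention_estimate("tlc"))  # 3
```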
The problem is that most consumer SSDs use TLC or QLC NAND, so users who leave their SSDs unpowered for over a year are risking the integrity of their data. QLC reliability has improved over the years, but you should still treat 2–3 years of unpowered storage as the outer limit. Without power, the charge stored in the NAND cells gradually leaks away, resulting in either corrupted files or a completely unreadable drive.
This data-retention weakness makes consumer SSDs an unreliable medium for long-term storage, especially for creative professionals and researchers. HDDs can suffer from bit rot too, due to wear and tear, but they tolerate long unpowered stretches far better. If you haven't checked your archives in a while, I'd recommend doing so as soon as possible.
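One practical way to "check your archives" is to record checksums when you write the archive and compare them later. Here's a minimal sketch: the function names and manifest format are my own invention, not a standard tool, but the hashing approach itself is the usual one.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large archives aren't loaded into RAM."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def snapshot(root: Path) -> dict[str, str]:
    """Map every file under root to its digest; save this when you archive."""
    return {str(p.relative_to(root)): sha256_of(p)
            for p in sorted(root.rglob("*")) if p.is_file()}

def verify(root: Path, manifest: dict[str, str]) -> list[str]:
    """Return the files whose current digest no longer matches the manifest."""
    current = snapshot(root)
    return [name for name, digest in manifest.items()
            if current.get(name) != digest]
```

Run snapshot() when you shelve the drive, store the manifest somewhere else, and run verify() whenever you power the drive back up; a non-empty list means silent corruption.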
(Score: 2) by aafcac on Tuesday December 02, @04:11PM
Having messed around with former ZFS-based disks that I had decommissioned, one of the things I came to realize is just how fragile they can be if they get damaged. It hasn't been an issue for me, since I've generally been using the disks as part of a zmirror or raidz configuration and haven't lost data because of that, but there's weird stuff that can happen to the label that makes the disk a right pain to work with if you don't already know how to handle it and come prepared.
That's really the biggest worry I have, as it can be relatively hard to push systems like this hard enough to really establish that they're reliable without putting data on them that would be annoying to have to restore. I'm also curious about Btrfs, which does play a bit better with Linux.
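The kind of periodic health check the commenter is describing can be scripted. A minimal sketch: in practice you'd capture the text via subprocess from the real zpool status command, but the parsing below runs against a hard-coded sample because the exact output layout varies across ZFS versions — treat the format (and the pool name "tank") as assumptions.

```python
import re

# Sample text mimicking typical `zpool status` output for a healthy pool;
# real output may differ across OpenZFS versions.
SAMPLE = """\
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 01:23:45 with 0 errors on Tue Dec  2 04:00:00 2025
errors: No known data errors
"""

def pool_healthy(status_text: str) -> bool:
    """Crude health check: pool ONLINE and no known data errors reported."""
    state_ok = re.search(r"^\s*state:\s*ONLINE", status_text, re.M) is not None
    no_errors = "No known data errors" in status_text
    return state_ok and no_errors

print(pool_healthy(SAMPLE))  # True
```

Pairing something like this with a scheduled zpool scrub gives you an early warning before a damaged disk or mangled label turns into actual data loss.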
Side rant: But really, the whole thing is the result of some rather poor decisions made decades ago without any real consideration of the consequences. In practice, the difference between the GPL and something more permissive like the BSD or MIT license is pretty much non-existent if you don't have attorneys, and even if you do, the difference is still rather small. Sure, the GPL theoretically prevents people from taking the code and distributing binaries without providing the source in the same place, but it also means that code can't be contributed as easily as it would be under a more permissive license.
I do like modern Linux in a lot of ways, but there have been an awful lot of boneheaded decisions lately: so many different ways programs can be packaged and managed, systemd everything, Wayland, and other things that feel like the same sort of nonsense MS used to engage in, where everybody is special and we need to change things because we need to change them. Never mind that it's absolutely absurd to have to reload system services just to issue a mount -a after modifying the fstab, or that 17+ years for the nonsense that is Wayland is way too long when all the pieces Linux needed to be a functional OS took a fraction of that time.