The Reality of SSD Capacity: No-One Wants Over 16TB Per Drive
One of the constants of the storage business is that capacity per drive keeps increasing. Spinning hard-disk drives are approaching 20 TB, while solid-state drives range from 4 TB to 16 TB, or even more if you're willing to entertain an exotic implementation. Today at the Data Centre World conference in London, I was quite surprised to hear that, for reasons of risk management, we're unlikely to see much demand for drives over 16 TB.
Speaking with a few individuals at the show about expanding capacities, I heard that storage customers who need high density are starting to specify maximum drive sizes based on their implementation needs. One message coming through is that storage deployments are managing risk through drive size: sure, a large-capacity drive allows for high density, but when a large drive fails, a lot of data is lost at once.
[...] Ultimately, drive size and failure rate together determine risk and downtime, and aside from engineering more reliable drives, the other variable for risk management is drive size. Based on the conversations I've had today, 16 TB seems to be the inflection point; no one wants to lose 16 TB of data in one go, regardless of how often it is accessed or how robust a storage array's failover mechanisms are.
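As a back-of-the-envelope illustration of that risk calculus, the Python sketch below estimates what replacing a failed 16 TB drive implies. The rebuild throughput and unrecoverable-read-error (URE) rate are illustrative assumptions, not figures from the article:

    # Why large drives concentrate risk: rebuild time and expected
    # read errors scale with drive size. All figures are assumptions.
    DRIVE_BYTES = 16e12     # 16 TB drive
    REBUILD_RATE = 200e6    # assumed sustained rebuild rate, bytes/s
    URE_RATE = 1e-15        # assumed unrecoverable errors per bit read

    # Time to read one full drive's worth of data during a rebuild.
    rebuild_hours = DRIVE_BYTES / REBUILD_RATE / 3600

    # Expected unrecoverable read errors over one full-drive read.
    expected_ures = DRIVE_BYTES * 8 * URE_RATE

    print(f"Rebuild time:  {rebuild_hours:.1f} hours")   # ~22.2 hours
    print(f"Expected UREs: {expected_ures:.3f} per read") # ~0.128

Under these assumptions, a single rebuild takes nearly a day of full-speed reading, during which the array runs degraded; doubling the drive size doubles both the exposure window and the expected error count.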
Related: Toshiba Envisions a 100 TB QLC SSD in the "Near Future"
Samsung Announces a 128 TB SSD With QLC NAND
(Score: 3, Insightful) by The Mighty Buzzard on Friday March 15 2019, @01:12AM (1 child)
Like I said, if it's important data that's only on one drive, you failed right from the start. RAID and two sets of backups should be the bare minimum for most anything.
My rights don't end where your fear begins.
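A minimal sketch of the verification side of that advice, assuming the backups are plain directory-tree copies; the paths and the choice of SHA-256 are illustrative, not anything the poster specified:

    # Compare a primary copy against one backup set by content hash.
    import hashlib
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        """Stream a file through SHA-256 so large files don't fill RAM."""
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def compare_copies(primary: Path, backup: Path) -> list[Path]:
        """Return relative paths whose backup copy is missing or differs."""
        bad = []
        for src in primary.rglob("*"):
            if not src.is_file():
                continue
            dst = backup / src.relative_to(primary)
            if not dst.is_file() or sha256_of(src) != sha256_of(dst):
                bad.append(src.relative_to(primary))
        return bad

    # Example (hypothetical paths):
    # compare_copies(Path("/data"), Path("/mnt/backup1"))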
(Score: 3, Interesting) by AthanasiusKircher on Friday March 15 2019, @02:03AM
Yes, and if you actually care about data integrity in RAID and in those backups, you probably want ECC RAM and a filesystem that can detect and correct random errors and bitrot, such as ZFS.
Once we're talking about 16 TB of data, some degradation becomes statistically likely over time, even from random bit flips alone.
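A rough sketch of the statistics behind that claim; the per-bit annual flip probability is an assumed, illustrative figure, but the shape of the result holds for any fixed rate:

    # P(at least one flip) = 1 - (1 - p)^(n * years), computed
    # stably with log1p/expm1 since p is tiny and n is huge.
    import math

    bits = 16e12 * 8          # ~1.28e14 bits in 16 TB
    p_flip_per_year = 1e-15   # assumed per-bit flip probability per year

    for years in (1, 5, 10):
        p_any = -math.expm1(bits * years * math.log1p(-p_flip_per_year))
        print(f"{years:2d} year(s): P(>=1 flip) = {p_any:.1%}")
    # -> ~12.0%, ~47.3%, ~72.2%

Even at a per-bit rate that sounds negligible, 16 TB supplies so many bits that undetected corruption becomes more likely than not within a decade, which is exactly the case for checksumming filesystems and periodic scrubs.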