posted by martyb on Thursday March 14 2019, @11:58PM   Printer-friendly
from the a-bit-of-an-overstatement? dept.

The Reality of SSD Capacity: No-One Wants Over 16TB Per Drive

One of the expanding elements of the storage business is that capacity per drive has been ever increasing. Spinning hard-disk drives will soon approach 20 TB, while solid state storage can vary from 4TB to 16TB, or even more if you're willing to entertain an exotic implementation. Today at the Data Centre World conference in London, I was quite surprised to hear that due to managed risk, we're unlikely to see much demand for drives over 16TB.

Speaking with a few individuals at the show about expanding capacities, storage customers that need high density are starting to discuss maximum drive size requirements based on their implementation needs. One message starting to come through is that storage deployments are looking at managing risk with drive size – sure, a large-capacity drive allows for high density, but a failure of a large drive means a lot of data is going to be lost.

[...] Ultimately the size of the drive and its failure rate combine into an element of risk and downtime, and aside from engineering more reliable drives, the other variable for risk management is drive size. 16TB, based on the conversations I've had today, seems to be that inflection point; no-one wants to lose 16TB of data in one go, regardless of how often it is accessed, or how much additional failover redundancy the storage array provides.
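To make that trade-off concrete, here is a rough back-of-the-envelope sketch in Python (not from the article; the 1% annualized failure rate and 250 MB/s rebuild throughput are illustrative assumptions) showing how both the data exposed by a single failure and the rebuild window grow linearly with capacity:

    # Back-of-the-envelope: data at risk and rebuild time vs. drive capacity.
    # The AFR and rebuild throughput below are illustrative assumptions,
    # not figures from the article.

    AFR = 0.01            # assumed 1% annualized failure rate per drive
    REBUILD_MBPS = 250    # assumed sustained rebuild throughput in MB/s

    for capacity_tb in (4, 8, 16, 32, 64):
        expected_loss_tb = AFR * capacity_tb                      # expected data at risk per drive-year
        rebuild_hours = capacity_tb * 1e6 / REBUILD_MBPS / 3600   # TB -> MB, then seconds -> hours
        print(f"{capacity_tb:>3} TB drive: ~{expected_loss_tb:.2f} TB/year at risk, "
              f"~{rebuild_hours:.0f} h rebuild")

Under those assumed numbers, a 16TB drive already implies a rebuild on the order of 18 hours, which is the sort of exposure the customers quoted above are trying to cap by limiting drive size rather than relying on redundancy alone.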

Related: Toshiba Envisions a 100 TB QLC SSD in the "Near Future"
Samsung Announces a 128 TB SSD With QLC NAND


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Friday March 15 2019, @12:41AM (#814564) (1 child)

    you can drive the chance of data loss down exponentially if you can make your backups' failures independent events.

    Some people prefer predictable failures rather than statistically random independent ones.

  • (Score: 0) by Anonymous Coward on Friday March 15 2019, @01:23AM (#814584)

    The alternative to making the failures of your backups as independent as possible isn't making the failures predictable, it's making sure your backups all fail at the same time. You trade off minimizing the risk of data loss in favour of maximizing the chance that either all your backups work or none of them do. Moving towards non-independence just approximates having a single backup, which you could get more cheaply by keeping a single backup in the first place, with no increase in the risk of data loss, because it literally couldn't get any worse than perfect correlation.
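As an aside on the independence point in these comments, a minimal sketch (the 5% per-copy failure probability is a made-up figure) of how the chance of losing every copy compares when failures are independent versus perfectly correlated:

    # Probability of losing every copy of the data, assuming each copy
    # fails with probability p over some period (p = 0.05 is a made-up figure).
    p = 0.05

    for n in (1, 2, 3, 4):
        independent = p ** n   # all n independent copies fail together
        correlated = p         # perfectly correlated copies fail as one
        print(f"{n} copies: independent {independent:.2e}, correlated {correlated:.2e}")

With independent failures the probability of total loss shrinks exponentially with the number of copies (p^n), while with perfectly correlated failures it stays at p no matter how many copies you keep, which is why the correlated setup is no better than a single backup.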