Ars Technica is reporting that "Samsung unveils 2.5-inch 16TB SSD: The world's largest hard drive." [arstechnica.com] The third-generation 3D V-NAND now stacks up to 48 TLC layers for 256 Gbit per die. From the article:
At the Flash Memory Summit in California, Samsung has unveiled what appears to be the world's largest hard drive—and somewhat surprisingly, it uses NAND flash chips rather than spinning platters. The rather boringly named PM1633a, which is being targeted at the enterprise market, manages to cram almost 16 terabytes into a 2.5-inch SSD package. By comparison, the largest conventional hard drives made by Seagate and Western Digital currently max out at 8 or 10TB.
The secret sauce behind Samsung's 16TB SSD is the company's new 256Gbit (32GB) NAND flash die [samsungtomorrow.com]; twice the capacity of 128Gbit NAND dies that were commercialised by various chip makers last year. To reach such an astonishing density, Samsung has managed to cram 48 layers of 3-bits-per-cell (TLC) 3D V-NAND into a single die. This is up from 24 layers in 2013, and then 36 layers in 2014.
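As a back-of-envelope check (the die count below is my own estimate, not a figure from the article), a 256 Gbit die works out to 32 GB, so the drive's 15.36 TB of usable capacity needs on the order of 480 such dies:

```python
# Back-of-envelope: how many 256 Gbit dies does ~15.36 TB need?
# (Estimate only; actual die and package counts are not in the article.)

BITS_PER_BYTE = 8

die_gbit = 256                       # third-gen V-NAND die capacity
die_gb = die_gbit / BITS_PER_BYTE    # 32 GB per die

usable_tb = 15.36                    # usable capacity from the article
dies_needed = usable_tb * 1000 / die_gb

print(f"{die_gb:.0f} GB per die, ~{dies_needed:.0f} dies for {usable_tb} TB")
```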
Though the claimed capacity is 16 TB, actual available storage is 15.36 TB (leaving 640 GB for over-provisioning). The drive is 15mm high, so it is geared to the enterprise market; it probably won't fit in your laptop, where 9.5mm is an unofficial standard. [wikipedia.org]
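The over-provisioning figure checks out: 640 GB is simply the gap between the claimed and usable capacities, about 4% of the drive (a quick sanity check, not a figure from the article):

```python
# Sanity-check the over-provisioning math (decimal TB/GB).
claimed_tb = 16.0
usable_tb = 15.36

op_gb = round((claimed_tb - usable_tb) * 1000)        # spare area in GB
op_pct = 100 * (claimed_tb - usable_tb) / claimed_tb  # share of claimed capacity

print(f"over-provisioning: {op_gb} GB ({op_pct:.0f}% of claimed capacity)")
```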
In case you were wondering, by some estimates this capacity is enough to store 1.5 copies of the uncompressed textual data in the print collection of the US Library of Congress [wikipedia.org] (LoC).
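The LoC comparison presumably rests on the oft-cited ballpark of roughly 10 TB for the Library's print collection as uncompressed text; with that assumption (mine, not the submitter's), the ratio does land near 1.5:

```python
# Assumes the commonly cited ~10 TB estimate for the LoC's print
# collection as text; this estimate is not stated in the post.
loc_tb = 10.0
drive_tb = 15.36

copies = drive_tb / loc_tb
print(f"~{copies:.1f} copies of the LoC print collection")  # ~1.5
```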
It boggles my mind to consider such large storage capacities. Given a global population of about 8.3 billion, just one of these drives would be sufficient to store 1.8 KiB of data for every human being on the planet, never mind an entire rack of these drives.
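The per-person figure also checks out, taking the post's 8.3 billion population estimate and the drive's 15.36 TB of usable capacity:

```python
# Bytes available per person from one 15.36 TB drive.
usable_bytes = 15.36e12
population = 8.3e9               # population figure used in the post

per_person = usable_bytes / population  # ~1850 bytes each
per_person_kib = per_person / 1024

print(f"~{per_person_kib:.1f} KiB per person")  # ~1.8 KiB
```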
What practical use is there for such capacities? What would you do with one (or more) of these? How would this fit into your "Big Data" application?