
SoylentNews is people

posted by Fnord666 on Wednesday September 13 2017, @06:18PM   Printer-friendly
from the defrag-with-windex dept.

Using a glass substrate instead of aluminum could allow 12 platters to be crammed into a 3.5" hard disk drive enclosure:

Even if many modern systems eschew classic hard drive storage designs in favor of solid state alternatives, there are still a number of companies working on improving the technology. One of those is Hoya, which is currently prototyping glass substrates for hard drive platters of the future which could enable the production of drives with as much as 20TB of storage space.

Hard drive platters are traditionally produced using aluminum substrates. While these substrates have enabled many modern advances in hard drive technology, glass substrates can be made with similar densities, but can be much thinner, leading to higher capacity storage drives. Hoya has already managed the creation of substrates as thin as 0.381mm, which is close to half the thickness of existing high-density drives.

In one cited example, an existing 12-terabyte drive from Western Digital was made up of eight platters. Hoya believes that by decreasing the thickness of the platters through its glass technology, it could fit as many as 12 inside a 3.5 inch hard drive casing. That would enable up to 18TB of storage space in a single drive (thanks Nikkei).
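The capacity arithmetic here can be checked in a couple of lines (a quick sketch using only the figures cited above):

```python
# Quick sanity check of the platter arithmetic cited above.
tb_per_platter = 12 / 8      # WD's existing 12TB drive uses eight platters -> 1.5TB each
glass_platters = 12          # Hoya's projected count with thinner glass substrates
print(tb_per_platter * glass_platters)   # -> 18.0, matching the 18TB figure
```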

When that is blended with a technology known as "shingled magnetic recording," 20TB should be perfectly achievable.

Toshiba is reportedly planning to release a 14 TB helium-filled hard drive by the end of the year.

Also at Network World.


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by TheRaven on Friday September 15 2017, @09:17AM (2 children)

    by TheRaven (270) on Friday September 15 2017, @09:17AM (#568345) Journal

    What bothers me about SSDs is that all it could take is a malicious program (or just an unthinking sysadmin) eating up writes over a few months, and your device's lifetime just took a dump.

    That's not really likely. Consider a device that has 1,000 rewrite cycles per cell (pretty low for modern devices). You have 1TB of space. If you assume perfect wear levelling (for a minute) then that gives you 1,000TB of writes. If you can do 300MB/s of writes, then it takes five to six weeks of sustained writes at the drive's maximum write speed to wear out the cells. In practice, you can't even do this, because once you've written 1TB (even deleting and using TRIM) the garbage collector will be running slower than the writes and this will cause back pressure on the interface. If your device is 2TB, then the lifetime with the same number of rewrites per cell doubles.
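    The endurance estimate above can be worked through explicitly (a sketch using the comment's own idealised numbers: perfect wear levelling, 1,000 cycles per cell, 300MB/s sustained):

```python
# Idealised SSD endurance estimate under the perfect-wear-levelling
# assumption described above. All figures are the comment's own.
capacity_tb = 1.0        # drive capacity in TB
pe_cycles = 1000         # program/erase cycles per cell
write_mb_s = 300         # sustained write speed in MB/s

endurance_tb = capacity_tb * pe_cycles            # total writable data: 1,000 TB
seconds = endurance_tb * 1_000_000 / write_mb_s   # TB -> MB, divided by MB/s
print(f"~{seconds / 86400:.0f} days of non-stop max-speed writes")  # ~39 days
```

    Doubling either the capacity or the per-cell cycle count doubles the result, which is the linear scaling discussed below.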

    Now, the assumption that wear levelling is perfect is wrong, but modern controllers achieve roughly 80-90% of that ideal, so we're still talking a good month of solid sustained full-speed writes for the cells to wear out, and the lifetime scales almost linearly with the capacity: double the capacity and it will take twice as long to write each cell.

    It's not like the old SSDs that did little or no wear levelling, where writing a single block repeatedly could kill that block and you'd need the filesystem to work around that.

    It's also worth noting that the death is not guaranteed, it's probabilistic for each cell. Over a RAID-1 pair, it's actually quite unlikely that they'll die at exactly the same time, because that requires them to run out of spare cells to remap at the same time, which relies on a bunch of quantum effects happening at exactly the same rate for both drives. That can happen, but it's not nearly as likely as you suggest. There's also often some (deliberate) nondeterminism in the remapping, so the exact pattern of writes won't actually be the same, even in a RAID-1 pair.
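    A toy simulation illustrates the point (a sketch with entirely made-up endurance numbers, not from any real datasheet: each drive consumes spare blocks at a slightly randomised rate, and we count how often two independent mirrors exhaust their spares after exactly the same amount of written data):

```python
import random

# Toy Monte Carlo of the "both RAID-1 mirrors die at once" scenario.
# Illustrative numbers only -- not taken from any real drive.
def lifetime(spares=100, mean_writes=1000, sd=50):
    """Total writes absorbed before the spare-block pool is exhausted."""
    writes = 0
    while spares > 0:
        writes += max(1, int(random.gauss(mean_writes, sd)))  # one block wears out
        spares -= 1                                           # one spare consumed
    return writes

random.seed(0)
trials = 10_000
same = sum(lifetime() == lifetime() for _ in range(trials))
print(f"identical lifetimes in {same}/{trials} mirrored pairs")
```

    Even with identical nominal endurance, the two mirrors almost never land on exactly the same lifetime, which is the comment's point about simultaneous failure being unlikely.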

    --
    sudo mod me up
    Starting Score:    1  point
    Karma-Bonus Modifier   +1  

    Total Score:   2  
  • (Score: 2) by edIII on Friday September 15 2017, @07:37PM (1 child)

    by edIII (791) on Friday September 15 2017, @07:37PM (#568654)

    I like your points, but we are not talking 1TB. The costs are still way too high. Try 64GB (most common in production), 128GB, or maybe 256GB (although I don't know of a single one in production). There were a huge number of writes going on, some logging left on verbose from development. Failure wasn't over a couple of months, and my worry about malicious use was perhaps a bit exaggerated. I experienced failure within 18 months, but the drive had been in production for maybe a year before that. Since I wasn't the sysadmin who put any of them together, it never occurred to me to worry about the SSD and how many writes were occurring. I just said thank you and moved on to provisioning it further for services :)

    Working on some 1TB NVMe drives right now. You're correct, I'm less worried about those, even more so since they are RAID-1. I did not know it was probabilistic. Thanks for pointing that out.

    It's also worth noting that the death is not guaranteed, it's probabilistic for each cell. Over a RAID-1 pair, it's actually quite unlikely that they'll die at exactly the same time...

    Yeah, well, like I said, the MTBF lottery winner right here :) Six expensive-ass enterprise SAS drives all failed simultaneously, within 2.5 years of being put into production. Every. Single. Drive. Major surface damage, according to Drive Savers. So... after that little experience I tend to view MTBF a bit more cynically.

    Thank you for your post. I do actually feel better about it.

    --
    Technically, lunchtime is at any moment. It's just a wave function.
    • (Score: 2) by TheRaven on Monday September 18 2017, @09:55AM

      by TheRaven (270) on Monday September 18 2017, @09:55AM (#569681) Journal

      I like your points, but we are not talking 1TB. The costs are still way too high. Try 64GB (most common in production), 128GB, or maybe 256GB (although I don't know of a single one in production).

      My 4-year-old laptop has a 1TB SSD, and most of our build machines typically have 512GB SSDs that are used with ZFS as log and cache devices for RAID-1 disks (stuff rarely needs reading from the disks, because the SSDs are large enough for the working set). 64GB is a really odd place for the cost-benefit calculation to win; I'm not even sure where you'd buy them anymore. A quick look shows 128GB SSDs costing around £50, with 256GB costing around 50% more, 512GB around double that, and 1TB around 60% more than that, so 1TB comes pretty close to the sweet spot. That said, you don't buy SSDs at all if capacity is your bottleneck; you buy them if IOPS is your bottleneck, and in that case the 1TB drives are very cheap compared to anything else on the market (and NVMe is even cheaper).
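      The cost-per-GB comparison works out like this (a sketch using the rough, rounded prices quoted above; the £ figures are the comment's approximations, not current market prices):

```python
# Cost per GB from the approximate prices quoted above:
# 128GB ~ £50; 256GB ~ +50%; 512GB ~ double that; 1TB ~ +60% on that.
prices_gbp = {
    128: 50.0,
    256: 50.0 * 1.5,
    512: 50.0 * 1.5 * 2,
    1024: 50.0 * 1.5 * 2 * 1.6,
}
for gb, price in prices_gbp.items():
    print(f"{gb:5d}GB: £{price:4.0f}  (£{price / gb:.3f}/GB)")
# Per-GB cost falls as capacity rises, so 1TB sits near the sweet spot.
```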

      --
      sudo mod me up