posted by Fnord666 on Wednesday September 13 2017, @06:18PM
from the defrag-with-windex dept.

Using a glass substrate instead of aluminum could allow 12 platters to be crammed into a 3.5" hard disk drive enclosure:

Even though many modern systems eschew classic hard drive storage in favor of solid state alternatives, there are still a number of companies working on improving the technology. One of those is Hoya, which is currently prototyping glass substrates for future hard drive platters that could enable the production of drives with as much as 20TB of storage space.

Hard drive platters are traditionally produced using aluminum substrates. While these substrates have enabled many modern advances in hard drive technology, glass substrates can be made with similar densities but can be much thinner, leading to higher-capacity drives. Hoya has already produced substrates as thin as 0.381mm, which is close to half the thickness of the platters used in existing high-density drives.

In one cited example, an existing 12-terabyte drive from Western Digital was made up of eight platters. Hoya believes that by decreasing the thickness of the platters through its glass technology, it could fit as many as 12 inside a 3.5 inch hard drive casing. That would enable up to 18TB of storage space in a single drive (thanks Nikkei).

When that is blended with a technology known as "shingled magnetic recording," 20TB should be perfectly achievable.
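The arithmetic behind those figures, as a rough sketch (the per-platter capacity is only implied by the numbers above, and the SMR uplift is a typical ballpark rather than a vendor spec):

    # Per-platter capacity implied by the quoted 12TB / 8-platter drive.
    platters_now, capacity_now_tb = 8, 12
    per_platter_tb = capacity_now_tb / platters_now   # 1.5 TB per platter
    print(12 * per_platter_tb)                        # 12 platters -> 18.0 TB
    # Shingled magnetic recording typically adds on the order of 10-25% more
    # capacity, which is roughly how an 18TB design stretches to "about 20TB".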

Toshiba is reportedly planning to release a 14 TB helium-filled hard drive by the end of the year.

Also at Network World.


Original Submission

 
  • (Score: 2) by TheRaven on Thursday September 14 2017, @07:43AM (6 children)

    by TheRaven (270) on Thursday September 14 2017, @07:43AM (#567680) Journal

    What I won't do is trust that an SSD can last forever in a server, or even as long as a spinning disk.

    It's been about five years since the average SSD lifetime passed the average spinning-rust disk lifetime. Hard disks also fail catastrophically, losing all of the data. Get a tiny bit of dust under the head and you can completely destroy all of the data on the disk in a few minutes as the head scrapes across the platter. There are a lot of other failure modes. If you've not encountered these in the wild, then you're either responsible for a very small number of machines with hard drives or you're very, very lucky.

    --
    sudo mod me up
  • (Score: 2) by takyon on Thursday September 14 2017, @07:44AM (1 child)

    by takyon (881) on Thursday September 14 2017, @07:44AM (#567681) Journal

    Buy the new helium-filled hard drives. I'd like to see dust try and get in there.

    --
    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2) by TheRaven on Thursday September 14 2017, @09:02AM

      by TheRaven (270) on Thursday September 14 2017, @09:02AM (#567702) Journal
      Dust typically doesn't get in; it's more often already there as a result of failures in the clean-room process during assembly, and it becomes dislodged during operation. This is just as possible with helium-filled drives. Being sufficiently enclosed that dust can't get in is orders of magnitude easier than being helium-tight. You can see how quickly helium leaks out of a balloon by trying to light one with a match: the helium escaping through the skin extinguishes the match (don't try this with a hydrogen balloon!). No one (except very small children, for whom it is a tragedy) cares that a helium balloon goes flat in a few days, but when a hard drive depends on helium rather than air being present for a multi-year operating lifetime, that's a really narrow design tolerance for the enclosure. Narrow design tolerances translate to new and exciting failure modes.
      --
      sudo mod me up
  • (Score: 2) by edIII on Thursday September 14 2017, @08:52PM (3 children)

    by edIII (791) on Thursday September 14 2017, @08:52PM (#568076)

    Come now, you don't need to denigrate me unduly; I've experienced quite a few other failure modes. SSD is a bit different. As for experience, I've had spinning drives operating 10 years or more before failure, while some expensive enterprise drives win the fucking lottery for MTBF and die early and spectacularly. One recovery engineer once described the surface of the failed disk as Apocalypse Now. So much for enterprise quality.

    Lifetime with SSD is pretty much irrelevant. It's all about disk writes, and that is the problem with SSD. With a hard drive it is only *possible* that it will fail within 5-10 years. With an SSD it is as certain as death and taxes that it will eventually die. In fact, it's much like a human in that there are only so many beats of the heart, only so many breaths.....

    What bothers me about SSD is that all it could take is a malicious program (or just an unthinking sysadmin) eating up writes over a few months and your device lifetime just took a dump. All of my SSDs are now monitored for SMART status and writes left. I use the standard deviation to attempt to predict how much life is left in the drives. Looking forward to new generations of SSD that vastly increase the number of writes possible. At that point, I won't be as worried about creating a database server on one. It's worth noting that even with RAID 1, both of the SSDs suffer the malicious writes at the same time, and both will die within a short period of each other.
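    A minimal sketch of that kind of monitoring, assuming smartmontools is installed and a Python wrapper is acceptable; wear-related SMART attribute names vary by vendor (Media_Wearout_Indicator, Wear_Leveling_Count, Percentage Used, and so on), so the keyword list below is only an illustrative guess:

        import subprocess

        # Ask smartctl for the SMART attribute table of one drive.
        # /dev/sda is a placeholder device path.
        out = subprocess.run(
            ["smartctl", "-A", "/dev/sda"],
            capture_output=True, text=True, check=False,
        ).stdout

        # Pick out lines that look wear/endurance related. The exact attribute
        # names are vendor-specific, so this is a best-effort keyword match that
        # would feed a proper monitoring system rather than replace one.
        keywords = ("wearout", "wear_leveling", "percent", "lbas_written",
                    "percentage used")
        for line in out.splitlines():
            if any(k in line.lower() for k in keywords):
                print(line.strip())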

    --
    Technically, lunchtime is at any moment. It's just a wave function.
    • (Score: 2) by TheRaven on Friday September 15 2017, @09:17AM (2 children)

      by TheRaven (270) on Friday September 15 2017, @09:17AM (#568345) Journal

      What bothers me about SSD is that all it could take is a malicious program (or just an unthinking sysadmin) eating up writes over a few months and your device lifetime just took a dump

      That's not really likely. Consider a device that has 1,000 rewrite cycles per cell (pretty low for modern devices) and 1TB of space. If you assume perfect wear levelling (for a minute), that gives you 1,000TB of writes. If you can do 300MB/s of writes, it takes roughly 40 days of sustained writing at that maximum rate to push 1,000TB through the cells. In practice, you can't sustain that rate, because once you've written 1TB (even deleting and using TRIM) the garbage collector will be running slower than the writes and this will cause back pressure on the interface. If your device is 2TB, then the lifetime with the same number of rewrites per cell doubles.

      Now, the assumption that wear levelling is perfect is wrong, but modern controllers get within about 80-90% of it, so we're still talking on the order of a month of solid, sustained writes before the cells wear out, and the lifetime scales almost linearly with capacity: double the capacity and it takes twice as long to cycle every cell.
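      A back-of-the-envelope version of that estimate (the cycle count, capacity, write speed, and wear-levelling efficiency are just the illustrative figures from this thread, not the spec of any particular drive):

          # SSD endurance estimate using the figures discussed above.
          CYCLES_PER_CELL = 1_000         # rated program/erase cycles (assumed)
          CAPACITY_TB = 1.0               # drive capacity in TB (assumed)
          WRITE_SPEED_MB_S = 300          # sustained write speed in MB/s (assumed)
          WEAR_LEVEL_EFFICIENCY = 0.85    # roughly 80-90% of perfect wear levelling

          total_writes_tb = CYCLES_PER_CELL * CAPACITY_TB * WEAR_LEVEL_EFFICIENCY
          seconds = total_writes_tb * 1_000_000 / WRITE_SPEED_MB_S   # 1 TB = 1,000,000 MB
          print(f"{total_writes_tb:.0f} TB of writes, about {seconds / 86_400:.0f} days flat out")
          # With these inputs: 850 TB of writes, roughly 33 days of non-stop
          # writing at full speed; doubling CAPACITY_TB doubles both figures.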

      It's not like the old SSDs that did little or no wear levelling, where writing a single block repeatedly could kill that block and you'd need the filesystem to work around that.

      It's also worth noting that the death is not guaranteed, it's probabilistic for each cell. Over a RAID-1 pair, it's actually quite unlikely that they'll die at exactly the same time, because that requires them to run out of spare cells to remap at the same time, which relies on a bunch of quantum effects happening at exactly the same rate for both drives. That can happen, but it's not nearly as likely as you suggest. There's also often some (deliberate) nondeterminism in the remapping, so the exact pattern of writes won't actually be the same, even in a RAID-1 pair.
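      A toy Monte Carlo sketch of that argument; every number here (nominal endurance, drive-to-drive spread, replacement window) is made up purely for illustration, not measured from real drives:

          import random

          TRIALS = 100_000
          NOMINAL_DAYS = 1_000    # nominal time to exhaust spare blocks (assumed)
          SPREAD_DAYS = 45        # drive-to-drive variation, one std dev (assumed)
          WINDOW_DAYS = 2         # time to get a replacement in and resilvered (assumed)

          both_dead = 0
          for _ in range(TRIALS):
              # Independent spare-pool exhaustion times for the two halves of a RAID-1 pair.
              a = random.gauss(NOMINAL_DAYS, SPREAD_DAYS)
              b = random.gauss(NOMINAL_DAYS, SPREAD_DAYS)
              if abs(a - b) < WINDOW_DAYS:   # second drive dies before the first is replaced
                  both_dead += 1

          print(f"both die within {WINDOW_DAYS} days of each other in "
                f"{100 * both_dead / TRIALS:.1f}% of trials")
          # With these made-up inputs that comes out to only a couple of percent,
          # and it shrinks further as the spread between drives grows.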

      --
      sudo mod me up
      • (Score: 2) by edIII on Friday September 15 2017, @07:37PM (1 child)

        by edIII (791) on Friday September 15 2017, @07:37PM (#568654)

        I like your points, but we are not talking 1TB. The costs are still way too high. Try 64GB (most common in production), 128GB, or maybe 256GB (although I don't know of a single one in production). There was a huge amount of writing going on, including some logging left on verbose from development. The failure didn't happen over just a couple of months, and my worry about malicious use was perhaps a bit exaggerated. I experienced failure within 18 months, but the drive had been in production for maybe a year before that. Since I wasn't the sysadmin who put any of them together, it never occurred to me to worry about the SSD and how many writes were occurring. I just said thank you and moved on to provisioning it further for services :)

        Working on some 1TB NVMe drives right now. You're correct, I'm less worried about those, even more so since they're in RAID-1. I did not know the failure was probabilistic. Thanks for pointing that out.

        It's also worth noting that the death is not guaranteed, it's probabilistic for each cell. Over a RAID-1 pair, it's actually quite unlikely that they'll die at exactly the same time.........

        Yeah, well like I said, the MTBF lottery winner right here :) Six enterprise expensive-ass SAS drives all failed simultaneously within 2.5 years of being put into production. Every. Single. Drive. Major surface damage according to Drive Savers. So... after that little experience I tend to view MTBF a bit more cynically.

        Thank you for your post. I do actually feel better about it.

        --
        Technically, lunchtime is at any moment. It's just a wave function.
        • (Score: 2) by TheRaven on Monday September 18 2017, @09:55AM

          by TheRaven (270) on Monday September 18 2017, @09:55AM (#569681) Journal

          I like your points, but we are not talking 1TB. The costs are still way too high. Try 64GB (most common in production), 128GB, or maybe 256GB (although I don't know of a single one in production).

          My 4-year-old laptop has a 1TB SSD, and most of our build machines have 512GB SSDs that are used with ZFS as log and cache devices for RAID-1 disks (stuff rarely needs reading from the disks, because the SSDs are large enough for the working set). 64GB is a really odd place for the cost-benefit calculation to win; I'm not even sure where you'd buy them anymore. A quick look shows 128GB SSDs costing around £50, with 256GB costing around 50% more, 512GB around double that, and 1TB around 60% more than that, so 1TB comes pretty close to the sweet spot. That said, you don't buy SSDs at all if capacity is your bottleneck; you buy them if IOPS is your bottleneck, and in that case the 1TB drives are very cheap in comparison to anything else on the market (and per IOPS, NVMe is cheaper still).
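          A quick sanity check on that pricing curve, using the rough 2017 price points quoted above (ballpark figures, not current prices):

              # Approximate SSD prices (GBP) implied by the comment above.
              prices_gbp = {
                  128:  50,                   # around £50
                  256:  50 * 1.5,             # about 50% more
                  512:  50 * 1.5 * 2,         # around double that
                  1024: 50 * 1.5 * 2 * 1.6,   # around 60% more again
              }

              for gb, gbp in prices_gbp.items():
                  print(f"{gb:>5} GB: £{gbp:>6.0f}  (£{gbp / gb:.3f}/GB)")
              # Price per GB falls from about £0.39 at 128GB to about £0.23 at 1TB,
              # which is why the 1TB drives look like the sweet spot.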

          --
          sudo mod me up