
posted by Fnord666 on Wednesday September 13 2017, @06:18PM   Printer-friendly
from the defrag-with-windex dept.

Using a glass substrate instead of aluminum could allow 12 platters to be crammed into a 3.5" hard disk drive enclosure:

Even if many modern systems eschew classic hard drive storage designs in favor of solid state alternatives, there are still a number of companies working on improving the technology. One of those is Hoya, which is currently prototyping glass substrates for hard drive platters that could enable the production of drives with as much as 20TB of storage space.

Hard drive platters are traditionally produced using aluminum substrates. While aluminum substrates have enabled many modern advances in hard drive technology, glass substrates can be made with similar densities while being much thinner, leading to higher capacity storage drives. Hoya has already created substrates as thin as 0.381mm, which is close to half the thickness of the platters in existing high-density drives.

In one cited example, an existing 12-terabyte drive from Western Digital was made up of eight platters. Hoya believes that by decreasing the thickness of the platters through its glass technology, it could fit as many as 12 inside a 3.5 inch hard drive casing. That would enable up to 18TB of storage space in a single drive (thanks Nikkei).

When that is blended with a technology known as "shingled magnetic recording," 20TB should be perfectly achievable.
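
For a quick sanity check, the quoted capacities follow from simple per-platter arithmetic. Here is a small sketch using only the figures above; the roughly 11% gain attributed to shingled magnetic recording is just what going from 18TB to 20TB implies:

```python
# Back-of-envelope check of the capacities quoted above. All figures come from
# the summary; the ~11% SMR gain is simply what going from 18TB to 20TB implies.
platters_today = 8                 # platters in the existing 12TB Western Digital drive
capacity_today_tb = 12
per_platter_tb = capacity_today_tb / platters_today        # 1.5 TB per platter

platters_glass = 12                # what Hoya's thinner glass substrates could allow
capacity_glass_tb = platters_glass * per_platter_tb        # 18 TB

smr_uplift = 20 / 18               # implied gain from shingled magnetic recording
print(f"{per_platter_tb:.1f} TB/platter -> {capacity_glass_tb:.0f} TB conventional, "
      f"~{capacity_glass_tb * smr_uplift:.0f} TB with SMR")
```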

Toshiba is reportedly planning to release a 14 TB helium-filled hard drive by the end of the year.

Also at Network World.


Original Submission

 
  • (Score: 3, Interesting) by Anonymous Coward on Wednesday September 13 2017, @06:38PM (16 children)

    by Anonymous Coward on Wednesday September 13 2017, @06:38PM (#567369)

    I guess officially it was the DeskStar or something like that.

    It had glass platters. It worked great, until suddenly it didn't. Opening a dead drive would reveal clear glass platters and lots of dust. It seems that cascading failure would cause the entire surface layer to come free from the platters. A bit of the surface comes free, and then more, and before long there is NO coating left at all. It was sudden and dramatic.

    Coatings don't stick well to glass.

  • (Score: 5, Interesting) by edIII on Wednesday September 13 2017, @06:51PM (13 children)

    by edIII (791) on Wednesday September 13 2017, @06:51PM (#567382)

    I now have the same experience with SSDs. Had a server crap out because there were no more writes left in the SSD. Completely exhausted the poor thing, and Linux didn't react well to the sudden read-only nature of its storage. Since most of the servers were provisioned at the same time.....

    Not fun.

    For servers I now prefer spinning disk, since SSDs really aren't an option for me anymore. At least for the OS. Virtualization can mitigate that with live migration and high availability, but that doesn't apply to embedded and bare metal systems. I'm looking into memory disk solutions to bootstrap the OS from spinning disk into memory and push data every so often to the spinning disk for backup. When spinning disks are not around anymore, I'll do it from SSD to memory instead. Either that or the buddy system (RAID 1).

    What I won't do is trust that an SSD can last forever in a server, or even as long as a spinning disk. That, and FFS, I started monitoring the SMART statuses for how much life is left in them. Even Wile E Coyote and Daffy Duck had more graceful failures than SSDs do.

    We keep hearing about new tech to vastly increase the number of writes, and with NVMe, you need it.

    --
    Technically, lunchtime is at any moment. It's just a wave function.
    • (Score: 3, Interesting) by Anonymous Coward on Wednesday September 13 2017, @11:38PM (1 child)

      by Anonymous Coward on Wednesday September 13 2017, @11:38PM (#567530)

      The good ones go read-only when they are spent. You can get your data off. (AFAIK, this is Intel's policy.) Most SSDs are not good...

      The bad ones just lock up. At boot, the BIOS won't even see the drive.

      The really bad ones silently change your data.

      • (Score: 0) by Anonymous Coward on Thursday September 14 2017, @04:21AM

        by Anonymous Coward on Thursday September 14 2017, @04:21AM (#567635)

        The really bad ones silently change your data.

        I see you've heard of OCZ.

    • (Score: 2) by coolgopher on Thursday September 14 2017, @04:09AM (3 children)

      by coolgopher (1157) on Thursday September 14 2017, @04:09AM (#567632)

      With embedded you should be running your OS with a read-only root, and have carefully dimensioned your storage writes to last the intended lifetime of the device. And, you know, fail gracefully when you inevitably run out of writes ahead of time :)

      • (Score: 2) by edIII on Thursday September 14 2017, @05:24AM (2 children)

        by edIII (791) on Thursday September 14 2017, @05:24AM (#567653)

        Yes. Hindsight is 20/20. :)

        I'm doing so few writes to disk now that the lifetime is suitable.

        And, you know, fail gracefully when you inevitably run out of writes ahead of time :)

        How? I can make my code do that, but my impression was that there was a deeper problem in the operating system and a little corruption. Graceful failure was handled by redundant devices, but when they go within hours of each other...

        I'm not an expert at the underlying system, so any suggestions are welcome. I also got the impression from the other poster that SSDs can fail in different ways, some of them I can't handle gracefully :)

        --
        Technically, lunchtime is at any moment. It's just a wave function.
        • (Score: 2) by coolgopher on Thursday September 14 2017, @06:13AM (1 child)

          by coolgopher (1157) on Thursday September 14 2017, @06:13AM (#567665)

          Barring the SSD going bonkers on you, if you mount your storage partition with the appropriate options to remount read-only on error, you can watch for that happening and raise whatever type of alarm is applicable. In the meantime, your other apps will get EPERM or some such when they're trying to write, and as long as they can handle that sanely, you should at least be able to get a message back to base to say "hey, this unit is just about dead, come fix me!".
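
          In case it helps, here is a minimal sketch of that kind of watchdog in Python, assuming a Linux box where the partition was mounted with something like errors=remount-ro; the mount point and the report_failure() hook are placeholders to adapt:

```python
#!/usr/bin/env python3
# Minimal sketch of the watchdog described above: if a partition that was mounted
# with errors=remount-ro flips to read-only, raise the alarm. The mount point and
# report_failure() are placeholders.
import time

WATCHED_MOUNT = "/data"            # assumed data partition

def is_read_only(mount_point: str) -> bool:
    """Check /proc/mounts to see whether the mount point is currently 'ro'."""
    with open("/proc/mounts") as mounts:
        for line in mounts:
            _device, mnt, _fstype, options = line.split()[:4]
            if mnt == mount_point:
                return "ro" in options.split(",")
    return False                   # not mounted at all would be a different alarm

def report_failure(mount_point: str) -> None:
    # Placeholder: send the "this unit is just about dead, come fix me!" message.
    print(f"ALARM: {mount_point} has gone read-only")

if __name__ == "__main__":
    while True:
        if is_read_only(WATCHED_MOUNT):
            report_failure(WATCHED_MOUNT)
            break
        time.sleep(60)
```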

          • (Score: 2) by edIII on Thursday September 14 2017, @08:53PM

            by edIII (791) on Thursday September 14 2017, @08:53PM (#568078)

            Thanks for the suggestions :)

            --
            Technically, lunchtime is at any moment. It's just a wave function.
    • (Score: 2) by TheRaven on Thursday September 14 2017, @07:43AM (6 children)

      by TheRaven (270) on Thursday September 14 2017, @07:43AM (#567680) Journal

      What I won't do is trust that an SSD can last forever in a server, or even as long as a spinning disk.

      It's been about five years since the average SSD lifetime passed the average spinning rust disk lifetime. Hard disks also fail catastrophically, losing all of the data. Get a tiny bit of dust under the head and you can completely destroy all of the data on the disk in a few minutes as the head scrapes across the platter. There are a lot of other failure modes. If you've not encountered these in the wild, then you're either responsible for a very small number of machines with hard drives or you're very, very lucky.

      --
      sudo mod me up
      • (Score: 2) by takyon on Thursday September 14 2017, @07:44AM (1 child)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday September 14 2017, @07:44AM (#567681) Journal

        Buy the new helium-filled hard drives. I'd like to see dust try and get in there.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by TheRaven on Thursday September 14 2017, @09:02AM

          by TheRaven (270) on Thursday September 14 2017, @09:02AM (#567702) Journal

          Dust typically doesn't get in; it's more often already there as a result of failures in the clean room process during assembly, and becomes dislodged during operation. This is just as possible with helium-filled drives. Being sufficiently enclosed that dust can't get in is orders of magnitude easier than being helium-tight. You can see how quickly helium leaks out of a balloon by trying to light a helium balloon with a match: the helium escaping through the skin extinguishes the match - don't try this with a hydrogen balloon!

          No one (except very small children, for whom it is a tragedy) cares that a helium balloon goes flat in a few days, but when a hard drive depends on helium and not air being present for a multi-year operating lifetime, that's a really narrow design tolerance for the enclosure. Narrow design tolerances translate to new and exciting failure modes.

          --
          sudo mod me up
      • (Score: 2) by edIII on Thursday September 14 2017, @08:52PM (3 children)

        by edIII (791) on Thursday September 14 2017, @08:52PM (#568076)

        Come now, you don't need to unduly denigrate me, and I've experienced quite a few other failure modes. SSD is a bit different. As for experience, I've had spinning drives operating 10 years or more before failure, and some expensive enterprise drives win the fucking lottery for MTBF and die early and spectacularly. One recovery engineer once described the surface of the hard disk as Apocalypse Now. So much for enterprise quality.

        Lifetime with SSD is pretty much irrelevant. It's all about disk writes, and that is the problem with SSD. With a hard drive it is only *possible* that it will fail within 5-10 years. With an SSD it is as certain as death and taxes that it will eventually die. In fact, it's much like a human in that there are only so many beats of the heart, only so many breaths.....

        What bothers me about SSD is that all it could take is a malicious program (or just an unthinking sysadmin) eating up writes over a few months, and your device lifetime just took a dump. All of my SSDs are now monitored for SMART status and writes left. I use the standard deviation to attempt to predict how much life is left in the drives. I'm looking forward to new generations of SSD that vastly increase the number of writes possible. At that point, I won't be as worried about creating a database server on one. It's worth noting that even with RAID 1, both of the SSDs suffer from the malicious writes at the same time, and both will die within a short time period together.
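
        For anyone wanting to do the same kind of monitoring, here is a rough sketch using smartctl from smartmontools. The device path and the wear-related attribute names are assumptions; vendors report wear under different names, so adjust the list for your drives:

```python
#!/usr/bin/env python3
# Rough sketch of that kind of SMART polling, using smartctl from smartmontools.
# The device path and the attribute names are assumptions -- vendors report wear
# under different names, so adjust the list for your drives.
import subprocess

DEVICE = "/dev/sda"                # assumed device
WEAR_HINTS = ("Media_Wearout_Indicator", "Wear_Leveling_Count",
              "Percent_Lifetime_Remain", "SSD_Life_Left", "Percentage Used")

def wear_lines(device: str) -> list:
    """Return the smartctl attribute lines that look wear-related."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    return [line for line in out.splitlines()
            if any(hint in line for hint in WEAR_HINTS)]

if __name__ == "__main__":
    for line in wear_lines(DEVICE):
        print(line)
```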

        --
        Technically, lunchtime is at any moment. It's just a wave function.
        • (Score: 2) by TheRaven on Friday September 15 2017, @09:17AM (2 children)

          by TheRaven (270) on Friday September 15 2017, @09:17AM (#568345) Journal

          What bothers me about SSD is that all it could take is a malicious program (or just an unthinking sysadmin) eating up writes over a few months and your device lifetime just took a dump

          That's not really likely. Consider a device that has 1,000 rewrite cycles per cell (pretty low for modern devices). You have 1TB of space. If you assume perfect wear levelling (for a minute) then that gives you 1,000TB of writes. If you can do 300MB/s of writes, then it takes about six years of sustained writes at the drive's maximum write speed to wear out the cells. In practice, you can't do this, because once you've written 1TB (even deleting and using TRIM) the garbage collector will be running slower than the writes and this will cause back pressure on the interface. If your device is 2TB, then the lifetime with the same number of rewrites per cell doubles.

          Now, the assumption that wear levelling is perfect is wrong, but modern controllers give about 80-90% of the equivalent, so we're still talking a good 3-4 years of solid sustained writes for the cells to wear out and the lifetime scales almost linearly with the capacity - double the capacity and it will take twice as long to write each cell.
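
          To make that arithmetic easy to re-run with different figures, here is a small parameterized sketch. The capacity and cycle count match the example above and the wear-levelling efficiency follows the 80-90% estimate; the average daily write volume is an illustrative assumption:

```python
# The same endurance arithmetic, parameterized so it's easy to re-run. Capacity and
# cycle count follow the example above; the wear-levelling efficiency uses the 80-90%
# estimate, and the average daily write volume is an illustrative assumption.
capacity_tb = 1.0                  # drive capacity (TB)
rewrite_cycles = 1000              # program/erase cycles per cell (low for modern flash)
wear_level_eff = 0.85              # ~80-90% of ideal wear levelling

tbw = capacity_tb * rewrite_cycles * wear_level_eff   # total terabytes-written budget
daily_writes_tb = 1.0              # assumed average workload of 1 TB/day of writes

print(f"endurance budget ~{tbw:.0f} TBW, "
      f"~{tbw / daily_writes_tb / 365:.1f} years at {daily_writes_tb} TB/day")
```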

          It's not like the old SSDs that did little or no wear levelling, where writing a single block repeatedly could kill that block and you'd need the filesystem to work around that.

          It's also worth noting that the death is not guaranteed, it's probabilistic for each cell. Over a RAID-1 pair, it's actually quite unlikely that they'll die at exactly the same time, because that requires them to run out of spare cells to remap at the same time, which relies on a bunch of quantum effects happening at exactly the same rate for both drives. That can happen, but it's not nearly as likely as you suggest. There's also often some (deliberate) nondeterminism in the remapping, so the exact pattern of writes won't actually be the same, even in a RAID-1 pair.

          --
          sudo mod me up
          • (Score: 2) by edIII on Friday September 15 2017, @07:37PM (1 child)

            by edIII (791) on Friday September 15 2017, @07:37PM (#568654)

            I like your points, but we are not talking 1TB. The costs are still way too high. Try 64GB (most common in production), 128GB, or maybe 256GB (although I don't know of a single one in production). There were a huge number of writes going on, with some logging left on verbose from development. The failure wasn't over just a couple of months, and my worry about malicious use was perhaps a bit exaggerated. I experienced failure within 18 months, but the drive had been in production for maybe a year before that. Since I wasn't the sysadmin that put any of them together, it never occurred to me to worry about the SSD and how many writes were occurring. I just said thank you and moved on to provisioning it further for services :)

            Working on some 1TB NVMe drives right now. You're correct, I'm less worried about those. Even more so since they are RAID-1. I did not know it was probabilistic. Thanks for pointing that out.

            It's also worth noting that the death is not guaranteed, it's probabilistic for each cell. Over a RAID-1 pair, it's actually quite unlikely that they'll die at exactly the same time.........

            Yeah, well like I said, the MTBF lottery winner right here :) Six enterprise expensive-ass SAS drives all failed simultaneously within 2.5 years of being put into production. Every. Single. Drive. Major surface damage according to Drive Savers. So... after that little experience I tend to view MTBF a bit more cynically.

            Thank you for your post. I do actually feel better about it.

            --
            Technically, lunchtime is at any moment. It's just a wave function.
            • (Score: 2) by TheRaven on Monday September 18 2017, @09:55AM

              by TheRaven (270) on Monday September 18 2017, @09:55AM (#569681) Journal

              I like your points, but we are not talking 1TB. The costs are still way too high. Try 64GB (most common in production), 128GB, or maybe 256GB (although I don't know of a single one in production).

              My 4-year-old laptop has a 1TB SSD and most of our build machines typically have 512GB SSDs that are used with ZFS as log and cache devices for RAID-1 disks (stuff rarely needs reading from the disks, because the SSDs are large enough for the working set). 64GB is a really odd place for the cost-benefit calculation to win. I'm not even sure where you'd buy them anymore. A quick look shows 128GB SSDs costing around £50, with 256GB costing around 50% more, 512GB around double that, and 1TB around 60% more than that, so 1TB comes pretty close to the sweet spot. That said, you don't buy SSDs at all if capacity is your bottleneck, you buy them if IOPS is your bottleneck and in that case the 1TB drives are very cheap in comparison to anything else on the market (and NVMe is even cheaper).
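
              Working that out as price per gigabyte from those rough figures (illustrative only; prices obviously move around), the trend behind the sweet-spot claim looks like this:

```python
# Price-per-GB check using the rough figures quoted above (illustrative only;
# prices obviously move around). It shows why 1TB sits near the sweet spot.
prices_gbp = {
    128:  50,                      # ~£50 for 128GB
    256:  50 * 1.5,                # ~50% more
    512:  50 * 1.5 * 2,            # ~double that
    1024: 50 * 1.5 * 2 * 1.6,      # ~60% more than that
}
for capacity_gb, price in prices_gbp.items():
    print(f"{capacity_gb:5d} GB: £{price:6.2f}  ->  £{price / capacity_gb:.3f}/GB")
```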

              --
              sudo mod me up
  • (Score: 2) by Reziac on Thursday September 14 2017, @02:10AM

    by Reziac (2489) on Thursday September 14 2017, @02:10AM (#567577) Homepage

    I hadn't heard about that particular failure (I never bought IBM HDs, so I didn't pay close attention), but I knew someone who had one fail, opened it up, and found the platter broken in half. Not dropped or shocked -- this was just from normal operation in a desktop case.

    --
    And there is no Alkibiades to come back and save us from ourselves.
  • (Score: 2, Interesting) by nwf on Saturday September 16 2017, @01:23AM

    by nwf (1469) on Saturday September 16 2017, @01:23AM (#568785)

    HP used to make hard drives with glass platters. We never had one fail like that. I took apart dozens of them to sanitize the data, and none had any obvious problems. They seemed quite reliable. These were like 36 GB (maybe less, hard to recall). We just dumped the last batch we had, in fact.