
posted by martyb on Friday October 26 2018, @08:55AM   Printer-friendly
from the you-probably-can't-afford-it dept.

Western Digital has announced a 15 TB hard drive, beating the current crop of 14 TB drives before the release of 16 TB drives by itself or others (Seagate had planned to release a 16 TB drive by the end of 2018). The drive uses shingled magnetic recording (SMR) and is helium-filled:

Western Digital notes that its new 15TB Ultrastar DC HC620 HDD is the industry's highest capacity hard drive, and the company is aiming it at those who want to pack the most storage into as small a space as possible. The Ultrastar DC HC620 uses shingled magnetic recording to increase density, and while Western Digital notes that SMR requires some extra work on the part of the end user, that's worth it when it comes to overall cost per terabyte and total cost of ownership.

[...] Release date is another unknown at this point, too. Western Digital says that it's currently shipping qualification samples to some of its enterprise customers and that the HDD will become widely available later this quarter, but that's as specific as the company got with today's announcement.

Also at The Verge.

Related: Western Digital Announces 12-14 TB Hard Drives and an 8 TB SSD
Seagate's 12 TB HDDs Are in Use, and 16 TB is Planned for 2018
Western Digital Shipping 14 TB Helium-Filled Shingled Magnetic Recording Hard Drives
Toshiba Announces its Own Helium-Filled 12-14 TB Hard Drives, with "Conventional Magnetic Recording"
Seagate Announces a 14 TB Helium-Filled PMR Hard Drive
Seagate Launches 14 TB Hard Drive for Desktop Users


Original Submission

 
  • (Score: 2) by esperto123 on Friday October 26 2018, @11:27AM (5 children)

    by esperto123 (4303) on Friday October 26 2018, @11:27AM (#754057)

Can it be used in a RAID? Didn't Seagate recommend not using SMR drives in RAIDs, since the increased response time during cluster rewrites could cause the RAID to mark the drive as failed?

If so, this would be a bad drive for a data center, wouldn't it?

    • (Score: 2) by drussell on Friday October 26 2018, @12:08PM

      by drussell (2678) on Friday October 26 2018, @12:08PM (#754064) Journal

It would have to be a RAID controller that supports host-managed SMR, since on these models the drive doesn't know how to do the shingling itself. Writing has to be specially assisted by the host device.

    • (Score: 2) by PocketSizeSUn on Saturday October 27 2018, @03:49AM (1 child)

      by PocketSizeSUn (5340) on Saturday October 27 2018, @03:49AM (#754357)

If you put a DM (drive-managed) drive in a RAID 5/6 configuration, it would eventually fail out of the RAID due to a timeout when the drive got too busy to reply.
In a mirror configuration you would possibly be okay, assuming a relatively light workload WRT writes and re-writes.

An HM (host-managed) drive would simply not work unless your RAID controller was HM-aware (I am not aware of any such controller).

      • (Score: 2) by Hyperturtle on Saturday October 27 2018, @12:46PM

        by Hyperturtle (2824) on Saturday October 27 2018, @12:46PM (#754434)

This too. However, the timeouts can sometimes be tweaked, depending on the BIOS or controller BIOS settings, rebuild rate, and tolerances. For typical home workstations and network appliances that provide storage, the likely outcome is eventual failure for these reasons. I also am not aware of anyone taking the time to determine optimal tolerance ranges for such timings on SMR drives in various RAIDs. It'd be a costly and time-consuming initiative.

Some nicely priced disk controllers within consumer reach often have many great features--but do not necessarily have the capability to handle, or even awareness of, SMR characteristics.

        RAID 1 is probably the safest bet.

    • (Score: 2) by Hyperturtle on Saturday October 27 2018, @12:42PM (1 child)

      by Hyperturtle (2824) on Saturday October 27 2018, @12:42PM (#754432)

I would not advise using shingled (SMR) drives in a RAID--well, any RAID besides RAID 1 or RAID 0. Anything more elaborate is not going to perform well, due to the technology that gives SMR drives their large capacity.

Not only that: if you are a traditionalist like me and defragment both tiny and gargantuan (hundreds of gigabytes) files on mechanical drives, RAID or otherwise, shingled drives are not so good at dealing with changing the positions of files.

These drives are best used (at least from the perspective of a small-business/consumer guy like me, who likes enterprise stuff in his home computers if he can swing it) as archival drives: the type you copy a disk image to, or point all your local backups to over USB, or put in the computer case as a drive for data you expect to read only once you've written it.

Think of it as hot storage but not a workhorse. If you make a RAID 1, it's as redundant as any other RAID 1 and has generally the same performance you'd expect out of a RAID 1. If you make a RAID 0, same thing--no redundancy, better speeds, probably not worth it with this type of drive considering the size and SMR.

RAID 10 would probably seem fine at first, but as you filled the drives up... it just won't be as high performing as non-shingled drives. RAID 10s tend to get a lot of abuse after being set up--these drives are not intended for that kind of typical abuse. (Your use case may vary, of course--but I wouldn't recommend it regardless for SMR anything.)

RAID 5--especially 'fake RAID' or server-based striping (like md-created volumes in Linux, or whatever Windows calls its software RAID)--will READ REALLY GREAT and give you the most horrid writes one can imagine once you get to seriously using it. Oh, and the rebuilds: if you dare reset the computer with the write cache enabled and it senses an arbitrary reason to rebuild, expect very long rebuild times.

      RAID 6--I won't even guess. I suspect it'll be redundant and work as designed and perform inadequately due to the SMR.

Any of the other types--RAID 50, RAID 60, and various custom solutions--are probably out of our reach as consumers, but typically those topologies include RAM buffering from a controller for the actual drive manipulation (like special host software directing the use of such drives in a large SAN in someone's data center). From our perspective, if we aren't working at a data center, we'd more likely see these in a server that also leverages caching via Optane memory or drives to give the appearance of smooth write times, along with large system RAM buffers dedicated to reading and writing everything from a plain bunch of disks up to, most likely, RAID 6.

That said--you can hide a lot of poor performance behind Optane and RAM caching--to the extent that it may be worth it, if the writes the drives need to do can happen during dull moments without impacting overall storage performance.

      For a home computer... well. I would think that SMR paired with an optane cache drive, or even a typical SSD or NVMe drive would hide the drawbacks of the drive if used with realistic expectations, but true performance is likely to be found with something not SMR. Using the same caching methods would increase the performance of non-SMR drives beyond that of the same cache on an SMR drive, since the actual writing of the cache to the disk won't take as long in general, but it might not be so important to do that for day-to-day stuff at home or on a home network with typical home network connectivity.

I'm a purist, though. I'd use SMR only for USB cradles to dump files to, since I lack the means to improve their performance to the standards I'd prefer, or for a disk-to-disk backup server/appliance where actual write speeds are not so critical at home. They're bound to disappoint in some way, I think, but if your expectations don't include doing everything a regular drive can, they certainly have their uses and are a value in that regard when purposed in light of the tradeoffs.

      • (Score: 2) by PocketSizeSUn on Saturday October 27 2018, @06:18PM

        by PocketSizeSUn (5340) on Saturday October 27 2018, @06:18PM (#754495)

        The raw write speed of SMR drives (serial writing) is faster than PMR by a slight margin.

Unfortunately, RAID 5/6 and 50/60 all have the same basic problem, which can be glossed over by throwing more cache at it ... but the problem remains: after you add up all the NV-cache needed to make performance acceptable, you would have been better off keeping smaller PMR drives.

RAID 0/1/10 paired with an SMR-aware file system is workable; however, F2FS is capped at 16 TB ...

  • (Score: 4, Informative) by drussell on Friday October 26 2018, @12:05PM (5 children)

    by drussell (2678) on Friday October 26 2018, @12:05PM (#754063) Journal

    Some information from Western Digital on Shingled Magnetic Recording:

    https://www.hgst.com/sites/default/files/resources/WP27-Shingled-Magnetic-Recording-HelioSeal-Technology.pdf [hgst.com]

    • (Score: 2, Interesting) by TheFool on Friday October 26 2018, @02:37PM (4 children)

      by TheFool (7105) on Friday October 26 2018, @02:37PM (#754099)

I was in the industry back when SMR wasn't public knowledge yet and these drives only existed in prototype form. It's kind of amusing to see they've finally given up on exclusively doing drive-managed SMR; they were pretty hell-bent on making that work, because OEMs absolutely hate having extra drivers.

      Because of the shingled format of SMR, all data streams must be organized and written sequentially to the media. While the methods of SMR implementation may differ (see SMR Implementations section below), the data nonetheless must be written to the media sequentially. Consequently, should a particular track need to be modified or re-written, the entire “band” of tracks (zone) must be re-written. Because the modified data is potentially under another “shingle” of data, direct modification is not permitted, unlike traditional CMR drives. In the case of SMR, the entire row of shingles above the modified track needs to be rewritten in the process.

I think this paragraph is enough for most people here to appreciate why it's a really hard problem.

Every write that overlaps an already-written region turns into a read-modify-write. Full stop. So now your firmware is not only shuffling all this data around (and how do you do that efficiently on a small embedded system like an HDD?) but also still servicing the pesky user writes while it's trying to make space for them. At some point the worst case always became "hold up the user for a second or two while we get all this data moved", and yeah, that's not so great.
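The cost asymmetry can be sketched with a toy model (the zone size here is hypothetical; real band sizes vary by model, and drive-managed firmware does far more bookkeeping than this):

```python
ZONE_TRACKS = 256  # hypothetical tracks per shingled zone

def tracks_rewritten(track, write_pointer):
    """Tracks that must be written to modify one track in a shingled zone.

    Appending at the write pointer is an ordinary write; modifying any
    earlier track forces a read-modify-write of every shingle laid down
    on top of it, up to the current write pointer."""
    if track >= write_pointer:
        return 1                      # sequential append: no RMW needed
    return write_pointer - track      # rewrite all overlapping shingles

worst = tracks_rewritten(0, ZONE_TRACKS)            # whole zone rewritten
best = tracks_rewritten(ZONE_TRACKS, ZONE_TRACKS)   # plain append
```

Even this crude model shows why a one-track update near the start of a full zone can cost hundreds of track writes, while pure sequential appends stay cheap.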

      • (Score: 2) by takyon on Friday October 26 2018, @05:34PM (2 children)

        by takyon (881) <{takyon} {at} {soylentnews.org}> on Friday October 26 2018, @05:34PM (#754158) Journal

        HAMR/MAMR should have been ready years before this point so that nobody has to bother with SMR to get an extra terabyte or two. Here's HAMR being promised for 2014/2016 [theregister.co.uk].

        As it stands today, SSDs can easily beat HDDs in capacity. Some company could probably make a 1,000 TB 3.5" SSD prototype by 2020, but good luck creating the HDD equivalent. If the $/TB gap shrinks, it could be a wrap for HDDs. Unfortunately, the cheapest SSDs will be using QLC NAND, but with enough layers and wear leveling it should work for the right use cases.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 1) by TheFool on Friday October 26 2018, @08:34PM (1 child)

          by TheFool (7105) on Friday October 26 2018, @08:34PM (#754216)

HAMR kind of had the opposite problem the helium drives had. They didn't think they could get the He drives to work, so they kept it quiet--but everything went really smoothly in the end. HAMR was something they thought would work, and it works just fine as a prototype, but once you get away from the research side of things and into the real engineering (including process engineering), it's much more difficult to go from working prototype to product.

          SSDs have their own problems. I wouldn't be surprised to see the hypothetical 1000 TB SSD be like the HAMR drive - in theory it works, but in practice it just ends up running up against annoying engineering problems for an overly long time. Perfecting the logical->physical mapping table on a drive that big alone will take years.

          • (Score: 2) by takyon on Friday October 26 2018, @10:41PM

            by takyon (881) <{takyon} {at} {soylentnews.org}> on Friday October 26 2018, @10:41PM (#754283) Journal

            Nobody in the industry is talking about a 1 petabyte SSD AFAIK, it's just an example I've used repeatedly. But consider what's on the table:

            Samsung Announces a 128 TB SSD With QLC NAND [soylentnews.org]

            Toshiba Develops 512 GB and 1 TB Flash Chips Using TSV [soylentnews.org]

            It appears that Toshiba stacked 16x 512 Gb dies into a 1 TB package. Based on this story [theregister.co.uk], Samsung specifically wants to stack 32x 1 Tb dies into a 4 TB package, and then use 32 packages to put a total of 128 TB into an unspecified form factor, probably 2.5" based on what I saw in the comments (someone says you can cram 32 or even 64 packages into a 2.5" form factor, not sure if 64 is possible so I'll assume 32 as a hard limit).

            128 TB is about 1/8 of a petabyte. Let's see how far we can go.

            1 Tb is not state of the art:

            Western Digital Samples 96-Layer 3D QLC NAND with 1.33 Tb Per Die [soylentnews.org]

            So already, we can bring up our SSD size to about 170 terabytes. 128-layer NAND [soylentnews.org] is being developed, which could bring it up to 227 terabytes.

            If 2.5" can comfortably fit 32 packages, what can 3.5" fit? Heights of hard drives vary wildly [wikipedia.org], but going by the length and width alone, the 3.5" has double the area:

            (146 * 101.6) / (100 * 69.85) = 2.12363636364

            If you compare a 19mm height 3.5" drive to a 9.5mm height 2.5" drive, you're looking at a quadrupling of volume. Realistically though, you may be able to get 2-3 times more packages in there. So we end up at 454 TB to 681 TB.

            In conclusion, using technologies that should be available in the near-term, you can get pretty close to stuffing 1 petabyte into a 3.5" form factor. Considering the rapid pace of NAND improvements, I wouldn't be surprised to see a 2 Tb or larger die that would make it even easier to do this.
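The arithmetic chain above, spelled out as a quick sanity check (die counts, package counts, and the 2-3x packing factor are all this comment's assumptions, not announced products):

```python
TB_PER_TBIT = 1 / 8  # terabits to terabytes

def drive_tb(dies_per_pkg, tbit_per_die, packages):
    """Raw capacity of a hypothetical SSD built from stacked NAND packages."""
    return dies_per_pkg * tbit_per_die * TB_PER_TBIT * packages

base   = drive_tb(32, 1.0, 32)               # 1 Tb dies          -> 128 TB
qlc96  = drive_tb(32, 1.33, 32)              # 96-layer 1.33 Tb   -> ~170 TB
qlc128 = drive_tb(32, 1.33 * 128 / 96, 32)   # 128-layer scaling  -> ~227 TB

# 3.5" vs 2.5" footprint, length x width in mm:
area_ratio = (146 * 101.6) / (100 * 69.85)   # ~2.12x the area

# Assume a 3.5" shell fits 2-3x the packages of a 2.5" one:
low, high = qlc128 * 2, qlc128 * 3           # ~454 TB to ~681 TB
```

All of the figures in the comment check out under these assumptions; the big unknowns are the packing factor and whether 32 packages per 2.5" drive is really the ceiling.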

            Let's say you're right and engineering challenges delay the development of a 1 PB drive. Where will hard drives be at?

            Western Digital to Use Microwave Assisted Magnetic Recording to Produce 40 TB HDDs by 2025 [soylentnews.org]

            I could easily see 2 petabyte SSDs by the time we get a 40 TB hard drive, and 10 petabyte SSDs by the time we get a 100 TB hard drive. Assuming WD or Seagate continue to develop the technology to that point. Both companies and Toshiba manufacture SSDs now.

            With all that said, we can't stop there. SSDs aren't perfect, QLC NAND even less so, and 1 petabyte is not the end game. If someone comes up with multi-exabyte personal-sized storage using a holographic medium, it would get used.

            You want to give the NAND industry a real challenge? Tell them to produce 8 bits per cell (1 byte per cell) NAND. Without addressing endurance and retention issues, that may be practically impossible.
            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by PocketSizeSUn on Saturday October 27 2018, @03:41AM

        by PocketSizeSUn (5340) on Saturday October 27 2018, @03:41AM (#754355)

It worked well enough, but WD/HGST were not interested in (or capable of) producing a drive-managed drive, which made Seagate a single-source vendor--and the actual market (cloud vendors) wouldn't rely on a single-vendor solution. Finally some ex-WD people got into the program and pushed the remaining holdouts to host-managed, and the will to continue with a drive-managed solution died off pretty quickly.

The real complexity of dealing with SMR isn't really that terrible; it is just not ideal without file-system support. Each of the device-mapper solutions has its pain points.
There are reports that f2fs can work with HM-SMR.
It is also reasonable to think btrfs may be able to work. The ideal is a log-structured file system (like NILFS2) that can be tweaked to reclaim on zone boundaries.

        On linux we now have a 'dm-zoned' device mapper that allows you to treat a Host Managed drive as a 'normal' drive.

I worked on an FTL-like block remapping scheme for SMR.
The biggest pain point I had with ZDM was trying to make RAID 5 perform something close to acceptable ... but I never got decent performance out of it.
Single-drive performance was decent ... even fast on first fill. But maintaining an FTL-like lookup table with low over-provisioning ratios always (eventually) catches up with you.
Sync-heavy workloads (due to extra metadata syncing) are also performance killers, and a common requirement in cloud configurations.
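That "catches up with you" effect can be illustrated with a toy log-structured remapping layer (a greedy-GC sketch with made-up segment counts, not ZDM's actual algorithm): shrinking the spare area drives up the garbage-collection copying per user write.

```python
import random

def simulate_wa(n_segs=64, seg_size=64, op=0.25, user_writes=50000, seed=1):
    """Toy log-structured remap: logical blocks append to a write frontier;
    greedy GC evacuates the emptiest segment when free segments run low.
    Returns write amplification = physical writes / user writes."""
    rng = random.Random(seed)
    n_logical = int(n_segs * seg_size * (1 - op))  # 'op' held back as spare
    seg_of = {}                                    # logical block -> segment
    segs = [set() for _ in range(n_segs)]          # valid blocks per segment
    fill = [0] * n_segs                            # written slots (valid+stale)
    free = set(range(n_segs))
    cur = free.pop()
    phys_writes = 0

    def place(lb):
        nonlocal cur, phys_writes
        if fill[cur] == seg_size:                  # frontier full: take a free seg
            cur = free.pop()
        if lb in seg_of:                           # invalidate the old copy
            segs[seg_of[lb]].discard(lb)
        segs[cur].add(lb)
        seg_of[lb] = cur
        fill[cur] += 1
        phys_writes += 1

    for _ in range(user_writes):
        place(rng.randrange(n_logical))
        while len(free) < 2:                       # GC: copy out the emptiest victim
            victim = min((s for s in range(n_segs) if s != cur and s not in free),
                         key=lambda s: len(segs[s]))
            for lb in list(segs[victim]):          # relocate still-valid blocks
                place(lb)
            fill[victim] = 0
            free.add(victim)
    return phys_writes / user_writes

wa_roomy = simulate_wa(op=0.30)   # generous spare area
wa_tight = simulate_wa(op=0.10)   # low over-provisioning: more GC copying
```

Under uniform random overwrites, the low-over-provisioning run pays noticeably more background copying per user write, which is the same pressure an SMR remapping table faces as it fills.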

  • (Score: 3, Informative) by richtopia on Friday October 26 2018, @08:07PM (2 children)

    by richtopia (3160) on Friday October 26 2018, @08:07PM (#754206) Homepage Journal

If you aren't tracking HDD technology, both companies are on the verge of a major transition. The drives in the summary are iterative improvements (impressive nonetheless), but in 2019 some interesting products should be available for consumers.

    HAMR has been the goal for all HDD manufacturers for years, and Seagate should be on the cusp of release. The Seagate Blog lists HAMR drives as being shipped to partners for testing and on the market next year: https://blog.seagate.com/craftsman-ship/hamr-next-leap-forward-now/ [seagate.com]

MAMR is positioned by WD as an interim technology that can be deployed more rapidly than HAMR. I'm having a harder time finding updates on MAMR progress, but in 2017 WD had scoped engineering samples for customer testing in 2018: http://innovation.wdc.com/game-changers/why-mamr.html [wdc.com]

    • (Score: 0) by Anonymous Coward on Friday October 26 2018, @09:46PM

      by Anonymous Coward on Friday October 26 2018, @09:46PM (#754251)

Unfortunately, HAMR has been 'on the verge' since at least 2005. This is a 'believe it when they ship' moment.

Flash looks to be more 'on the verge' than HD tech. Pretty much all of the big companies are shifting to it, and the foundries are spinning up left and right. Meanwhile, HD tech continues to consolidate. Prices for 16TB of SSD are currently in the 'yeah right, think again, because I have a special case where I can afford that' territory. I think by the time HD tech is at the same spot, SSD will be nearing it or past it. Current predictions put SSD prices 50% lower this time next year for the same amount of space. That puts 4TB SSDs in the same price ballpark as 4TB HDs.

      It is going to get real interesting in the next year or two.

    • (Score: 2) by takyon on Saturday October 27 2018, @08:36PM

      by takyon (881) <{takyon} {at} {soylentnews.org}> on Saturday October 27 2018, @08:36PM (#754514) Journal

      Initial HAMR/MAMR drives will be around 16-20 TB. Maybe we will see 40 TB by 2025, and 60-100 TB in the long term, when combined with other technologies.

      Call it a 2.85x increase in capacity in about 7 years (from 14 TB to 40 TB). I expect we will see much bigger increases on the NAND side during that time. This could have the effect of narrowing the $/TB gap between HDDs and SSDs.
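For scale, that growth works out to a fairly modest compound rate (a back-of-the-envelope figure, assuming the 2025 date holds):

```python
growth = 40 / 14          # total capacity growth factor, ~2.86x over ~7 years
annual = growth ** (1/7)  # compound annual growth rate, ~1.16 -> ~16%/year
```

Roughly 16% per year for HDD capacity, versus the much steeper layer-count and bits-per-cell curve NAND has been riding.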

      Western Digital provided a slide [anandtech.com] predicting that the $/TB gap between HDDs and SSDs will remain constant (i.e. SSDs will continue to cost about 10x per terabyte or more). This seems disingenuous, or wildly optimistic at best. But it's not too bad since Western Digital, Seagate, and Toshiba all sell SSDs now.

A good selling point for HDDs could be endurance. QLC endurance is not going to impress. But we've heard of ways that NAND endurance could be increased massively [tomshardware.com]. If something like that pans out, it could enable an increase to 8 bits per cell, use of smaller nodes, less overprovisioning, etc.

      If for some reason HAMR/MAMR are massively delayed (beyond current delays), maybe NAND + tape could replace HDDs?

      Finally, if we're looking at 2030 and beyond for non-SSD 100+ TB storage, maybe we could see an optical or holographic technology enter the scene.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]