
posted by martyb on Saturday July 18 2020, @04:58AM   Printer-friendly

Western Digital releases new 18TB, 20TB EAMR drives:

Earlier this month, Western Digital announced retail availability of its Gold 16TB and 18TB CMR drives, as well as an upcoming 20TB Ultrastar SMR drive. These nine-platter disks are the largest individual hard drives widely available today.

Earlier this year, rival drive vendor Seagate promised to deliver 18TB and 20TB drives in 2020, but they have not yet materialized in retail channels.

Seagate's largest drives, like Western Digital's, needed a new technology to overcome the Magnetic Recording Trilemma—but Western Digital's EAMR (Energy Assisted Magnetic Recording) is considerably less exotic than the HAMR (Heat Assisted Magnetic Recording) used by Seagate. That more conservative approach likely helped Western Digital beat its rival to market.

The maximum usable data density on a magnetic recording device is limited by three competing factors. Magnetic coercivity—the strength of magnetic field required to demagnetize a domain—must be high enough to prevent the separately recorded grains from influencing one another and corrupting data. The field strength of the write head must be high enough to overcome the coercivity of the medium. Finally, the size of the field generated by the write head must be small enough so as not to overwrite adjacent areas.

[...] Although Western Digital is continuing its research into MAMR technology, the tech used in this month's new drives—EAMR, or Energy Assisted Magnetic Recording—is considerably less exotic. Rather than alter the magnetic properties of the medium with microwave or laser emissions, EAMR simply stabilizes the write field more rapidly and accurately, by using a bias current on the main pole of the write head as well as the current on the voice coils.

The potential data loss from drive failure grows ever larger...


Original Submission

 
  • (Score: 0) by Anonymous Coward on Saturday July 18 2020, @05:35PM (6 children)

    by Anonymous Coward on Saturday July 18 2020, @05:35PM (#1023427)

    "The potential data loss from drive failure grows ever larger..."

    Buy a pair, and use RAID-1.

    If you also want a backup, buy three: keep two in operation, and periodically swap one out for the third, keeping the removed disk as the backup.
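    On Linux, the mirror-plus-rotating-spare idea could be sketched with mdadm roughly as follows (device names /dev/sdb, /dev/sdc, /dev/sdd are placeholders, and the exact rotation procedure will vary with your setup):

    ```shell
    # Build a two-disk RAID-1 mirror from the first pair of drives
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

    # Periodically rotate the spare in: mark one member failed,
    # remove it (keep it on the shelf as the backup), and add the
    # third disk; the array then resilvers onto the new member.
    mdadm /dev/md0 --fail /dev/sdc
    mdadm /dev/md0 --remove /dev/sdc
    mdadm /dev/md0 --add /dev/sdd
    ```

    This is a command sketch only; it needs root and real block devices, and a disk pulled from a live mirror is a crash-consistent copy, not a substitute for tested backups.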

  • (Score: 1, Funny) by Anonymous Coward on Saturday July 18 2020, @09:50PM (1 child)

    by Anonymous Coward on Saturday July 18 2020, @09:50PM (#1023513)

    It's easier just to wait for drive failure and then replace all your valuable data with one command...

    #cp /internet/*teens*.mkv /home/mydata/pr0n/

  • (Score: 2) by TheRaven on Sunday July 19 2020, @03:04PM (3 children)

    by TheRaven (270) on Sunday July 19 2020, @03:04PM (#1023739) Journal
    The problem is the resilvering time. If a disk dies in a RAID array, you need to fail over to a replacement, copying all of the data onto it. During that time, you are placing additional load on the remaining disks and you have less (RAID-6) or no (RAID-1 or RAID-5) redundancy. Disk speeds are not scaling with capacities, so the bigger disks mean longer with the array in the degraded state.
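    Back-of-the-envelope numbers illustrate the point (assuming an 18TB disk and an optimistic 200MB/s sustained rewrite rate; a real resilver under load is slower):

    ```shell
    # Best case: rewriting one 18 TB replacement disk at 200 MB/s
    # takes 18e12 / 2e8 = 90000 seconds, about 25 hours, during
    # which the array stays in the degraded state.
    echo "$(( 18000000000000 / 200000000 / 3600 )) hours"
    ```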
    --
    sudo mod me up
    • (Score: 2) by sjames on Monday July 20 2020, @07:38PM (2 children)

      by sjames (2882) on Monday July 20 2020, @07:38PM (#1024228) Journal

      That's one reason I've been looking at BTRFS with multiple disks. For example, given disks A, B, and C, replacing disk B means copying half of its data from A and half from C. More disks spread the read load further.

      Just don't configure RAID 5/6 in btrfs, that leads to tears.
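      A mirrored multi-disk btrfs pool of the kind described can be created, and a failing member replaced in place, along these lines (device names and the device id `2` are placeholders):

      ```shell
      # Three-disk btrfs filesystem, data and metadata both mirrored
      mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc
      mount /dev/sda /mnt

      # Replace device id 2 with a fresh disk; btrfs rebuilds it by
      # reading the surviving copies spread across the other disks.
      btrfs replace start 2 /dev/sdd /mnt
      btrfs replace status /mnt
      ```

      Again a sketch only: it needs root and real devices, and the raid1 profile here is the mirrored mode, not the raid5/6 profiles warned about above.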

      • (Score: 2) by TheRaven on Wednesday July 22 2020, @10:42AM (1 child)

        by TheRaven (270) on Wednesday July 22 2020, @10:42AM (#1024900) Journal

        So btrfs uses block-level mirroring? To handle any single-disk failure, that would sacrifice half of your capacity. If any given block has copies on either (A, B), (B, C) or (A, C), then all blocks take up double the space that a single copy would. In a two-disk setting, this is equivalent to RAID-1. In a three-disk setting, it gives the same level of redundancy as RAID-5 (though closer in spirit to RAID-Z): it can recover from any single disk failing. The cost is significantly higher in terms of storage overhead: RAID-5 or RAID-Z require one disk's worth of extra space to handle a single-disk failure, whereas this approach requires half of the total pool size. The only benefit is reduced resilvering load (though given that most disks can read faster than they can write, I doubt it would make a performance difference: your pool is still degraded until you've written the entire replacement disk). To survive any two disks failing with this scheme, you'd need three copies of every block, spending 2/3 of your total pool capacity on redundancy, whereas with RAID-6 or RAID-Z2 you pay only two disks (so a 6-disk array with this scheme would have 2 disks' usable capacity; RAID-Z2 or RAID-6 would have 4 disks' usable capacity).
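        The capacity arithmetic for the hypothetical six-disk pool works out like this (equal-size disks, ignoring metadata overhead):

        ```shell
        disks=6
        # Three copies of every block: survives any two failures,
        # but only a third of the raw capacity is usable.
        mirror3=$(( disks / 3 ))
        # RAID-6 / RAID-Z2: also survives any two failures, at the
        # fixed cost of two disks' worth of parity.
        raidz2=$(( disks - 2 ))
        echo "3-copy mirror: ${mirror3} disks usable; RAID-Z2: ${raidz2} disks usable"
        ```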

        If this is really what btrfs is doing, I'm very surprised. From what I've seen, it does have a mode that is analogous to ZFS's 'copies=2' mode, but that is not intended to protect you against complete disk failure.

        --
        sudo mod me up
        • (Score: 2) by sjames on Wednesday July 22 2020, @12:13PM

          by sjames (2882) on Wednesday July 22 2020, @12:13PM (#1024913) Journal

          BTRFS is doing the equivalent of RAID 10; there are a great many conventional RAID 10 arrays out there. One additional benefit: in the case that a disk becomes unreliable but not totally dead, its checksumming can determine which copies are still good.

          Also, since it works at the file level, the disks don't need to be the same size or added in pairs.

          Minor math correction: in the 6-disk array it's a capacity of 3 (two copies of everything) vs. a capacity of 4.

          Also, with RAID 6 you can lose 2 disks; if you lose a 3rd, you lose the array.

          But given the way that drive capacity is growing, it's also nice that a btrfs system can grow organically without leaving space unused. So, for example, if you have a pair of 2TB drives, you can later add a single 4TB drive for a total of 3 disks and 4TB of fully redundant storage.
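          A common rule of thumb for btrfs raid1 usable space with mixed disk sizes is the smaller of half the raw total and the total minus the largest disk (an approximation, not the allocator's exact behaviour):

          ```shell
          # Pool from the example: two 2 TB drives plus one 4 TB drive
          a=2; b=2; c=4                 # sizes in TB; c is the largest
          total=$(( a + b + c ))
          half=$(( total / 2 ))         # every block needs two copies
          rest=$(( total - c ))         # the largest disk needs mirror partners
          usable=$(( half < rest ? half : rest ))
          echo "${usable} TB usable"    # 4 TB, matching the example
          ```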

          BTRFS does have higher RAID modes, but based on my testing, I wouldn't go anywhere near such a configuration until the bugs are worked out.