
posted by martyb on Wednesday October 26 2016, @10:23AM   Printer-friendly
from the now-you-CAN-take-it-with-you? dept.

Seagate has launched the world's first 5 TB 2.5" hard disk drives (HDDs). However, they won't fit in most laptops:

The new Seagate BarraCuda 2.5" drives resemble the company's Mobile HDDs introduced earlier this year and use a similar set of technologies: motors with 5400 RPM spindle speed, platters based on [shingled magnetic recording (SMR)] technology with over 1300 Gb/in² areal density, and multi-tier caching. The 3 TB, 4 TB and 5 TB BarraCuda 2.5" HDDs that come with a 15 mm z-height are designed for external storage solutions because virtually no laptop can accommodate drives of that thickness. Meanwhile, the 7 mm z-height drives (500 GB, 1 TB and 2 TB) are aimed at mainstream laptops and SFF desktops that need a lot of storage space.

Seagate has also launched a 2 TB shingled solid-state hybrid drive (SSHD) with 8 GB of NAND cache and a 128 MB DRAM cache buffer. The 1 TB and 500 GB versions also have 8 GB of NAND and 128 MB of DRAM. These are the first hybrid drives to use shingled magnetic recording.

Seagate press release (for "mobile warriors" only).


Original Submission

 
  • (Score: 2) by Unixnut on Wednesday October 26 2016, @01:20PM

    by Unixnut (5779) on Wednesday October 26 2016, @01:20PM (#418959)

    ZFS is not a backup solution. Don't get me wrong, ZFS is awesome. I run it on any storage server I build with FreeBSD. My current SOHO setup is a 12TB array, 2 x (2x3TB raidz2 3.5" drives) zvols, and my previous was 2 x (4x1TB raidz1 2.5" drives). Both have a 128GB SSD-backed read/write cache, so if your data set fits in 128GB you are flying at 300+ MB/s on cheap "consumer" hardware. All my VMs on the server can max out their I/O, and I can still saturate my home gigabit Ethernet without noticing a slowdown. Most impressive.
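
    A layout along those lines can be created in one go. This is only a sketch, not my exact setup, and the device names are made up; the cache and log entries are the SSD read (L2ARC) and write (SLOG) caches:

        # two raidz2 vdevs plus an SSD split between read and write cache
        zpool create tank \
            raidz2 da0 da1 da2 da3 \
            raidz2 da4 da5 da6 da7 \
            cache gpt/ssd-l2arc \
            log gpt/ssd-slog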

    However, none of that will save you if the entire array goes offline. My previous ZFS array was toasted when a power surge blew out 5 of the 8 drives (and the RAID card, and the motherboard). No amount of ZFS block-level redundancy will recover that data, but thankfully I had a backup on a single 6TB 3.5" drive at the time.

    That is why I now always keep backups on a single disk, and I look forward to future 12TB 3.5" disks so I can fit a backup of the entire array onto one. Don't mistake redundant storage systems for backups, lest you end up regretting it when the unthinkable happens.

    Also, I went with same-sized disks, but across 4 brands. For example, my 8 x 2.5" array was 2 x Toshiba, 2 x HGST, 2 x Samsung and 2 x WD, with one of each brand in each raidz1 zvol. That way a bad production batch would not affect more than 1 drive per volume, which the raidz1 can protect against.

    Also, you can't add drives to existing zvols. So if you have one (3x1TB raidz1) zvol, you would have to create another zvol and add it to the zpool. It isn't like Linux software RAID, where you can add drives, or increase drive capacity, and then grow the array. That is one of the limitations of ZFS.
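
    In ZFS terms that means adding a whole new vdev to the pool rather than widening the existing one. A sketch with made-up device names:

        # the existing raidz vdev cannot be widened, but the pool can gain another vdev
        zpool add tank raidz1 da8 da9 da10
        zpool status tank    # the pool now stripes across the old and new vdevs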

    Also, in addition to block-level checksumming, you can instruct ZFS to keep multiple copies of your data. If you set copies to "2", each file will use double the space (because there are two copies) but you get double the redundancy, on top of whatever raidz level you are using.
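
    The knob for that is the per-dataset copies property; a minimal sketch (the dataset name is made up):

        zfs set copies=2 tank/important
        zfs get copies tank/important    # confirm the setting; only newly written blocks get duplicated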

  • (Score: 2) by Scruffy Beard 2 on Wednesday October 26 2016, @03:31PM

    by Scruffy Beard 2 (6030) on Wednesday October 26 2016, @03:31PM (#419020)

    Your comments, while informative, don't invalidate my concept.

    You are simply restating the classic "RAID is not a backup solution".

    The problem with using a single disk as a back-up solution is that hard disks will not always return correct data.* Modern drives are only rated for 10^14 bits read per non-recoverable error [seagate.com]. That works out to about 12.5TB of data read. *(I was off by 3 orders of magnitude in saying that modern drives can't even read their entire capacity without error.)
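
    (Spelling out the arithmetic: one unrecoverable error per 10^14 bits read is 10^14 / 8 = 1.25 x 10^13 bytes, i.e. about 12.5TB, or roughly two and a half full reads of a 5TB drive.)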

    With my scheme, I will have to store about 5 copies of my important data. 3 in the off-site back-up server, and (optionally) 2 copies in my workstation.

    Essentially, I treat the back-up server as a more capable hard-drive for back-up purposes. The beauty of it is that I don't even need to back up that server's encryption keys. If the server experiences a catastrophic failure, all of the data stored there is gone anyway (so the keys would be useless). A second backup server in a third location is an option for the truly paranoid.

  • (Score: 2) by TheRaven on Wednesday October 26 2016, @03:59PM

    by TheRaven (270) on Wednesday October 26 2016, @03:59PM (#419029) Journal

    ZFS is not a backup solution

    That's half true. ZFS on a single machine is not a backup solution. ZFS on a NAS can be a backup solution for your other machines. ZFS on a pair of machines where you use zfs send to send periodic updates to the other can be a backup solution.
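
    A minimal sketch of that send/receive pattern, assuming a dataset called tank/data and an ssh-reachable backup host (all names made up):

        # initial full copy
        zfs snapshot tank/data@2016-10-26
        zfs send tank/data@2016-10-26 | ssh backuphost zfs receive -u backuppool/data

        # afterwards, send only what changed since the previous snapshot
        zfs snapshot tank/data@2016-10-27
        zfs send -i tank/data@2016-10-26 tank/data@2016-10-27 | ssh backuphost zfs receive -u backuppool/data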

    Also, you can't add drives to existing zvols.

    I think you're misusing some terminology here. A zvol is a thing that looks like a block device, but is backed by ZFS rather than being directly backed by some physical storage. You can't expand them beyond their initial logical size, but they do support overcommit, so you can create ones that are larger than the existing physical storage.

    A ZFS storage pool is made of vdevs, which are either individual disks (or files on a disk, or some other block device, such as a GELI device or other GEOM provider on FreeBSD), mirrors, or RAID-Z sets. You can't add new disks to a vdev after you've created it, but you can add new vdevs to a pool. For example, if you have three 2TB disks in a single-redundancy RAID-Z vdev, then you can upgrade your pool by adding a new vdev that contains three 4TB disks in RAID-Z. You can alternatively replace each disk in the RAID-Z vdev with a larger one in turn and resilver the vdev. Once they're all replaced, you can increase the size of the vdev.

    Often the simplest way of migrating is to add a new pool and use zfs send | zfs receive on each filesystem in turn to move them onto the new disks. You can do this with snapshots, so you copy most things first, then unmount, copy the recent changes, and remount, so you only get a little bit of downtime.
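
    The snapshot-based migration looks roughly like this (pool and dataset names made up):

        # copy the bulk of the data from a snapshot while the filesystem stays in use
        zfs snapshot oldpool/data@move1
        zfs send oldpool/data@move1 | zfs receive newpool/data

        # short downtime window: quiesce, then send only the recent changes
        zfs unmount oldpool/data
        zfs snapshot oldpool/data@move2
        zfs send -i @move1 oldpool/data@move2 | zfs receive -F newpool/data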

    --
    sudo mod me up
    • (Score: 2) by Unixnut on Wednesday October 26 2016, @05:37PM

      by Unixnut (5779) on Wednesday October 26 2016, @05:37PM (#419067)

      Yes, apologies, I meant vdevs. It's been a long few days for me :-)

      Agree with you otherwise on all your points. One thing though:

      "You can alternatively replace each disk in the RAID-Z vdev with a larger one in turn and resilver the vdev. Once they're all replaced, you can increase the size of the vdev"

      How would you increase the size of the vdev? I was under the impression the above is not possible with ZFS. Would be interesting to have a read up on it.

      • (Score: 2) by TheRaven on Wednesday October 26 2016, @06:29PM

        by TheRaven (270) on Wednesday October 26 2016, @06:29PM (#419083) Journal

        How would you increase the size of the vdev? I was under the impression the above is not possible with ZFS. Would be interesting to have a read up on it.

        I've not done this, but I believe it happens automatically. With RAID-Z, the size of the vdev is defined by the size of the smallest disk. If you replace all of the disks with larger ones, then I think that it increases automatically. The caveat with this is that you must replace each disk one at a time and wait for a resilver to occur. You get reduced (i.e. no) redundancy with this while the resilver is running (which can take a couple of days per disk) and you are really hammering the disks while you do it.
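
        Roughly, with made-up device names (and note the vdev only grows automatically if the pool's autoexpand property is on):

            zpool set autoexpand=on tank
            zpool replace tank da1 da8    # swap one disk for a larger one
            zpool status tank             # wait for the resilver to complete
            # repeat for each remaining disk, one at a time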

        When I upgraded my NAS, I decided I wanted to turn on dedup for a lot more stuff, move to lz4 compression and change the hash function that I was using, so it was easier to just pop the old disks in a spare machine, do a fresh install on the new disks, and then zfs send | zfs receive the data over a GigE network cable. It took a couple of hours to get the core stuff there and a couple of days to finish it all off, but that was mostly because it's a comparatively slow machine (I mostly access it over WiFi, so performance isn't really an issue - the WiFi is the bottleneck) and I was deduplicating a bunch of data. Somewhat ironically, after doing the recompression and the deduplication, my space usage was down enough that everything fitted quite comfortably onto the old disks.
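
        For what it's worth, property changes of that sort just get set on the destination before the copy, since zfs receive rewrites the data anyway (pool and dataset names made up):

            # set the new policies on the pool's root dataset so received filesystems inherit them
            zfs set compression=lz4 newpool
            zfs set checksum=sha256 newpool
            zfs set dedup=on newpool
            zfs send oldpool/media@move | zfs receive newpool/media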

        --
        sudo mod me up