posted by martyb on Wednesday October 26 2016, @10:23AM   Printer-friendly
from the now-you-CAN-take-it-with-you? dept.

Seagate has launched the world's first 5 TB 2.5" hard disk drives (HDDs). However, they won't fit in most laptops:

The new Seagate BarraCuda 2.5" drives resemble the company's Mobile HDDs introduced earlier this year and use a similar set of technologies: motors with 5400 RPM spindle speed, platters based on [shingled magnetic recording (SMR)] technology with over 1300 Gb/in2 areal density, and multi-tier caching. The 3 TB, 4 TB and 5 TB BarraCuda 2.5" HDDs that come with a 15 mm z-height are designed for external storage solutions because virtually no laptop can accommodate drives of that thickness. Meanwhile, the 7 mm z-height drives (500 GB, 1 TB and 2 TB) are aimed at mainstream laptops and SFF desktops that need a lot of storage space.

Seagate has also launched a 2 TB shingled solid-state hybrid drive (SSHD) with 8 GB of NAND cache and a 128 MB DRAM cache buffer. The 1 TB and 500 GB versions also have 8 GB of NAND and 128 MB of DRAM. These are the first hybrid drives to use shingled magnetic recording.

Seagate press release (for "mobile warriors" only).


Original Submission

  • (Score: 3, Interesting) by RedBear on Wednesday October 26 2016, @11:00AM

    by RedBear (1734) on Wednesday October 26 2016, @11:00AM (#418921)

    Seems like they've hit a pretty hard barrier in data storage density for laptop drives. I used to be able to significantly upgrade my notebooks' storage every couple of years, but for at least 3 or 4 years now the largest capacity that will fit in even my 17" MacBook Pro (which handles up to 12.5mm drives) has been just 2TB. I check every few months and sometimes stumble across a 3TB 2.5" drive, but then realize it's 15mm and won't fit. There aren't even 2.5TB drives to bridge the gap. It's like they've just completely hit a wall, at least in the 2.5" form factor.

    It won't be too much longer before there are affordable SSDs in larger capacities than any sub-12.5mm laptop hard drive on the market. Other World Computing has a 2TB SSD now for $650; that was the price of the 1TB SSD until just a few months ago, then it suddenly dropped by half to $330. I'd expect the 2TB to follow suit within a couple of years.

    I wonder if regular laptop spinning drives are really at a dead end at this point. Not that that's a bad thing, necessarily.

    --
    ¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
    ... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ
    • (Score: 2, Interesting) by WillR on Wednesday October 26 2016, @02:25PM

      by WillR (2012) on Wednesday October 26 2016, @02:25PM (#418992)
      My totally unscientific guess is that it's a combination of a technical barrier and market research saying that most people willing to spend money on better storage want SSD speed, so an investment in pushing spinning-drive density forward wouldn't turn a profit.
    • (Score: 1) by Francis on Wednesday October 26 2016, @02:40PM

      by Francis (5544) on Wednesday October 26 2016, @02:40PM (#418997)

      I'm not sure that it's practical to have SSDs that size. People on laptops are probably not working with that amount of data, and people who are working with that kind of data tend to get pissy when the disk goes bad, taking all the data with it. Not to mention that they'll usually be using a big tower and can have RAID arrays and the like to handle both capacity and redundancy.

      So, their market is most likely people with huge amounts of money and the need for faster data access. I'm guessing that they'll eventually get larger, but I wouldn't expect them to get to multiple terabytes any time soon, as the market isn't really there and might never get there. There's not much point in larger capacity if the largest things people generally store are hi-def videos, and that's not likely to get to 4K for typical users.

      • (Score: 2) by bob_super on Wednesday October 26 2016, @04:54PM

        by bob_super (1357) on Wednesday October 26 2016, @04:54PM (#419041)

        I've got a 640k hard drive to sell to you.

        • (Score: 1) by Francis on Thursday October 27 2016, @12:25AM

          by Francis (5544) on Thursday October 27 2016, @12:25AM (#419217)

          Ah, this chestnut again. I take it you're not aware that nobody ever said that at the time. It refers to the break-up of the first 1MB of address space into 640KB of low memory, with the rest reserved for video memory, the BIOS and the like. They did it at the time because they lacked the address space for all of it. And yes, at the time, 640KB of low memory was enough for anybody; it just required some hackery at times to get things loaded into high and extended memory by the end of the DOS era.

          In this case, if you're using a laptop and need more than 1.2TB of space, you're very much in the minority, and you're certainly going to want to back that stuff up even more regularly than usual. More likely, if you need that much data you're going to be using a desktop and have options for RAID in place.

          I suppose there's somebody out there who needs more than that space in a laptop, but it's crazy to suggest that there's a market for that at this time. Eventually, I'm sure, people will need that, but that's not going to be for years. Even a large game is less than 25GB in most cases, and games tend to destroy laptops with their excessive heat anyway.

          • (Score: 2) by bob_super on Thursday October 27 2016, @12:45AM

            by bob_super (1357) on Thursday October 27 2016, @12:45AM (#419225)

              While my primary desktop has been happy with 300GB for quite some time, my new pro laptop has a 4K screen and all the oomph to do some 4K video or 3D editing. Those files take an enormous amount of space. How many people really need that much storage? Few. How many people will buy the biggest anyway, just in case they want to back up all their useless "this is my appetizer, in 4K because my phone can do it" videos? More than should...

            • (Score: 1) by Francis on Thursday October 27 2016, @01:18AM

              by Francis (5544) on Thursday October 27 2016, @01:18AM (#419233)

              In other words, you didn't actually read my post and would rather post some nonsense.

              • (Score: 2) by bob_super on Thursday October 27 2016, @04:18PM

                by bob_super (1357) on Thursday October 27 2016, @04:18PM (#419458)

                Or you have no clue about the growth of storage requirements and the hoarding people do.
                My first 1G drive seemed huge, but barely held a CD's worth of data.
                My first 40G drive was enormous, but only fit so many games after a while, and not too many HD videos.
                My current 300G drives are gigantic, but it turns out you need more than 10% of that for each AAA game...

                So, is there a market for a 2TB drive? Sure, because that will seem only so-so in 3 to 5 years when 4k videos and 250G games are the norm.

                Back to my original comment: 640GB ought to be enough for anyone... right?

          • (Score: 3, Informative) by Geotti on Thursday October 27 2016, @04:26AM

            by Geotti (1146) on Thursday October 27 2016, @04:26AM (#419280) Journal

            Ok, so just the Vienna Symphonic Library [vsl.co.at] alone weighs over 960GB. Komplete [native-instruments.com] requires another 155 gigs; add a few sample archives and other instruments and plugins and you're at well over 1.5TB just for your production rig. Then, of course, you need space for recordings, conversions, your files, etc. And this is just music.
            What if you do music, video, DTP and 3D (which is not that uncommon)? What if you also need Matlab & Co., several IDEs and a few virtual machines?
            If I could fit all of that on a laptop, I'd be one happy camper... Unfortunately, Apple fucked me over by eliminating the optical bay and any ability to add a second drive, so my next laptop will probably be a hackbook pro that does have space for a second (and maybe a third and fourth) "drive".

            Oh and I know enough people that have a multi-terabyte video and/or music collection, be it for on-stage or home purposes.

    • (Score: 2) by TheRaven on Wednesday October 26 2016, @03:45PM

      by TheRaven (270) on Wednesday October 26 2016, @03:45PM (#419026) Journal
      It's not made sense to put spinning rust into a laptop for ages. Laptops are moved around a lot, and relying on a head held a few atoms above a spinning platter seems a terrible idea. My current one is now three years old and has a 1TB SSD. It replaced one two years older with a 256GB SSD. The SSD in the older one had a fairly hefty price premium; in the newer one it was a fairly small fraction of the total.

      Spinning rust only survives at the very low end of the laptop market, and with the volumes of tablets it's hard even there - with 256GB of flash in a high-end tablet, the economies of scale are pushing flash prices down. The big problem for hard disks at the low end is that they come with a lot of fixed costs. SSD controllers are a tiny fraction of the total cost, and you can scale the cost of an SSD down almost linearly with capacity. A hard drive of half the capacity is only going to be half the cost until you get to the point where the mechanical parts dominate the total cost.
      --
      sudo mod me up
    • (Score: 2) by PocketSizeSUn on Wednesday October 26 2016, @06:40PM

      by PocketSizeSUn (5340) on Wednesday October 26 2016, @06:40PM (#419086)

      It's physics. We cannot write at any higher density without worse compromises. The remaining options:
        - Helium filled (gets the read/write heads closer to the media and more stable)
        - Heat assisted (probably requires a low duty cycle or a seriously problematic heat-sink for laptops): https://en.wikipedia.org/wiki/Heat-assisted_magnetic_recording [wikipedia.org]
        - Bit patterned instead of film (isolates the Fe into pre-positioned bits that can be written/read individually): https://en.wikipedia.org/wiki/Patterned_media [wikipedia.org]

      So helium may get your 2TB to 2.4TB.
      The 10-20% extra cost for 0.4TB isn't going to entice many sales.
      Adding SMR may get you to 3TB, but the dramatic drop in write performance (treating it as a conventional drive) means a large amount of effort has to go into re-architecting the storage subsystem and/or switching to log-structured file systems, which all have different tradeoffs that nobody really seems happy with.
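
      As a rough sanity check on the article's 1300 Gb/in² figure (the platter dimensions below are my own back-of-the-envelope assumptions, not from the article):

        1300 Gbit/in² ÷ 8 ≈ 162.5 GB/in² per surface
        usable annulus on a 2.5" platter, outer radius ~1.2", inner ~0.4": π × (1.2² - 0.4²) ≈ 4.0 in² per side, ~8 in² per platter
        8 in² × 162.5 GB/in² ≈ 1.3 TB per platter (as an upper bound, at peak density)
        so four or five platters in a 15mm case lands right around the 5TB mark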

      For SSDs in laptops the density limit is probably in the 16-32TB range today; the only limiting factors are really cost and the fabs to make the flash.

      • (Score: 2) by takyon on Wednesday October 26 2016, @07:05PM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday October 26 2016, @07:05PM (#419096) Journal

        - Heat assisted (probably requires a low duty cycle or a seriously problematic heat-sink for laptops): https://en.wikipedia.org/wiki/Heat-assisted_magnetic_recording [wikipedia.org]

        I haven't seen evidence that HAMR will create an extremely hot drive that will require some problematic heat-sink in laptops. We are talking about high temperatures focused by a laser on nanoscale areas on the disk platter. If anything kills HAMR, it will be the economics required to switch from PMR, while competition from SSDs "heats up".

        Some SSDs run 10x hotter than other SSDs for whatever reasons.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 3, Insightful) by Unixnut on Wednesday October 26 2016, @11:02AM

    by Unixnut (5779) on Wednesday October 26 2016, @11:02AM (#418922)

    While a 5TB 2.5" drive is impressive, it feels a bit gimmicky.

    As they mention, at 15mm height it is too big to fit in existing 2.5" slots (although it would fit in my old 486DX4 laptop, as that takes a thick 2.5" drive as well), so you cannot upgrade your existing disk with this one. Also, its write speed is slow compared to modern drives, so that's a step back too.

    What it does do is pack a lot of data per platter, which makes it ideal for backups/archiving. For that I don't mind big disks: I have an array of disks for day-to-day work, and two 6TB 3.5" drives that get backed up to and then sent off site.

    If they could fit, say, 10TB in a 3.5" disk, that would be excellent for needs such as mine (which I don't think are that obscure). You have your SSD/disk-based performance array, and then you back everything up to a single 3.5" disk, which you then store off site. You don't care about write performance because it happens in the background; you really only need capacity and read performance.

    What I don't like is having to switch disks multiple times in a backup session because I can only fit part of the array per disk. So the bigger the single drive they can provide, the better. Hell, they can bring back full-height 8" drive dimensions for all it matters to me; the storage costs are the same either way.

    For the small/mobile market, SSDs are already eating hard disks' lunch (especially with remote storage, "Cloud" and everything else). They might as well concentrate on big back-end storage and archival systems, where the drive being 2.5" isn't really relevant.

    • (Score: 1) by pTamok on Wednesday October 26 2016, @12:00PM

      by pTamok (3042) on Wednesday October 26 2016, @12:00PM (#418937)

      I agree, and thinking about how I'd like storage to behave in my laptop, I've realised I want two types of internal storage:

      1) Ephemeral storage, for things that get overwritten on a frequent basis. So storage for a Swap partition, and /var. I/O performance needs to be very good, and the medium needs to be able to cope with frequent overwrites.
      2) Medium-term storage, for things that don't get overwritten that often - documents, audio files, video files. I/O performance needs to be good, but the medium only needs to be able to cope with a reasonable number of overwrites.

      External to the laptop, I want a third type of storage:

      3) Long-term storage for backups. I/O performance needs to be good enough to allow backups and restores in reasonable time - so good for serial writing, but not so good for random seeking of data (sounds a bit like a tape). A small number of overwrites possible. Data longevity of crucial importance - I should be able to leave it and come back in 30 years and still be able to read it.

      I don't see a good match with current technology.

      Type (1) could be battery-backed RAM, which gets dumped to type (2) storage on controlled shutdown. It is battery-backed to allow for uncontrolled shutdowns. 3D XPoint memory might be ideal.
      Type (2) could be SSD.
      Type (3) could be a hard disk - but I'm not sure of the data longevity side of things. HDDs need to be spun up every so often, there is no guarantee the drive electronics will work, and I'm not sure how long the magnetic domains on the highest-density disks are good for. I have 40-year-old cheap paperback books that are easily readable, and that is low tech. There does not appear to be a high-tech equivalent that 'just works' at any reasonable storage density. There are one or two companies offering archival-quality Blu-ray data disks, but the disks don't hold that much data compared to modern SSDs and HDDs.

      • (Score: 2) by Scruffy Beard 2 on Wednesday October 26 2016, @12:56PM

        by Scruffy Beard 2 (6030) on Wednesday October 26 2016, @12:56PM (#418951)

        I may write up my triple-redundant backup server if I ever get it working.

        It would use ZFS, and as a result require 4GB of ECC memory.

        The triple redundancy allows you to recover from a hard-disk failure without losing redundancy. ZFS uses the magic of cryptographic hashing to figure out which versions of your data are good. It may be smart to use 3 different brands of the same capacity if you can find them (triple-redundant storage will match the smallest drive, assuming 3 drives). You can upgrade to larger drives by adding a 4th one in, one at a time.

        I am currently planning to use a laptop drive in an enclosure to transfer the serialized data over (possibly in encrypted form using public key encryption -- a dedicated CPU on the back-up server makes that possible). I also have some DVD-RAM disks I bought to do incremental back-ups but never used (the idea being to save DVD+R disks for 4.7GB increments -- it failed in testing, possibly due to excessive buffer under-runs).
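
        For what it's worth, a minimal sketch of the pool and transfer side of this idea; the pool names, device names, GPG key and host are all placeholders, not the actual setup:

          # three-way mirror: each of the 3 drives holds a full copy (triple redundancy)
          zpool create backup mirror /dev/ada1 /dev/ada2 /dev/ada3

          # periodic scrub lets the checksums find and repair bad copies
          zpool scrub backup

          # ship a serialized snapshot over, encrypted to the server's public key
          zfs snapshot tank/data@weekly
          zfs send tank/data@weekly | gpg --encrypt -r backup-key | ssh backup-host "cat > /backup/data.zfs.gpg"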

        • (Score: 2) by Unixnut on Wednesday October 26 2016, @01:20PM

          by Unixnut (5779) on Wednesday October 26 2016, @01:20PM (#418959)

          ZFS is not a backup solution. Don't get me wrong, ZFS is awesome. I run it on every storage server I build with FreeBSD. My current SOHO setup is a 12TB array of 2x (4x3TB raidz2 3.5" drives) zvols, and my previous was 2x (4x1TB raidz1 2.5" drives). Both with 128GB of SSD-backed read/write cache, so if your data set fits in 128GB, you are flying at 300+ MB/s on cheap "consumer" hardware. All my VMs on the server can max out on IO, and I can still max out my home gigabit Ethernet with I/O and not notice a slowdown. Most impressive.

          However none of that will save you if the entire array goes offline. My previous ZFS array was toasted when a power surge blew out 5 out of the 8 drives (and the raid card, and the motherboard). No amount of ZFS block level redundancy will recover that data, but thankfully I had the backup on the single 6TB 3.5 at the time.

          That is why I always have backups now on a single disk, and I look forward to future 12TB 3.5 disks so I can fit a backup of the entire array onto it. Don't mistake redundant storage systems for backups, lest you end up regretting it when the unthinkable happens.

          Also, I went with the same size disks, but across 4 brands. So for example, my 8 x 2.5 array was 2 x Toshiba, 2 x HGST, 2x Samsung and 2 x WD, one of each brand in each raidz1 zvol. That way if there was a bad production batch, it would not affect more than 1 drive per volume, which the raidz1 would protect against.

          Also, you can't add drives to existing zvols. So if you have one (3x1TB raidz1) zvol, you would have to create another zvol and add it to the zpool. It isn't like Linux software raid, where you can add drives, or increase drive capacity, and then grow the array. One of the limitations of ZFS.

          Also, in addition to block-level hash checksumming, you can instruct ZFS to keep multiple copies of your data. So if you set it to "2", each file will use double the space (because there are two copies) but give double the redundancy, on top of whatever raidz level you are using.
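
          For reference, the SSD caching and extra-copies settings described above look roughly like this (pool, dataset and device names invented):

            # SSDs as L2ARC read cache and as a separate log device for writes
            zpool add tank cache /dev/ada4
            zpool add tank log /dev/ada5

            # store two copies of every block in this dataset, on top of raidz redundancy
            zfs set copies=2 tank/important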

          • (Score: 2) by Scruffy Beard 2 on Wednesday October 26 2016, @03:31PM

            by Scruffy Beard 2 (6030) on Wednesday October 26 2016, @03:31PM (#419020)

            Your comments, while informative, don't invalidate my concept.

            You are simply restating the classic "RAID is not a backup solution".

            The problem with using a single disk as a back-up solution is that hard disks will not always return correct data.* Modern drives are only rated to 10^14 bits read without a non-recoverable error [seagate.com]. That works out to about 12.5TB of data read. *(I was off by 3 orders of magnitude in saying that modern drives can't even read their entire capacity.)
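
            Spelling that rating out:

              10^14 bits ÷ 8 = 1.25 × 10^13 bytes ≈ 12.5 TB
              so a 5TB drive can statistically only be read end-to-end about twice
              before one unrecoverable read error is expected.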

            With my scheme, I will have to store about 5 copies of my important data. 3 in the off-site back-up server, and (optionally) 2 copies in my workstation.

            Essentially, I treat the back-up server as a more capable hard-drive for back-up purposes. The beauty of it is that I don't even need to back up that server's encryption keys. If the server experiences a catastrophic failure, all of the data stored there is gone anyway (so the keys would be useless). A second backup server in a third location is an option for the truly paranoid.

          • (Score: 2) by TheRaven on Wednesday October 26 2016, @03:59PM

            by TheRaven (270) on Wednesday October 26 2016, @03:59PM (#419029) Journal

            ZFS is not a backup solution

            That's half true. ZFS on a single machine is not a backup solution. ZFS on a NAS can be a backup solution for your other machines. ZFS on a pair of machines where you use zfs send to send periodic updates to the other can be a backup solution.

            Also, you can't add drives to existing zvols.

            I think you're misusing some terminology here. A zvol is a thing that looks like a block device, but is backed by ZFS rather than being directly backed by some physical storage. You can't expand them beyond their initial logical size, but they do support overcommit, so you can create ones that are larger than the existing physical storage. A ZFS storage pool is made of vdevs, which are either individual disks (or files on a disk, or some other block device, such as a GELI device or other GEOM provider on FreeBSD), mirror sets, or RAID-Z sets.

            You can't add new disks to a vdev after you've created it, but you can add new vdevs to a pool. For example, if you have three 2TB disks in a single-redundancy RAID-Z vdev, then you can upgrade your pool by adding a new vdev that contains three 4TB disks in a RAID-Z vdev. You can alternatively replace each disk in the RAID-Z vdev with a larger one in turn and resilver the vdev. Once they're all replaced, you can increase the size of the vdev.

            Often the simplest way of migrating is to add a new pool and use zfs send | zfs receive on each filesystem in turn to move them onto the new disks. You can do this with snapshots so you copy most things, then unmount, copy the recent changes, then remount, so you only get a little bit of downtime.
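
            Both upgrade paths, sketched in commands (pool and disk names are made up):

              # path 1: swap disks one at a time and let each resilver finish
              zpool set autoexpand=on tank
              zpool replace tank da1 da4    # repeat for da2, da3 with the larger disks

              # path 2: migrate filesystems to a fresh pool via snapshots
              zfs snapshot -r tank@migrate
              zfs send -R tank@migrate | zfs receive -F newtank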

            --
            sudo mod me up
            • (Score: 2) by Unixnut on Wednesday October 26 2016, @05:37PM

              by Unixnut (5779) on Wednesday October 26 2016, @05:37PM (#419067)

              Yes, apologies, I meant vdevs. It's been a long few days for me :-)

              Agree with you otherwise on all your points. One thing though:

              "You can alternatively replace each disk in the RAID-Z vdev with a larger one in turn and resilver the vdev. Once they're all replaced, you can increase the size of the vdev"

              How would you increase the size of the vdev? I was under the impression the above is not possible with ZFS. Would be interesting to have a read up on it.

              • (Score: 2) by TheRaven on Wednesday October 26 2016, @06:29PM

                by TheRaven (270) on Wednesday October 26 2016, @06:29PM (#419083) Journal

                How would you increase the size of the vdev? I was under the impression the above is not possible with ZFS. Would be interesting to have a read up on it.

                I've not done this, but I believe it happens automatically. With RAID-Z, the size of the vdev is defined by the size of the smallest disk. If you replace all of the disks with larger ones, then I think that it increases automatically. The caveat with this is that you must replace each disk one at a time and wait for a resilver to occur. You get reduced (i.e. no) redundancy with this while the resilver is running (which can take a couple of days per disk) and you are really hammering the disks while you do it.

                When I upgraded my NAS, I decided I wanted to turn on dedup for a lot more stuff, move to lz4 compression and change the hash function that I was using, so it was easier to just pop the old disks in a spare machine, do a fresh install on the new disks, and then zfs send | zfs receive the data over a GigE network cable. It took a couple of hours to get the core stuff there and a couple of days to finish it all off, but that was mostly because it's a comparatively slow machine (I mostly access it over WiFi, so performance isn't really an issue - the WiFi is the bottleneck) and I was deduplicating a bunch of data. Somewhat ironically, after doing the recompression and the deduplication, my space usage was down enough that everything fitted quite comfortably onto the old disks.
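
                The properties in question, for the curious (dataset name invented):

                  zfs set compression=lz4 tank     # cheap and usually a net win
                  zfs set checksum=sha256 tank     # the hash function change
                  zfs set dedup=on tank            # needs plenty of RAM for the dedup table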

                --
                sudo mod me up
      • (Score: 0) by Anonymous Coward on Wednesday October 26 2016, @01:14PM

        by Anonymous Coward on Wednesday October 26 2016, @01:14PM (#418957)

        I have spun up hard drives dating back up to 20 years (including drives with damaged head-seeking mechanisms!). Anything that didn't have a motor failure/seizure and was kept in dryish conditions (a garage with variable humidity in the 20-40 percent range) has survived just fine. While there are certainly issues possible with older hardware, hard disks by and large will survive just fine for decades at a time if sufficiently protected. Being magnetic, you may lose some bits over time, but that is where you are best off including duplicate drives or ECC repair files to fill in the bitrot gaps.
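
        Repair files of that sort can be generated with something like par2, for example (file names are placeholders):

          # create parity data with 10% redundancy alongside the archive
          par2 create -r10 backup.par2 backup-2016.tar

          # years later: check for bitrot, and repair from the parity data if needed
          par2 verify backup.par2
          par2 repair backup.par2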

        That said, if they can get 100GB M-Discs down to a dollar or two apiece I should be set for long-term WORM data storage, and if they can get 1TB ones out I should be set pretty much forever. (Outside of disk images for recovery or nostalgia, nothing I have file-wise exceeds a few gigs. The exceptions will be 4k/8k video feeds, most of which would still fit on a 1TB disc if such a thing is ever made.)

        • (Score: 2) by Unixnut on Wednesday October 26 2016, @01:29PM

          by Unixnut (5779) on Wednesday October 26 2016, @01:29PM (#418962)

          Yes, but old disks had wider tracks, and stronger fields imprinted on them. Look at things like SMR, and you see that they are deliberately packing bits close enough so that they cause interference. That is why they have to rewrite all adjacent tracks as well as the one you are writing to (hence the poor write performance).

          If writing to a track in normal operation causes corruption of the surrounding tracks, I am unsure whether the bits will survive in storage powered down, especially without the firmware running to do an occasional refresh of the data. In my case it is not the end of the world, because the backup drive will not be powered down for longer than a month (at which point we checksum and copy data back to it anyway). For long-term archival, though, I am not sure these disks are suited; time will tell, as people start using/testing the technology in archival situations.

          Funny you mention that. I found my old Quantum Fireball 3.5" drive recently (from one of my desktops when I was a kid). 4GB and noisy as hell, but I powered it up and all the data seems to be on it just fine. Not sure what to do with it; it seems a waste to throw it away or break it for parts, but frankly you can get USB keys with more storage capacity. Feels like it belongs in a museum, lol.

      • (Score: 2) by Unixnut on Wednesday October 26 2016, @01:40PM

        by Unixnut (5779) on Wednesday October 26 2016, @01:40PM (#418965)

        This is how I have it set up:

        1) I use SSDs. My laptop is SSD, as is my desktop. I was worried about the excessive writes, but my current 128GB SSD has been going strong for years. I had one (OCZ) SSD fail on me due to excessive writes, but when it failed, it failed read-only, so I just transferred the data to a new one and carried on.

        The other SSDs I have set up are SanDisk (SanDisk SDSSDP064G). I even use two SanDisks as R/W cache on my ZFS array. Unless you are really swapping like mad, you should not see problems with writes.*

        2) I use my ZFS storage server, a 12TB array with redundancy (see my other reply below the parent for all the ZFS details). I can connect to it via gigabit Ethernet, 150Mbit/s WiFi, or over the public internet (I paid a lot of money for fibre, so I get 100Mbit/s). It is so fast that I forget I am not dealing with local storage when I am at home.

        For the laptop, I use the SSD, or just pull in what I need for the project from my home server. I also have an external 1TB USB drive I carry around as and when needed.

        3) I use external disks and a USB3 caddy. I have a script which automatically detects whether an inserted disk has one of the specific backup labels I marked them with, and the backup then kicks off automatically. It then sends me an email, I pull out the disk, and I take it off site. It has saved my files on two occasions in the last 5 years.
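
        A stripped-down sketch of such a script; the label, mount point, source path and address are all placeholders:

          #!/bin/sh
          # run when a disk appears; back up only if it carries our backup label
          DEV=$(blkid -L OFFSITE_BACKUP) || exit 0
          mount "$DEV" /mnt/backup
          rsync -a --delete /tank/ /mnt/backup/
          umount /mnt/backup
          echo "Backup done, disk ready to go off site" | mail -s "backup OK" me@example.com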

        * I did tune my Linux laptop and desktop for SSDs, though: ext4, TRIM enabled, atime disabled, etc., to reduce the write load.
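
        The fstab side of that tuning looks something like this (the UUID is a placeholder; some prefer a periodic fstrim over the discard mount option):

          # /etc/fstab: SSD root, no atime updates, inline TRIM via discard
          UUID=xxxx-xxxx-xxxx  /  ext4  defaults,noatime,discard  0  1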

        • (Score: 1) by Francis on Wednesday October 26 2016, @02:45PM

          by Francis (5544) on Wednesday October 26 2016, @02:45PM (#419003)

          I wouldn't recommend that unless the disks have tons of empty space and you're not making constant backups. You can easily wear out a regular HDD with ZFS if you allow it to get too full through too much snapshotting.

    • (Score: 2) by bob_super on Wednesday October 26 2016, @05:04PM

      by bob_super (1357) on Wednesday October 26 2016, @05:04PM (#419046)

      There are lots of benefits to the 2.5" size, even if the drive is a bit tall. Think of those companies who have hundreds of drives running in racks or handled by robotic arms. Compared to the 3.5" form factor, it makes a huge difference.

      • (Score: 2) by Unixnut on Wednesday October 26 2016, @05:43PM

        by Unixnut (5779) on Wednesday October 26 2016, @05:43PM (#419070)

        Yes, I know; I have worked with them: stacks of hundreds upon hundreds of drives per storage unit. A lot of places I know (and have worked at) have been moving away from 2.5" to 3.5" for storage. Back in the day, the smaller size of the 2.5" drive meant lower seek latency and more spindles per machine, which improved IOPS.

        However, now that SSDs take care of the IOPS-intensive tasks, the increased mechanical failures, the increased heat, the increased power requirements per GB and the increased DC engineer time spent replacing more disks per GB outweigh the benefits 2.5" brought.

        Hence the move to fewer, larger 3.5" disks. I admit I have never seen disk-based robotic arms, but I am sure they exist. That said, arms have been handling tapes for decades, and those are closer in size to 3.5" disks than 2.5", so I think they will be fine.

    • (Score: 2) by darkfeline on Wednesday October 26 2016, @05:19PM

      by darkfeline (1030) on Wednesday October 26 2016, @05:19PM (#419053) Homepage

      I think 10TB is too much storage for one disk. Simply reading all of it (e.g., for backup) is going to take days, and guarantee read errors due to the sheer size. Much better to stick with smaller disks and RAID 0 them. Of course, SSDs have the potential to go bigger.

      --
      Join the SDF Public Access UNIX System today!
      • (Score: 2) by Unixnut on Wednesday October 26 2016, @05:52PM

        by Unixnut (5779) on Wednesday October 26 2016, @05:52PM (#419074)

        I think 10TB is too much storage for one disk. Simply reading all of it (e.g., for backup) is going to take days,

        As long as it takes about the same time as a tape drive, people will be OK with it. In these situations streaming read performance is all that matters, and I don't see why that would be any worse than on a smaller drive. As per the article, writes are slow, but reads are as fast as on any other disk.

        "and guarantee read errors due to the sheer size."

        Possibly, which is what I alluded to in another reply. However my data will be refreshed every month, and I am sure the drive will be able to store data for a month at least. We have yet to see how it will perform in an archival situation.

        " Much better to stick with smaller disks and RAID 0 them. Of course, SSDs have the potential to go bigger."

        RAID0, seriously? So with 2 disks we halve the MTBF (i.e. double the risk of failure) while leaving the data completely unprotected? Sounds like a crazy idea, and one that gets worse as you add more disks to the RAID0 array. You're safer with one big disk, quite frankly.
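
        To put illustrative numbers on that (assuming, purely for the sake of argument, a 3% chance of any given disk failing in a year):

          1 disk:          3% chance of losing the data
          2 disks, RAID0:  1 - 0.97² ≈ 5.9%
          4 disks, RAID0:  1 - 0.97⁴ ≈ 11.5%, and any single failure takes everything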

        And we will see about SSDs; they are already hitting limits in MLC technology, with bits packed so close together that they cannot be sure writes will not affect nearby bits, plus issues with stray electrons flipping bits. Not to mention that, for the moment, SSDs cannot compete with disks on cost per GB. We shall see what the future holds for both technologies, though.

    • (Score: 2) by PocketSizeSUn on Wednesday October 26 2016, @06:46PM

      by PocketSizeSUn (5340) on Wednesday October 26 2016, @06:46PM (#419088)

      It's still good for external low-powered portable storage. Powering a 3.5" drive from a laptop's USB port is still a bit problematic (although USB-C should solve that), and 3.5" drives are usually too bulky anyway.

    • (Score: 2) by PocketSizeSUn on Wednesday October 26 2016, @06:52PM

      by PocketSizeSUn (5340) on Wednesday October 26 2016, @06:52PM (#419090)

      10TB 3.5in drives are already available in both conventional (PMR) and SMR configurations.

    • (Score: 2) by takyon on Wednesday October 26 2016, @07:09PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday October 26 2016, @07:09PM (#419098) Journal

      Like PocketSizeSUn mentioned, 10 TB in the 3.5" form factor has been out for some time now. The first models were for the enterprise and datacenters but there may be a consumer version "floating" around (8 TB is a capacity I know consumers can get easily).

      https://www.google.com/?gws_rd=ssl#q=site:soylentnews.org+10+tb [google.com]

      The next capacity to wait for is 12 TB or 16 TB.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
  • (Score: 0) by Anonymous Coward on Wednesday October 26 2016, @03:54PM

    by Anonymous Coward on Wednesday October 26 2016, @03:54PM (#419028)

    Why do they claim this is the 2.5" form factor when the drives are 15mm tall? It may be literally true that they are 2.5" wide, but clearly when people talk about drive size they are referring to the overall form factor. I was excited, thinking this would be a new drop-in replacement for laptops, when it clearly is not. I could easily imagine somebody else making the same mistake and buying one.

    Using their same logic, couldn't one easily create a 200TB drive in the 2.5" form factor (ignoring that it is 1 meter tall and 2 meters long)?

    • (Score: 4, Informative) by fishybell on Wednesday October 26 2016, @05:29PM

      by fishybell (3156) on Wednesday October 26 2016, @05:29PM (#419062)

      There are really two 2.5" standards: the laptop size (including the thin laptop size) and the server size. These are clearly the latter.

    • (Score: 0) by Anonymous Coward on Wednesday November 02 2016, @08:53PM

      by Anonymous Coward on Wednesday November 02 2016, @08:53PM (#421827)

      Old 2.5" drives actually were that thick. It isn't hard drive makers trying to redefine the 2.5" standard, but rather make use of an already existing less common variant. They started to get thinner when manufacturers started making thinner laptops and needed drives for them. It is still substantially smaller than a 3.5" drive, and there is a use for them.

  • (Score: 1) by Didz on Friday October 28 2016, @10:27PM

    by Didz (1336) Subscriber Badge on Friday October 28 2016, @10:27PM (#419978) Homepage

    I have a Toshiba Satellite Pro 460CDT from the mid 90s that came with a 2GB hard drive. It is much thicker than the spare 40GB drive I put in it, let alone the ones out today. You could just about fit 2 of today's thin drives in the slot.

    Shame the BIOS can only use the first 8GB of it.