
posted by Fnord666 on Wednesday September 13 2017, @06:18PM
from the defrag-with-windex dept.

Using a glass substrate instead of aluminum could allow 12 platters to be crammed into a 3.5" hard disk drive enclosure:

Even if many modern systems eschew classic hard drive storage designs in favor of solid state alternatives, there are still a number of companies working on improving the technology. One of those is Hoya, which is currently prototyping glass substrates for the hard drive platters of the future, potentially enabling the production of drives with as much as 20TB of storage space.

Hard drive platters are traditionally produced using aluminum substrates. While these substrates have enabled many modern advances in hard drive technology, glass substrates can be made with similar densities but can be much thinner, leading to higher capacity storage drives. Hoya has already managed to create substrates as thin as 0.381mm, which is close to half the thickness of the platters in existing high-density drives.

In one cited example, an existing 12-terabyte drive from Western Digital was made up of eight platters. Hoya believes that by decreasing the thickness of the platters through its glass technology, it could fit as many as 12 inside a 3.5 inch hard drive casing. That would enable up to 18TB of storage space in a single drive (thanks Nikkei).

When that is blended with a technology known as "shingled magnetic recording," 20TB should be perfectly achievable.
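
To make the scaling explicit, here is a quick back-of-the-envelope (note: the ~11% shingled-recording gain is inferred from the 18TB-to-20TB step above, not a figure Hoya has published):

# Capacity arithmetic from the figures above.
per_platter_tb = 12 / 8                 # WD's 12TB drive spreads data over 8 platters
glass_drive_tb = 12 * per_platter_tb    # 12 thinner glass platters at the same areal density
smr_gain = 20 / glass_drive_tb - 1      # shingled recording closes the gap from 18TB to 20TB
print(glass_drive_tb, f"{smr_gain:.0%}")  # 18.0 11%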

Toshiba is reportedly planning to release a 14 TB helium-filled hard drive by the end of the year.

Also at Network World.


Original Submission

 
  • (Score: 3, Interesting) by Anonymous Coward on Wednesday September 13 2017, @06:38PM (16 children)

    by Anonymous Coward on Wednesday September 13 2017, @06:38PM (#567369)

    I guess officially it was the DeskStar or something like that.

    It had glass platters. It worked great, until suddenly it didn't. Opening a dead drive would reveal clear glass platters and lots of dust. It seems a cascading failure caused the entire surface layer to come free from the platters: a bit of the surface comes free, then more, and before long there is NO coating left at all. It was sudden and dramatic.

    Coatings don't stick well to glass.

    • (Score: 5, Interesting) by edIII on Wednesday September 13 2017, @06:51PM (13 children)

      by edIII (791) on Wednesday September 13 2017, @06:51PM (#567382)

      I now have the same experience with SSDs. Had a server crap out because there were no more writes left in the SSD. Completely exhausted the poor thing, and Linux didn't react well to the sudden read-only nature of its storage. Since most of the servers were provisioned at the same time.....

      Not fun.

      For servers I now prefer spinning disk, at least for the OS, since I no longer consider SSDs an option there. Virtualization can mitigate that with live migration and high availability, but that doesn't apply to embedded and bare metal systems. Looking into memory disk solutions to bootstrap the OS from spinning disk into memory and push data every so often to the spinning disk for backup. When spinning disks are not around anymore, I'll do it from SSD to memory instead. Either that or the buddy system (RAID 1).

      What I won't do is trust that an SSD can last forever in a server, or even as long as a spinning disk. That, and FFS, I've started monitoring the SMART statuses for how much life is left in them. Even Wile E. Coyote and Daffy Duck had more graceful failures than SSDs do.

      We keep hearing about new tech to vastly increase the number of writes, and with NVMe, you need it.

      --
      Technically, lunchtime is at any moment. It's just a wave function.
      • (Score: 3, Interesting) by Anonymous Coward on Wednesday September 13 2017, @11:38PM (1 child)

        by Anonymous Coward on Wednesday September 13 2017, @11:38PM (#567530)

        The good ones go read-only when they are spent, so you can get your data off (AFAIK, this is Intel's policy). Most SSDs are not good...

        The bad ones just lock up. At boot, the BIOS won't even see the drive.

        The really bad ones silently change your data.

        • (Score: 0) by Anonymous Coward on Thursday September 14 2017, @04:21AM

          by Anonymous Coward on Thursday September 14 2017, @04:21AM (#567635)

          The really bad ones silently change your data.

          I see you've heard of OCZ.

      • (Score: 2) by coolgopher on Thursday September 14 2017, @04:09AM (3 children)

        by coolgopher (1157) on Thursday September 14 2017, @04:09AM (#567632)

        With embedded you should be running your OS with a read-only root, and have carefully dimensioned your storage writes to last the intended lifetime of the device. And, you know, fail gracefully when you inevitably run out of writes ahead of time :)

        • (Score: 2) by edIII on Thursday September 14 2017, @05:24AM (2 children)

          by edIII (791) on Thursday September 14 2017, @05:24AM (#567653)

          Yes. Hindsight is 20/20. :)

          I'm doing so few writes to disk now that the lifetime is suitable.

          And, you know, fail gracefully when you inevitably run out of writes ahead of time :)

          How? I can make my code do that, but my impression was that there was a deeper problem in the operating system, plus a little corruption. Graceful failure was handled by redundant devices, but when they go within hours of each other...

          I'm not an expert at the underlying system, so any suggestions are welcome. I also got the impression from the other poster that SSDs can fail in different ways, some of them I can't handle gracefully :)

          --
          Technically, lunchtime is at any moment. It's just a wave function.
          • (Score: 2) by coolgopher on Thursday September 14 2017, @06:13AM (1 child)

            by coolgopher (1157) on Thursday September 14 2017, @06:13AM (#567665)

            Barring the SSD going bonkers on you, if you mount your storage partition with the appropriate options to remount read-only on error, you can set a watch for that happening and raise whatever type of alarm is applicable. In the meanwhile, your other apps will get EROFS or some such when they try to write, and as long as they can handle that sanely, you should at least be able to get a message back to base to say "hey, this unit is just about dead, come fix me!".
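
            A minimal sketch of such a watch, assuming Linux and a filesystem mounted with errors=remount-ro; the mount point and the alert hook are hypothetical placeholders:

            import time

            WATCHED = "/data"  # hypothetical mount point dimensioned for writes

            def is_readonly(mount_point):
                # /proc/mounts fields: device, mount point, fstype, options, ...
                with open("/proc/mounts") as f:
                    for line in f:
                        fields = line.split()
                        if fields[1] == mount_point:
                            return "ro" in fields[3].split(",")
                raise RuntimeError(f"{mount_point} is not mounted")

            def alert(msg):
                print(f"ALARM: {msg}")  # placeholder: syslog, SNMP trap, phone-home, etc.

            while True:
                if is_readonly(WATCHED):
                    alert(f"{WATCHED} has gone read-only; flash is probably spent")
                    break
                time.sleep(60)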

            • (Score: 2) by edIII on Thursday September 14 2017, @08:53PM

              by edIII (791) on Thursday September 14 2017, @08:53PM (#568078)

              Thanks for the suggestions :)

              --
              Technically, lunchtime is at any moment. It's just a wave function.
      • (Score: 2) by TheRaven on Thursday September 14 2017, @07:43AM (6 children)

        by TheRaven (270) on Thursday September 14 2017, @07:43AM (#567680) Journal

        What I won't do is trust that an SSD can last forever in a server, or even as long as a spinning disk.

        It's been about five years since the average SSD lifetime passed the average spinning rust disk lifetime. Hard disks also fail catastrophically, losing all of the data. Get a tiny bit of dust under the head and you can completely destroy all of the data on the disk in a few minutes as the head scrapes across the platter. There are a lot of other failure modes. If you've not encountered these in the wild then you're either responsible for a very small number of machines with hard drives or you're very, very lucky.

        --
        sudo mod me up
        • (Score: 2) by takyon on Thursday September 14 2017, @07:44AM (1 child)

          by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday September 14 2017, @07:44AM (#567681) Journal

          Buy the new helium-filled hard drives. I'd like to see dust try and get in there.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 2) by TheRaven on Thursday September 14 2017, @09:02AM

            by TheRaven (270) on Thursday September 14 2017, @09:02AM (#567702) Journal
            Dust typically doesn't get in; it's more often already there as a result of failures in the clean room process during assembly, and becomes dislodged during operation. This is just as possible with helium-filled drives. Being sufficiently enclosed that dust can't get in is orders of magnitude easier than being helium-tight. You can see how quickly helium leaks out of a balloon by trying to light a helium balloon with a match: the helium escaping through the skin extinguishes the match - don't try this with a hydrogen balloon! No one (except very small children, for whom it is a tragedy) cares that a helium balloon goes flat in a few days, but when a hard drive depends on helium and not air being present for a multi-year operating lifetime, that's a really narrow design tolerance for the enclosure. Narrow design tolerances translate to new and exciting failure modes.
            --
            sudo mod me up
        • (Score: 2) by edIII on Thursday September 14 2017, @08:52PM (3 children)

          by edIII (791) on Thursday September 14 2017, @08:52PM (#568076)

          Come now, you don't need to unduly denigrate me, and I've experienced quite a few other failure modes. SSD is a bit different. As for experience, I've had spinning drives operating 10 years or more before failure, and some enterprise expensive drives win the fucking lottery for MTBF and die early and spectacularly. One recovery engineer once described the surface of the hard disk as Apocalypse Now. So much for enterprise quality.

          Lifetime with SSD is pretty much irrelevant. It's all about disk writes, and that is the problem with SSD. With a hard drive it is only *possible* that it will fail within 5-10 years. With an SSD it is as certain as death and taxes that it will eventually die. In fact, it's much like a human in that there are only so many beats of the heart, only so many breaths.....

          What bothers me about SSD is that all it could take is a malicious program (or just an unthinking sysadmin) eating up writes over a few months and your device lifetime just took a dump. All of my SSDs are now monitored for SMART status and writes left. I use the standard deviation to attempt to predict how much life is left in the drives. Looking forward to new generations of SSD that vastly increase the number of writes possible. At that point, I won't be as worried about creating a database server on one. It's worth noting that even with RAID 1, both of the SSDs suffer from the malicious writes at the same time, and both will die within a short time period together.
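
          A minimal sketch of that kind of prediction, assuming smartmontools is installed; the device path is a placeholder and the wear attribute name varies by vendor (the one below is illustrative only):

          import subprocess
          import time

          DEVICE = "/dev/sda"                    # hypothetical device
          WEAR_ATTR = "Percent_Lifetime_Remain"  # illustrative; check your vendor's name

          def read_wear(device):
              # Column 4 of `smartctl -A` output is the normalized attribute value.
              out = subprocess.run(["smartctl", "-A", device],
                                   capture_output=True, text=True, check=True).stdout
              for line in out.splitlines():
                  if WEAR_ATTR in line:
                      return int(line.split()[3])
              raise RuntimeError(f"{WEAR_ATTR} not reported by {device}")

          # Two samples a day apart give a crude linear projection of remaining life.
          w0, t0 = read_wear(DEVICE), time.time()
          time.sleep(24 * 3600)
          w1, t1 = read_wear(DEVICE), time.time()
          rate = (w0 - w1) / (t1 - t0)           # wear consumed per second
          if rate > 0:
              print(f"~{w1 / rate / 86400:.0f} days left at the current write rate")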

          --
          Technically, lunchtime is at any moment. It's just a wave function.
          • (Score: 2) by TheRaven on Friday September 15 2017, @09:17AM (2 children)

            by TheRaven (270) on Friday September 15 2017, @09:17AM (#568345) Journal

            What bothers me about SSD is that all it could take is a malicious program (or just an unthinking sysadmin) eating up writes over a few months and your device lifetime just took a dump

            That's harder to pull off unnoticed than it sounds. Consider a device that has 1,000 rewrite cycles per cell (pretty low for modern devices). You have 1TB of space. If you assume perfect wear levelling (for a minute) then that gives you 1,000TB of writes. If you can do 300MB/s of writes, then it takes roughly six weeks of sustained writes at the drive's maximum write speed to wear out the cells. In practice, you can't even manage that, because once you've written 1TB (even deleting and using TRIM) the garbage collector will be running slower than the writes and this will cause back pressure on the interface, stretching things out well beyond the naive figure. If your device is 2TB, then the lifetime with the same number of rewrites per cell doubles.

            Now, the assumption that wear levelling is perfect is wrong, but modern controllers give about 80-90% of the equivalent, so we're still talking a month or so of flat-out sustained writes before the cells wear out, and the lifetime scales almost linearly with the capacity - double the capacity and it will take twice as long to write each cell.
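
            The back-of-the-envelope, using the figures above:

            CAPACITY_TB = 1.0    # drive capacity
            PE_CYCLES = 1000     # rewrite cycles per cell (pretty low for modern devices)
            WRITE_MB_S = 300.0   # maximum sustained write speed
            WEAR_EFF = 0.85      # ~80-90% of perfect wear levelling

            total_bytes = CAPACITY_TB * 1e12 * PE_CYCLES * WEAR_EFF
            days = total_bytes / (WRITE_MB_S * 1e6) / 86400
            print(f"{days:.0f} days of flat-out writes")  # ~33 days; doubles for a 2TB drive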

            It's not like the old SSDs that did little or no wear levelling, where writing a single block repeatedly could kill that block and you'd need the filesystem to work around that.

            It's also worth noting that the death is not guaranteed, it's probabilistic for each cell. Over a RAID-1 pair, it's actually quite unlikely that they'll die at exactly the same time, because that requires them to run out of spare cells to remap at the same time, which relies on a bunch of quantum effects happening at exactly the same rate for both drives. That can happen, but it's not nearly as likely as you suggest. There's also often some (deliberate) nondeterminism in the remapping, so the exact pattern of writes won't actually be the same, even in a RAID-1 pair.

            --
            sudo mod me up
            • (Score: 2) by edIII on Friday September 15 2017, @07:37PM (1 child)

              by edIII (791) on Friday September 15 2017, @07:37PM (#568654)

              I like your points, but we are not talking 1TB. The costs are still way too high. Try 64GB (most common in production), 128GB, or maybe 256GB (although I don't know of a single one in production). There were a huge number of writes going on, with some logging left on verbose from development. The failure wasn't over just a couple of months, and my worry about malicious use was perhaps a bit exaggerated. I experienced failure within 18 months, but the drive had been in production for maybe a year before that. Since I wasn't the sysadmin that put any of them together, it never occurred to me to worry about the SSD and how many writes were occurring. I just said thank you and moved on to provisioning it further for services :)

              Working on some 1TB NVMe drives right now. You're correct, I'm less worried about those. Even more so since they are RAID-1. I did not know it was probabilistic. Thanks for pointing that out.

              It's also worth noting that the death is not guaranteed, it's probabilistic for each cell. Over a RAID-1 pair, it's actually quite unlikely that they'll die at exactly the same time.........

              Yeah, well like I said, the MTBF lottery winner right here :) Six enterprise expensive-ass SAS drives all failed simultaneously within 2.5 years of being put into production. Every. Single. Drive. Major surface damage according to Drive Savers. So... after that little experience I tend to view MTBF a bit more cynically.

              Thank you for your post. I do actually feel better about it.

              --
              Technically, lunchtime is at any moment. It's just a wave function.
              • (Score: 2) by TheRaven on Monday September 18 2017, @09:55AM

                by TheRaven (270) on Monday September 18 2017, @09:55AM (#569681) Journal

                I like your points, but we are not talking 1TB. The costs are still way too high. Try 64GB (most common in production), 128GB, or maybe 256GB (although I don't know of a single one in production).

                My 4-year-old laptop has a 1TB SSD and most of our build machines typically have 512GB SSDs that are used with ZFS as log and cache devices for RAID-1 disks (stuff rarely needs reading from the disks, because the SSDs are large enough for the working set). 64GB is a really odd place for the cost-benefit calculation to win. I'm not even sure where you'd buy them anymore. A quick look shows 128GB SSDs costing around £50, with 256GB costing around 50% more, 512GB around double that, and 1TB around 60% more than that, so 1TB comes pretty close to the sweet spot. That said, you don't buy SSDs at all if capacity is your bottleneck, you buy them if IOPS is your bottleneck and in that case the 1TB drives are very cheap in comparison to anything else on the market (and NVMe is even cheaper).

                --
                sudo mod me up
    • (Score: 2) by Reziac on Thursday September 14 2017, @02:10AM

      by Reziac (2489) on Thursday September 14 2017, @02:10AM (#567577) Homepage

      I hadn't heard about that particular failure (I never bought IBM HDs, so I didn't pay close attention), but I knew someone who had one fail, opened it up, and found the platter broken in half. Not dropped or shocked -- this was just from normal operation in a desktop case.

      --
      And there is no Alkibiades to come back and save us from ourselves.
    • (Score: 2, Interesting) by nwf on Saturday September 16 2017, @01:23AM

      by nwf (1469) on Saturday September 16 2017, @01:23AM (#568785)

      HP used to make hard drives with glass platters. We never had one fail like that. I took apart dozens of them to sanitize the data, and none had any obvious problems. They seemed quite reliable. These were like 36 GB (maybe less, hard to recall.) We just dumped the last batch we had, in fact.

  • (Score: 0) by Anonymous Coward on Wednesday September 13 2017, @06:40PM (18 children)

    by Anonymous Coward on Wednesday September 13 2017, @06:40PM (#567374)

    Wouldn't it make hard drives even more brittle therefore more vulnerable to shocks?

    • (Score: 0) by Anonymous Coward on Wednesday September 13 2017, @06:44PM (16 children)

      by Anonymous Coward on Wednesday September 13 2017, @06:44PM (#567379)

      These will never survive mail order from NewEgg.
      Does anyone still buy from them? (Curious)

      • (Score: 2) by takyon on Wednesday September 13 2017, @06:48PM (13 children)

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday September 13 2017, @06:48PM (#567381) Journal

        The article speculates that these would be primarily used by data centers.

        ... so you would have to use neweggbusiness.com instead heheh.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by Reziac on Thursday September 14 2017, @02:17AM (12 children)

          by Reziac (2489) on Thursday September 14 2017, @02:17AM (#567580) Homepage

          Yeah, just like those multi-terabyte HDs are all being used by datacenters right now!

          --
          And there is no Alkibiades to come back and save us from ourselves.
          • (Score: 2) by takyon on Thursday September 14 2017, @02:33AM (11 children)

            by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday September 14 2017, @02:33AM (#567587) Journal

            I don't know what you're getting at, but the primary customers for today's helium-filled 10-12 TB (and soon 14 TB) hard disk drives are data centers. And they are priced accordingly.

            http://www.anandtech.com/show/9955/seagate-unveils-10-tb-heliumfilled-hard-disk-drive [anandtech.com]
            https://www.wdc.com/about-wd/newsroom/press-room/2016-12-06-western-digital-introduces-advanced-devices-to-manage-evolving-data-center-application-demands.html [wdc.com]

            If you are trying to make a point about NAND/SSDs, it's still too early for most to switch to all SSDs. Cheap and dense QLC 3D NAND could help change that, but there is still a place for spinning rust right now.

            --
            [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
            • (Score: 2) by Reziac on Thursday September 14 2017, @02:38AM (4 children)

              by Reziac (2489) on Thursday September 14 2017, @02:38AM (#567591) Homepage

              No; referring to all the multi-TB drives suddenly on the consumer market. I remember when they were all "datacenter drives" too.

              --
              And there is no Alkibiades to come back and save us from ourselves.
              • (Score: 2) by takyon on Thursday September 14 2017, @02:47AM (3 children)

                by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday September 14 2017, @02:47AM (#567598) Journal

                I'll be really impressed when I see a helium-filled consumer HDD. I don't think there has been a helium-filled drive intended (and priced) for consumers to date, but it's entirely possible that we will see that (helium allows you to increase platter density/count).

                AFAIK all of the capacity points above 8 TB are paired with helium.

                With WD, Seagate, and what's left of Toshiba all offering helium-filled drives, one of them should be able to make the jump to consumers. And I think some consumers would buy it.

                --
                [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
                • (Score: 2) by Reziac on Thursday September 14 2017, @04:45AM (2 children)

                  by Reziac (2489) on Thursday September 14 2017, @04:45AM (#567641) Homepage

                  I just saw a 10TB offered ... ah, hell, I don't remember which retailer but one of the consumer outlets. I rather doubt consumer and enterprise are different under the hood, at that level.

                  And my first thought was... considering the *gasp* price of tape libraries, how exactly are we supposed to back up that much data without growing our own server farms??

                  --
                  And there is no Alkibiades to come back and save us from ourselves.
            • (Score: 2) by Azuma Hazuki on Thursday September 14 2017, @03:55AM (5 children)

              by Azuma Hazuki (5086) on Thursday September 14 2017, @03:55AM (#567623) Journal

              I don't trust QLC as far as I can throw it until there are some good, repeatable benchmarks out there for the stuff. Bear in mind that an n-bit-per-cell SSD has 2^n distinct levels each cell has to store.

              --
              I am "that girl" your mother warned you about...
              • (Score: 2) by takyon on Thursday September 14 2017, @04:43AM (3 children)

                by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday September 14 2017, @04:43AM (#567640) Journal

                If it doesn't reach a certain level of endurance, it simply won't be adopted by enterprise customers. They are going to be the first ones to get their hands on QLC NAND, not you. And they have more demanding workloads (full drive writes) than typical consumers. Facebook is particularly thirsty for QLC NAND [tomshardware.com] (alt [techtarget.com], note: "We asked for QLC flash a few years ago, and we ask for it again").

                Toshiba has suggested that QLC could have a similar endurance to TLC [soylentnews.org]. A full 1,000 write cycles instead of the 100 write cycles that had been predicted. That sounds unbelievable, but then again we've heard of labs working on methods to try to get endurance something like a million times better [ieee.org]. Or maybe the endurance problem doesn't scale exponentially like the 2^n states do, and 1,000 write cycle QLC is believable.

                If you meant speed, most consumer PCs don't really need super high transfer speeds beyond 500 MB/s. It was the increased random IOPS that really made the difference for people switching from HDDs to SSDs. Even a crappy SSD should be able to do 2-3 orders of magnitude better than an HDD on random IOPS.

                There are also tricks to make the QLC SSD perform better, especially in consumer or enthusiast drives meant to hit higher speeds/IOPS. For example, QLC could emulate SLC/MLC (using 4-8 of 16 states to pretend to be 1-2 bits per cell). Or you can include SLC/MLC as a cache, DRAM as a cache, Intel 3D XPoint as a cache, etc. You can do extreme overprovisioning to help boost speed and endurance since QLC drives are going to have high capacities.

                When QLC SSDs do hit the consumer market, review sites will be all over them on day 1 if not before. So unless you preordered your SSD, you should at least have an idea of what you are getting.

                --
                [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
                • (Score: 2) by Azuma Hazuki on Thursday September 14 2017, @05:11AM (2 children)

                  by Azuma Hazuki (5086) on Thursday September 14 2017, @05:11AM (#567648) Journal

                  Where *is* 3D XPoint, by the way? I remember them wittering on about how it was going to eat SSDs alive, but all that's out there isn't even bootable. Just some Optane cards that are basically the SSD portion of "hybrid" SSHDs on massive amounts of 'roids.

                  When they can make a 10 TB hunk of 3D XPoint at reasonable (well, reasonable by datacenter standards...) pricing, *then* we'll see some fireworks.

                  --
                  I am "that girl" your mother warned you about...
                  • (Score: 3, Informative) by takyon on Thursday September 14 2017, @05:46AM (1 child)

                    by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday September 14 2017, @05:46AM (#567660) Journal

                    3D XPoint [wikipedia.org] is a post-NAND technology [theregister.co.uk] (also referred to as storage class memory [tomshardware.com]) along the lines of Crossbar's RRAM or HP's Memristors. It occupies a memory/storage tier in between DRAM and NAND. It is denser than DRAM, cheaper than DRAM, slower than DRAM, and non-volatile unlike DRAM. It is less dense than NAND, more expensive than NAND, and anywhere from somewhat faster than NAND to order(s) of magnitude faster, highly dependent on the type of workload. This article explains it pretty well [arstechnica.com].

                    Now you have had several companies working on post-NAND for years. Crossbar [wikipedia.org] has been hyping [theregister.co.uk] their shit for years. They have promised all around better specs than NAND and the ability to store multiple terabytes in a postage stamp-like form factor [theregister.co.uk].

                    So Crossbar, HP, Micron, IBM, Crocus Technology, Unity Semiconductor, Samsung, Toshiba, Spin Transfer Technologies, and other companies have all FAILED to get a legitimate post-NAND technology onto the market outside of some megabyte-sized cache products. Intel comes along with their concept and has apparently rushed it to market (although it has been delayed somewhat). It's in the post-NAND category, but it's not a true replacement for NAND since it doesn't try to compete with NAND on capacity/density and cost per bit. Intel is definitely not giving us terabytes of this stuff in postage stamp size anytime soon.

                    I will say this much for XPoint. They have 16 and 32 GB modules at reasonable prices [soylentnews.org]. They can be bought today - the 32 GB one is around $75-$80. So if your motherboard happens to have an M.2 slot, and you wanted to try this out as a kind of non-volatile boot drive, it would seem like you can do it fairly cheaply. Here are benchmarks [legitreviews.com]. DRAM prices have increased in the last 2 quarters, btw.

                    --
                    [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
                    • (Score: 2) by Azuma Hazuki on Thursday September 14 2017, @04:11PM

                      by Azuma Hazuki (5086) on Thursday September 14 2017, @04:11PM (#567870) Journal

                      LOL, riiiight after posting my last post I went "hmm, maybe I should Google this..." and found all that information. It's odd how the 16 and 32GB devices have worse sequential write than an NVMe NAND SSD, but the read speed looks good. And the 4K, low-queue-depth reads especially. I have only a 20GB root partition, so this would be viable for a Linux boot drive.

                      --
                      I am "that girl" your mother warned you about...
              • (Score: 2) by takyon on Thursday September 14 2017, @04:57AM

                by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday September 14 2017, @04:57AM (#567643) Journal

                About the consumer drives, capacity/$ and higher capacity is the draw rather than speed. I think 20-30% reduction in cost per bit has been typical for each new generation of 3D NAND. I don't know if anybody has estimated a % for the first upcoming QLC SSD products yet. Obviously, you get 33% more capacity over the equivalent TLC NAND, but that could be offset somewhat by higher manufacturing costs.

                Speculation: What if a move to QLC (or 8 bits-per-cell [theregister.co.uk] in the future) results in a slight performance boost simply from making cells more similar to bytes? In QLC, 1 byte is stored in 2 cells. For TLC... it's 2 and 2/3 cells?
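
                The cells-per-byte arithmetic, spelled out; states per cell double with each added bit, while cells per byte shrink much more slowly:

                for name, bits in (("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)):
                    # 2**bits voltage states per cell; 8/bits cells to hold one byte
                    print(f"{name}: {2 ** bits:2d} states, {8 / bits:.2f} cells per byte")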

                --
                [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2) by takyon on Wednesday September 13 2017, @07:03PM

        by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday September 13 2017, @07:03PM (#567388) Journal

        As to your question, I haven't bought from Newegg in the last year but I have had pretty good experiences with it. Rather than searching on their site, I would check SlickDeals [slickdeals.net] first.

        --
        [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 2, Interesting) by Anonymous Coward on Wednesday September 13 2017, @07:14PM

        by Anonymous Coward on Wednesday September 13 2017, @07:14PM (#567396)

        Microcenter's price is competitive with all the online shops, or better when on sale. If there is one near you, you can go and pick it up right away. Also, if you run into defective units, replacement is quick and easy.

    • (Score: 2) by DannyB on Wednesday September 13 2017, @08:54PM

      by DannyB (5839) Subscriber Badge on Wednesday September 13 2017, @08:54PM (#567454) Journal

      Wouldn't it make hard drives even more brittle therefore more vulnerable to shocks?

      A solution: instead of building 12 platters into a hard drive, build 12 platters into an SSD.

      Also find a way to make the SSD not induce vibration into the computer's case.

      --
      Every performance optimization is a grate wait lifted from my shoulders.
  • (Score: 5, Funny) by bob_super on Wednesday September 13 2017, @06:41PM (1 child)

    by bob_super (1357) on Wednesday September 13 2017, @06:41PM (#567376)

    - Apple releases Crystal Hard Drive, $5000/GB
      - Audiophiles fight for a decade to know whether those glass drives enable better sound quality
      - Geeks try to make actual music by shaving the drive platters and moistening the drive heads
      - web commenters finally try to find a replacement for "spinning rust", despite Al having been the material for all those years.
      - Pedants and trolls still argue that the old tech was Aluminium anyway!

    • (Score: 2, Informative) by Anonymous Coward on Wednesday September 13 2017, @09:23PM

      by Anonymous Coward on Wednesday September 13 2017, @09:23PM (#567476)

      "Rust" doesn't refer to the substrate, but to the coating of iron oxide [wikimedia.org] that was formerly used.

      The Toshiba MK-1122FC [computerhistory.org] had a glass substrate. It began shipping in 1991 (same year the Web started).

  • (Score: 2, Disagree) by fnj on Wednesday September 13 2017, @07:02PM (4 children)

    by fnj (1654) on Wednesday September 13 2017, @07:02PM (#567386)

    So fucking what? Yawn. We don't need a 1.5x improvement in density. We need 10x or 100x. Wake me up when you come up with something worth more than a picosecond flicker of mild interest.

    And can you IMAGINE how freakin fragile this thing full of glass would be?

    • (Score: 2) by takyon on Wednesday September 13 2017, @07:05PM

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday September 13 2017, @07:05PM (#567390) Journal

      Never ever ever ever ever ever ever shake a baby.

      Or a hard drive?

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
    • (Score: 2, Informative) by Anonymous Coward on Wednesday September 13 2017, @07:39PM (1 child)

      by Anonymous Coward on Wednesday September 13 2017, @07:39PM (#567412)

      Um, you realize the 2.5" HDD in your laptop has glass platters, right?

      • (Score: 1) by nwf on Saturday September 16 2017, @01:26AM

        by nwf (1469) on Saturday September 16 2017, @01:26AM (#568787)

        Some may be, but most of the ones I've taken apart are aluminum or something similar. Old 3.5" ones were sometimes glass, but even that was rare.

    • (Score: 2, Insightful) by Anonymous Coward on Wednesday September 13 2017, @08:07PM

      by Anonymous Coward on Wednesday September 13 2017, @08:07PM (#567434)

      That attitude is like saying you won't take a 50% pay increase because you're going to get rich by winning the lottery instead.

  • (Score: 3, Funny) by inertnet on Wednesday September 13 2017, @07:57PM (8 children)

    by inertnet (4071) on Wednesday September 13 2017, @07:57PM (#567428) Journal

    With every glass I lose more memory.

    • (Score: 2) by DannyB on Wednesday September 13 2017, @08:56PM (3 children)

      by DannyB (5839) Subscriber Badge on Wednesday September 13 2017, @08:56PM (#567456) Journal

      Memory upgrades are reasonably inexpensive. And it's easy to have 32 GB or 64 GB these days, without losing much of it.

      --
      Every performance optimization is a grate wait lifted from my shoulders.
      • (Score: 1) by Pax on Wednesday September 13 2017, @10:02PM (2 children)

        by Pax (5056) on Wednesday September 13 2017, @10:02PM (#567498)

        Where have you been? RAM prices have risen sharply. Last year I could get 32GB for about £130 for the cheap stuff, up to the £146.97 Corsair Vengeance 3000MHz blue LED kit (on offer) that I got. NOT any more, pal: the 16GB kit is http://www.ebuyer.com/766191-corsair-vengeance-blue-led-16gb-ddr4-3000mhz-memory-kit-cmu16gx4m2c3000c15b-cmu16gx4m2c3000c15b [ebuyer.com] at £173.99.

        32GB kits in other colours are £298.99 to £309.99 (clicky [ebuyer.com]).
        Even taking into account the £40 off I got, that's quite a price hike over the past year.

        • (Score: 4, Informative) by takyon on Wednesday September 13 2017, @10:14PM

          by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday September 13 2017, @10:14PM (#567504) Journal

          Here's some sources in case somebody doesn't believe you:

          https://www.theregister.co.uk/2017/05/18/dram_bonanza_for_top_three_suppliers/ [theregister.co.uk]

          Global DRAM shortages might have proved a pain in the butt for buyers of PCs, smartphones and servers, but – unsurprisingly – they were a boon for the memory manufacturers.

          Sales of the component hit a record level of $14.1bn in the first three months of 2017, up 13.4 per cent year-on-year, according to the latest numbers from Trendforce's DRAMeXchange division.

          And the data indicates at least a 30 per cent increase in the average contract price of PC DRAM modules between the fourth quarter of 2016 and the first quarter of 2017.

          http://www.eetimes.com/document.asp?doc_id=1331796 [eetimes.com]

          https://epsnews.com/2017/08/18/dram-prices-continue-climb/ [epsnews.com]

          DRAM buyers continued to face tight supply and rising prices in the second quarter of 2017 with no relief in sight in the second half of the year. Pricing for DRAMs is expected to remain on an upward trend, while production capacity expansion will be limited in the second half of 2017. Suppliers are expected to adjust their product mixes based on margins, according to market analysts.

          http://www.anandtech.com/show/11724/samsung-sk-hynix-graphics-memory-prices-increase-over-30-percent [anandtech.com]

          In the midst of a global DRAM shortage, Digitimes reports that the market prices for graphics memory from Samsung and SK Hynix have increased by over 30% for August. This latest jump in memory prices is apparently due to the pair of DRAM manufacturers repurposing part of their VRAM production capacities for server and smartphone memories instead. As Digitimes’ sources report, this VRAM pricing is expected to increase further in September, impacting graphics card and gaming notebook manufacturers. Consumers have already felt the pain through skyrocketing DDR4 prices, and TrendForce/DRAMeXchange expects the upward trend of PC DRAM chips to continue to 2018.

          http://www.dramexchange.com/ [dramexchange.com]

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
        • (Score: 2) by DannyB on Thursday September 14 2017, @04:27PM

          by DannyB (5839) Subscriber Badge on Thursday September 14 2017, @04:27PM (#567893) Journal

          I guess I think of cheap in a different way.

          If adding 32 GB of additional memory for about $300 gets me to market six months ahead of my competitor, it's cheap. Same with throwing more and more cpu cores at it. It's cheap.

          That, and the fact that it's not my $300 I'm spending when asking for such an upgrade.

          --
          Every performance optimization is a grate wait lifted from my shoulders.
    • (Score: 0) by Anonymous Coward on Wednesday September 13 2017, @09:00PM

      by Anonymous Coward on Wednesday September 13 2017, @09:00PM (#567460)

      *hic* just one more

    • (Score: 2) by c0lo on Wednesday September 13 2017, @10:58PM (2 children)

      by c0lo (156) Subscriber Badge on Wednesday September 13 2017, @10:58PM (#567518) Journal

      That's not the glasses' fault; it is caused by what you pour in the glass.
      Anyway, resorting to the use of filled glasses indicates you want to escape from some (possibly traumatic) memories - so I'd say it works as intended.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
      • (Score: 3, Funny) by Azuma Hazuki on Thursday September 14 2017, @03:57AM (1 child)

        by Azuma Hazuki (5086) on Thursday September 14 2017, @03:57AM (#567626) Journal

        You could say he's getting row-hammered? :D

        --
        I am "that girl" your mother warned you about...
        • (Score: 3, Funny) by c0lo on Thursday September 14 2017, @04:58AM

          by c0lo (156) Subscriber Badge on Thursday September 14 2017, @04:58AM (#567644) Journal

          Yeah, to the point at which he became inert.

          --
          https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 0) by Anonymous Coward on Wednesday September 13 2017, @08:30PM

    by Anonymous Coward on Wednesday September 13 2017, @08:30PM (#567445)

    Some drives have ceramic platters [youtube.com].

  • (Score: 2) by FatPhil on Wednesday September 13 2017, @08:51PM (5 children)

    by FatPhil (863) <reversethis-{if.fdsa} {ta} {tnelyos-cp}> on Wednesday September 13 2017, @08:51PM (#567453) Homepage
    That's 1 better than what we had in the 70s - a 9% improvement over 4 1/2 decades - 0.2% per year - yayyyy!!

    https://en.wikipedia.org/wiki/File:DysanRemovableDiskPack.agr.jpg
    --
    Great minds discuss ideas; average minds discuss events; small minds discuss people; the smallest discuss themselves
    • (Score: 2) by takyon on Wednesday September 13 2017, @09:01PM (4 children)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday September 13 2017, @09:01PM (#567464) Journal

      Tempted to throw 3.5" in the headline.

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 0) by Anonymous Coward on Wednesday September 13 2017, @09:27PM (3 children)

        by Anonymous Coward on Wednesday September 13 2017, @09:27PM (#567478)

        Is there enough space for it?

        • (Score: 2) by takyon on Wednesday September 13 2017, @09:30PM (2 children)

          by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Wednesday September 13 2017, @09:30PM (#567481) Journal

          We've switched to glass headlines, but they shatter as soon as someone complains.

          --
          [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
          • (Score: 1, Informative) by Anonymous Coward on Thursday September 14 2017, @12:21AM (1 child)

            by Anonymous Coward on Thursday September 14 2017, @12:21AM (#567542)

            > We've switched to glass headlines, but they shatter as soon as someone complains.

            We all know who it is. *shakes fist at wonkey_monkey

            • (Score: 2) by maxwell demon on Thursday September 14 2017, @06:09AM

              by maxwell demon (1608) on Thursday September 14 2017, @06:09AM (#567664) Journal

              We all know who it is. *shakes fist at wonkey_monkey

              I see, it's your fault: The glass shatters from your fist.

              --
              The Tao of math: The numbers you can count are not the real numbers.
  • (Score: 2) by Snotnose on Wednesday September 13 2017, @10:16PM (2 children)

    by Snotnose (1623) on Wednesday September 13 2017, @10:16PM (#567505)

    Every year I got a new lunchbox with a thermos. The thermos was a glass bottle in a metallic tube. Every year that glass broke within a month, no matter how careful I was with it. This was the 60s, I'm old enough to wish I'd kept all my old lunchboxes so I could get rich on Pawn Stars.

    Losing the ability to keep cold things cold til lunch is one thing, but losing terabytes of data cuz who the hell knows, ya, I'll be staying away from these things.

    --
    When the dust settled America realized it was saved by a porn star.
    • (Score: 2) by c0lo on Wednesday September 13 2017, @11:09PM

      by c0lo (156) Subscriber Badge on Wednesday September 13 2017, @11:09PM (#567522) Journal

      Every year that glass broke within a month, no matter how careful I was with it. This was the 60s,

      Who would have thought? 50 years later we have flexible Gorilla glass [youtu.be]

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
    • (Score: 1) by nwf on Saturday September 16 2017, @01:28AM

      by nwf (1469) on Saturday September 16 2017, @01:28AM (#568789)

      Maybe you got cheap ones. I still have some from the 70s that are just fine. The plastic on the outside has yellowed, but they are otherwise fine.

  • (Score: 2) by epitaxial on Thursday September 14 2017, @02:04AM (3 children)

    by epitaxial (3165) on Thursday September 14 2017, @02:04AM (#567576)

    The price per gigabyte has been flat for years now. I need more space, but 8TB drives are finally coming in at under $200. I'm not buying any Seagate garbage either; the extra money for HGST is worth it.

    • (Score: 3, Interesting) by Reziac on Thursday September 14 2017, @02:22AM

      by Reziac (2489) on Thursday September 14 2017, @02:22AM (#567582) Homepage

      Gee, I can't imagine why...

      https://www.backblaze.com/blog/hard-drive-failure-stats-q2-2017/ [backblaze.com]

      --
      And there is no Alkibiades to come back and save us from ourselves.
    • (Score: 2) by takyon on Thursday September 14 2017, @03:13AM (1 child)

      by takyon (881) <reversethis-{gro ... s} {ta} {noykat}> on Thursday September 14 2017, @03:13AM (#567605) Journal

      Looks like 8 TB has dropped down to $160-170 several times (both Seagate and WD/HGST):

      https://slickdeals.net/newsearch.php?src=SearchBarV2&q=8tb&searcharea=deals&searchin=first [slickdeals.net]

      The price per GB (or TB) has been incredibly flat compared to previous decades. 2 cents per GB if we go by $160/8TB. I believe it was about 3.5 cents per GB ($70/2TB) before the Thai floods, 3 cents per GB ($100/3TB) around 2013 or so, and 2.5 cents per GB ($100/4TB or $200/8TB) around 2015 or so. Those are consumer sale prices rather than whatever Backblaze [backblaze.com] was paying.

      The longer it takes for HAMR or bit-patterned media or other HDD advances to be deployed, the more likely it will be for SSDs to destroy HDDs. And it will happen without much fuss since both Western Digital and Seagate have diversified into SSDs with big acquisitions. We might get a bad situation where consumer/enterprise HDD development grinds to a complete halt several years before NAND actually reaches the same $/TB. Meaning people like you who need bulk data storage will be SOL.

      I want to believe in a world where we pay 1 cent or less per GB ($100 for 10 TB). We won't get there without the long-delayed HAMR [wikipedia.org]-ing.
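
      Those cents-per-GB figures, worked out (consumer sale prices as cited above, plus the hoped-for $100/10TB):

      for dollars, tb in ((70, 2), (100, 3), (100, 4), (160, 8), (100, 10)):
          # consumer sale price divided by capacity, in cents per GB
          print(f"${dollars} / {tb}TB -> {dollars * 100 / (tb * 1000):.1f} cents/GB")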

      --
      [SIG] 10/28/2017: Soylent Upgrade v14 [soylentnews.org]
      • (Score: 1, Interesting) by Anonymous Coward on Thursday September 14 2017, @03:45AM

        by Anonymous Coward on Thursday September 14 2017, @03:45AM (#567617)

        > I want to believe in a world where we pay 1 cent or less per GB ($100 for 10 TB).

        History lesson -- a college friend started a small company, must have been late 1970s. He bought DRAM from Intel in modest quantities and resold it in small quantities to early computer hobbyists. The name of the company was Centabyte -- he was selling DRAM for $0.01 per byte (8 bits, no error correcting), and that was a very good price at the time. His competition might have been surplus/obsolete core memory found at computer junk shops.

  • (Score: 0) by Anonymous Coward on Thursday September 14 2017, @03:16AM

    by Anonymous Coward on Thursday September 14 2017, @03:16AM (#567607)

    The glass they are experimenting with is 0.015 inches thick, "15 thou" to a machinist. And it's in a 3.5" drive, already in "inch dimensions". I hate conversions that give the dimension more precision than actually exists -- the mm dimension makes it look like the glass thickness is controlled to something on the order of 0.001mm, which probably isn't necessary in this application (and may add a lot of cost if it was ever specified).

    Ref. from tfa,
    > Hoya has already managed to create substrates as thin as 0.381mm, which is close to half the thickness of the platters in existing high-density drives.
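
    The conversion itself is exact:

    print(f"{0.015 * 25.4:.3f} mm")  # 15 thou * 25.4 mm/in = 0.381 mm, as quoted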
