
posted by janrinok on Monday April 29, @02:49PM   Printer-friendly
from the longer-if-you-leave-them-in-their-box dept.

Arthur T Knackerbracket has processed the following story:

Pinch of salt warning: these are Seagate's claims, not independently verified.

As Seagate ramps up shipments of its new heat-assisted magnetic recording (HAMR)-based Mozaic 3+ hard drive platform, the company is both in the enviable position of shipping the first major new hard drive technology in a decade, and the much less enviable position of proving the reliability of the first major new hard drive technology in a decade. Due to HAMR's use of temporary, localized heating of its platters, as well as all-new read/write heads, HAMR introduces multiple changes at once, which has raised questions about how reliable the technology will be. Looking to address these matters (and further promote their HAMR drives), Seagate has published a fresh blog post outlining the company's R&D efforts and why the company expects their HAMR drives to last several years – as long as or longer than current PMR hard drives.

According to the company, the reliability of Mozaic 3+ drives is on par with that of traditional drives relying on perpendicular magnetic recording (PMR). In fact, components of HAMR HDDs have demonstrated a 50% increase in reliability over the past two years. Seagate says that Mozaic 3+ drives boast impressive durability metrics: their read/write heads have demonstrated the capacity to handle over 3.2 petabytes of data transfer over 6,000 hours of operation, which exceeds the data transfer of a typical nearline hard drive by 20 times. Accordingly, Seagate is rating these drives for a mean time between failures (MTBF) of 2.5 million hours, in line with PMR-based drives.
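
A quick sanity check of that endurance figure (my arithmetic, not Seagate's): 3.2 PB in 6,000 hours is roughly 150 MB/s sustained, i.e. the heads were driven at close to a nearline drive's full sequential rate for the whole test. In Python:

    # Implied sustained transfer rate of the head-endurance test
    data_bytes = 3.2e15        # 3.2 PB, decimal
    seconds = 6_000 * 3600     # 6,000 hours of operation
    print(f"{data_bytes / seconds / 1e6:.0f} MB/s")   # ~148 MB/s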

Based on their field stress tests, involving over 500,000 Mozaic 3+ drives, Seagate says that the heads of Mozaic 3+ drives will last over seven years, surpassing the typical lifespan of current PMR-based drives. Generally, customers anticipate that modern PMR drives will last between four and five years with average usage, so these drives would exceed current expectations.

Altogether, Seagate is continuing to aim for a seamless transition from PMR to HAMR drives in customer systems. That means ensuring that these new drives can fit into existing data center infrastructure without requiring any changes to enterprise specifications, warranty conditions, or form factors.


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Insightful) by hazelnut on Monday April 29, @02:54PM (21 children)

    by hazelnut (30444) on Monday April 29, @02:54PM (#1355010)

> why the company expects their HAMR drives to last several years – as long as or longer than current PMR hard drives.

    I have drives that are over 10 years old and still reliable. I buy HDDs for the long term. They need to last several decades (yes, I have some from the 1990s that are still going).

    To head off the question of "why": because the first computer I ever bought myself is still working. It runs an old version of Linux now, because I can.

    • (Score: 2) by Unixnut on Monday April 29, @03:14PM

      by Unixnut (5779) on Monday April 29, @03:14PM (#1355012)

      I was going to write something similar. While I have drives from the 90s I can no longer find uses for, I still have drives from the 2000s that are going strong in systems (usually as a root partition).

      In fact most of the drives I have are between 10 and 20 years old. The only newer ones I have are in my storage array (as the last ones ran out of capacity) and even those are a few years old now.

      So a drive "lasting several years" is not exactly a positive endorsement for me, especially as "several" is so undefined. It could be anything from 3 to 7 years in my mind.

    • (Score: 3, Interesting) by zocalo on Monday April 29, @03:33PM (2 children)

      by zocalo (302) on Monday April 29, @03:33PM (#1355020)
      From TFS, it seems like they're specifically talking about the drive heads in connection with the 7-year figure, however, MTBF for the entire drive is given as 2.5mil hours, which is over 285 *years*, and that must presumably include the heads as part of the drive, so what gives? Assuming the figures are correct, the only way I can square that circle is that the 7-year figure relates to continuous R/W operations for the heads, and if you assume a more typical workload then you're going to get the MTBF figure. Or maybe they're just playing the "lies, damn lies, and statistics" game.



      That does match my experience with drives though; they're generally good for over 10 years (I also have drives still going strong according to SMART stats that are in that ballpark), but if you put them under heavier workloads, e.g. in a busy SAN, then they'll generally fail much sooner. Thinking about it, when I've seen drives fail, they usually spin up OK and often don't have all that many bad sectors, yet start having a lot of R/W errors - which would make sense if the heads are failing. Backblaze is pretty good at publishing stats on the reliability of their drives and how they fail, so it may be worth looking through some of their old reliability reports to see if there's any correlation with that.
      --
      UNIX? They're not even circumcised! Savages!
      • (Score: 2) by JoeMerchant on Monday April 29, @05:13PM

        by JoeMerchant (3937) on Monday April 29, @05:13PM (#1355052)

        >the 7-year figure, however, MTBF for the entire drive is given as 2.5mil hours, which is over 285 *years*

        Figures don't lie, but liars figure. How can you tell when a Statistician is hiding something from you? Any time they're not sharing all the raw data they are, by definition, hiding something from you.

        --
        🌻🌻 [google.com]
      • (Score: 3, Touché) by Whoever on Monday April 29, @09:38PM

        by Whoever (4524) on Monday April 29, @09:38PM (#1355125) Journal

        however, MTBF for the entire drive is given as 2.5mil hours, which is over 285 *years*, and that must presumably include the heads as part of the drive, so what gives?

        What gives is that you don't understand what MTBF means.
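
        For what it's worth, MTBF is a fleet failure-rate statistic over the drive's rated service life, not a predicted lifespan. A rough sketch of the arithmetic (mine, not Seagate's), in Python:

            # MTBF describes a fleet's failure rate within service life,
            # not how long any single drive will last.
            mtbf_hours = 2_500_000
            hours_per_year = 8766                 # 365.25 days, running 24/7
            afr = hours_per_year / mtbf_hours     # annualized failure rate
            print(f"AFR = {afr:.2%}")             # ~0.35% of drives per year
            print(f"Failures/year in a 100,000-drive fleet: {100_000 * afr:.0f}")  # ~351

        So a 2.5M-hour MTBF and a ~7-year head life are compatible: the MTBF just says roughly 0.35% of in-service drives fail per year.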

    • (Score: 2) by mcgrew on Monday April 29, @03:58PM (6 children)

      by mcgrew (701) <publish@mcgrewbooks.com> on Monday April 29, @03:58PM (#1355029) Homepage Journal

      I have drives that are over 10 years old and still reliable.

      I'll bet they're not Western Digital, although Seagate drives have usually lasted me longer than the storage space was relevant. I remember when a 40 MB drive was HUGE.

      --
      Poe's Law [nooze.org] has nothing to do with Edgar Allen Poetry
      • (Score: 3, Interesting) by drussell on Monday April 29, @07:05PM (5 children)

        by drussell (2678) on Monday April 29, @07:05PM (#1355076) Journal

        I'll bet they're not Western Digital, although Seagate drives have usually lasted me longer than the storage space was relevant. I remember when a 40 MB drive was HUGE.

        That really depends on the drive... what series / model it was, variously, over the years...

        These two are both still spinning along happily in my main mail server box here:

        Seagate Barracuda 9 - ST19171WC (9.1 GB SCSI, 7200 RPM, 10-platters / 20-heads, server-grade, 5 year warranty)

        Device: SEAGATE SX19171W Version: 9D32
        Serial number: LA504199
        User Capacity: 8,891,556 bytes
        SMART Health Status: OK
        Elements in grown defect list: 8
        Vendor (Seagate) cache information
            Blocks sent to initiator = 1147450885
            Blocks received from initiator = 2180193344
            Blocks read from cache and sent to initiator = 121094311
            Number of read and write commands whose size segment size = 74334
        Vendor (Seagate/Hitachi) factory information
            Number of hours powered up = 189687.45
            Gigabytes Processed - read: 3911565.072
            Gigabytes Processed - write: 3315.282

        189687.45 / 24 / 365.24 = 21.64 years and counting...

        Western Digital Caviar SE Family - WD800JB (80 GB IDE, 7200 RPM, 2-platters / 4-heads, consumer-grade, 3 year warranty)

        Device Model: WDC WD800JB-00FMA0
        Serial Number: WD-WMAJ93692738
        Firmware Version: 13.03G13
        User Capacity: 80,026,361,856 bytes
        ATA Version is: 6
        SMART overall-health self-assessment test result: PASSED
        Raw_Read_Error_Rate: 0
        Reallocated_Sector_Ct: 0
        Seek_Error_Rate: 0
        Power_On_Hours: 154757
        Reallocated_Event_Count: 0
        Offline_Uncorrectable: 0
        UDMA_CRC_Error_Count: 0
        Multi_Zone_Error_Rate: 0

        154757 / 24 / 365.24 = 17.65 years and counting...

        • (Score: 2) by drussell on Monday April 29, @07:09PM (3 children)

          by drussell (2678) on Monday April 29, @07:09PM (#1355080) Journal

          Hmm, slash mangled it even though I used & lt and & gt?

              Number of read and write commands whose size <= segment size = 651690748
              Number of read and write commands whose size > segment size = 74334

            Number of read and write commands whose size <= segment size = 651690748
            Number of read and write commands whose size > segment size = 74334

          • (Score: 4, Interesting) by drussell on Monday April 29, @07:11PM (2 children)

            by drussell (2678) on Monday April 29, @07:11PM (#1355084) Journal

            Ahhh.. It converts it back to a < or > after the preview, instead of retaining the & code...

            That's a bug. :)

            • (Score: 1) by pTamok on Tuesday April 30, @02:55PM (1 child)

              by pTamok (3042) on Tuesday April 30, @02:55PM (#1355222)

              Yes, I have experienced the same bug.

              I use 'lookalike' unicode characters to emulate the 'greater than' and 'less than' signs if I wish to display them in text. I've posted about this in the past.

              I'm afraid I didn't file a bug, I just use the above workaround. The workaround has the disadvantage that you can't cut'n'paste the displayed text into another document and expect it to work as valid HTML.

              • (Score: 2) by drussell on Tuesday April 30, @04:23PM

                by drussell (2678) on Tuesday April 30, @04:23PM (#1355232) Journal

                I use 'lookalike' unicode characters to emulate the 'greater than' and 'less than' signs if I wish to display them in text. I've posted about this in the past.

                Yeah, that was going to be my next thing to try. :)

                The & lt and & gt worked in the preview, so I submitted it, only to realize afterwards that the preview process had changed them back to < and >.

        • (Score: 2) by drussell on Wednesday May 01, @09:06PM

          by drussell (2678) on Wednesday May 01, @09:06PM (#1355446) Journal

          Also, this is incorrect on the Seagate:

          User Capacity: 8,891,556 bytes

          That is in Kbytes... It should have read:

          User Capacity: 9,104,953,344 bytes

    • (Score: 3, Interesting) by JoeMerchant on Monday April 29, @05:08PM

      by JoeMerchant (3937) on Monday April 29, @05:08PM (#1355050)

      I have drives that are over 10 years old and still working. Does that make them reliable? Yes, but past performance is no guarantee of future results.

      Remember also: hard drives are pretty worthless without power supplies and a computer interface - those components can not only fail themselves, but also introduce failures to the hard drive (such as: during a lightning strike.)

      I have been very happy with USB interfaced external hard drives for my "mission critical" data storage since about 2010. Whatever (Linux box) they are plugged into becomes a NAS, they are portable to all kinds of computers (though I rarely physically touch them) and as prices have come down over the years, my 2x 2TB mirrored hard drive system has added 2 more 2TB SSD mirrors, while one of the spinning platter systems did fail shortly after a lightning strike - but strangely the other one soldiers on 5 years hence.

      --
      🌻🌻 [google.com]
    • (Score: 3, Interesting) by looorg on Monday April 29, @06:16PM (5 children)

      by looorg (578) on Monday April 29, @06:16PM (#1355061)

      I used to religiously buy Seagate drives during the 90's. They worked then. Then they started to fail and the replacement drives failed so I switched. Currently they are all Intel drives and have been for about the last decade or so. They have been working fine so far. No issues. Yet.

      That said, I have kept every single drive I have used since the early 90's, except one -- I included that when I sold an Amiga 1200 to a friend. The rest are all boxed away. Some of them working, some being Schrödinger drives -- they might work, they might not; they worked last time I checked, and unless I check again they are still working -- even though part of me thinks that even if the components have not failed, the platters might now be wonky or stuck after decades of not spinning. Some failed but got saved due to content. Others serve perhaps as replacement parts, but I have only ever had to create two frankendrives (or whatever they should be referred to as) -- usually just control board swaps and power regulation component swaps.

      The reason I just kept them all is content, and also that the return policy has been really REALLY bad. I have to pay to ship their defective drive to them, and in return they offer me a refurbished drive. Adding up all the postage and such, I might as well just go and buy a new drive of a different brand.

      • (Score: 3, Informative) by turgid on Monday April 29, @07:00PM (4 children)

        by turgid (4318) Subscriber Badge on Monday April 29, @07:00PM (#1355072) Journal

        There used to be so many hard disk brands: Connor, Seagate, Western Digital, Quantum, Hitachi, IBM, Toshiba... Who remembers the Hitachi Death Star? Didn't IBM buy them? I got some second hand.

        • (Score: 3, Insightful) by Unixnut on Monday April 29, @07:32PM

          by Unixnut (5779) on Monday April 29, @07:32PM (#1355090)

          Who remembers the Hitachi Death Star? Didn't IBM buy them? I got some second hand.

          I remember them, and yes IBM did buy them. I have some IBM Death stars kicking around (20-40GB models AFAIR). They work fine as root partitions (although are noisier than other drives of the same period). However this is most definitely a case of survivorship bias. I had loads of them fail early on (the nickname was well earned).

          I never subscribed to the "brand X makes reliable drives", it varies by production batch more than anything. The Deskstar and Travelstar models however stand out as the only exception in my experience.

        • (Score: 2) by looorg on Monday April 29, @08:02PM

          by looorg (578) on Monday April 29, @08:02PM (#1355097)

          I remember Conner (I'm fairly sure it's Conner and not Connor, but I don't want to deep-dive into my box of drives). I had a friend who went on a business trip to the US buy me one -- 650 megabytes! Whooo .. That disk would never get full ... then it did. Quantum was a staple for a while in the 90's -- the Quantum Fireball or whatever lived up to the name. Hot, hot, hot.

          That said from memory storage manufacturers are just an endless stream of brand names and acquisitions.

          Isn't Hitachi now owned by Western Digital? They (WD) also own SanDisk.

          Conner was started by ex-Seagate people and eventually Seagate bought them back.

          Quantum was bought by Maxtor (ex-IBM)? But is still somehow around I think. I just have not seen any product from them in eons. Maxtor was then bought by Seagate.

          So I would guess there are a lot of brands, just not a lot of actual manufacturers around these days -- Western Digital, Toshiba and Seagate are probably almost all there is, then perhaps a bunch of small fish. I'm not sure which one is the largest, but if I had to guess I would go Seagate - Western - Toshiba; but it's a guess.

        • (Score: 3, Informative) by krokodilerian on Tuesday April 30, @04:49AM

          by krokodilerian (6979) on Tuesday April 30, @04:49AM (#1355172)

          Actually, after the "death star" fiasco, IBM sold their HDD business to Hitachi, which then made some of the most reliable HDDs until recently, when WD bought them.

          About 15 years ago, I ran a system with ~8500 drives, and there were noticeable differences in the failure rates between Samsung (bad), WD (less bad), and HGST (almost no failures).

        • (Score: 2) by Whoever on Tuesday April 30, @02:17PM

          by Whoever (4524) on Tuesday April 30, @02:17PM (#1355215) Journal

          In the 2-4TB range, the Hitachi drives appear to be very reliable.

    • (Score: 2) by turgid on Monday April 29, @06:57PM (1 child)

      by turgid (4318) Subscriber Badge on Monday April 29, @06:57PM (#1355066) Journal

      I wish I had been able to keep the first PC I built myself back in 1996. It started out as a Pentium 100 with 32MB RAM and 256k L2 cache. It ended up as a P166MMX with 512k L2 cache and 64MB RAM. I'm sure there's a Slackware, NetBSD or Gentoo that I could still run on it. However, over that time and due to various house moves it had to go to the recycling centre. The oldest home-made PC I have is the one I built as its replacement in 1999, which is a K6-2/500 with 512MB RAM. It's about three years since I last booted it. I can't remember what it's got on it, probably Slackware. The problem with these really old machines is they predate the CMOV instruction so any i686 binary will not run on them. Thank goodness for FOSS.

      • (Score: 2) by Unixnut on Monday April 29, @07:46PM

        by Unixnut (5779) on Monday April 29, @07:46PM (#1355095)

        The problem with these really old machines is they predate the CMOV instruction so any i686 binary will not run on them. Thank goodness for FOSS.

        Ah yes, I still have some old EPIA Mini-ITX machines (EPIA 5000 and I think the M-10000). In fact I am trying to install Linux on one of them right now (going to make a WiFi access point out of it), but I can't get the thing to boot. These old boards emulate "USB-FDD" drives, which apparently no Linux installer I tried supports anymore.

        The USB-CDROM drive is packed away somewhere deep in the attic so I can't burn a boot CD. In the end it may be easier to pull the drive, install Linux via a VM, then plonk it back in the machine. I don't miss the dodgy USB booting of early boards I can tell you that.

        The EPIA boards are i586, as they had some of the i686 instructions, but not all of them (I think CMOV is one of those missing). As a lot of Linux binaries only target i686 it does cause issues, but at least with FOSS you can recompile.
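
        For anyone sorting similar boards, a minimal check for CMOV (a Linux-specific Python sketch of mine, reading /proc/cpuinfo; i686 builds assume the instruction, and GCC's -march=i586 avoids emitting it):

            # Does this x86 CPU report CMOV? (i686 baselines assume it exists)
            with open("/proc/cpuinfo") as f:
                flags = next((line for line in f if line.startswith("flags")), "")
            print("cmov present" if "cmov" in flags.split()
                  else "no cmov: i586-class, build with -march=i586")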

        My first machine was a Pentium II, originally with 32MB of RAM which was a hand me down from distant family who bought a new PC.

        During that period the "Windows upgrade cycle" was in full swing and almost every house had a PC, so roughly every 12 months (usually around Christmas) last year's PCs (all towers and pizza boxes back then) would appear next to the garbage cans. I would carry a small tool kit with me and pull out as many components as I could. Ended up with quite a collection.

        By the end of it I maxed out my original PC with 4x20GB drives and 256MB of RAM (the highest it could take), which was enough to run Windows 2000 when it came out (I was amazed how slow it was on so much memory!).

        Plus I had enough components to build two more PCs. Good times. Nowadays people mostly use laptops and tablets, which don't have standard components (and are often glued shut), making them nothing more than landfill, unfortunately.

    • (Score: 2) by Whoever on Monday April 29, @09:40PM

      by Whoever (4524) on Monday April 29, @09:40PM (#1355126) Journal

      I have drives that are over 10 years old and still reliable.

      I have one drive that currently reports 120818 "Power on hours". That's almost 14 years.

  • (Score: 2, Informative) by pTamok on Monday April 29, @05:00PM (13 children)

    by pTamok (3042) on Monday April 29, @05:00PM (#1355048)

    Personally, I'd prefer to use non-helium-filled drives, as the helium will eventually leak.

    I'm also intrigued that HAMR seems to have won out over MAMR, as I would have expected the latter to be more reliable - but I'm not a hard-drive engineer.

    As far as I am concerned, I'll use SSDs for primary storage, but would love to back up to something long-term reliable - my guess would have been MAMR with PMR and shingling: but my guess appears to be wrong.

    https://www.anandtech.com/show/11925/western-digital-stuns-storage-industry-with-mamr-breakthrough-for-nextgen-hdds/2 [anandtech.com]
    https://www.anandtech.com/show/14077/toshiba-hdd-roadmap-smr-mamr-tdmr-and-hamr [anandtech.com]

    I don't know the reason why HAMR gives higher areal density.

    • (Score: 2) by JoeMerchant on Monday April 29, @05:10PM (10 children)

      by JoeMerchant (3937) on Monday April 29, @05:10PM (#1355051)

      Helium better than hydrogen, and if the case is welded and potted properly (big if)...

      Like fine wine, if you're going to keep it for a decade or more, best to invest in a climate controlled environment for your preciousness: including air temperature, humidity, vibration, no strong light sources, EMI, ESD, etc.

      --
      🌻🌻 [google.com]
      • (Score: 3, Touché) by bloodnok on Monday April 29, @05:36PM (9 children)

        by bloodnok (2578) on Monday April 29, @05:36PM (#1355056)

        Helium better than hydrogen,

        Please explain. I thought that helium was the most "escape-prone" element as it has the smallest molecular/atomic size. Hydrogen, being usually bound into 2 linked atoms is larger and so needs larger pores or mechanical gaps to pass through.

        Enquiring minds would really like to know.

        __
        The major

        • (Score: 2) by JoeMerchant on Monday April 29, @06:30PM (8 children)

          by JoeMerchant (3937) on Monday April 29, @06:30PM (#1355063)

          Hmmm... not sure about the diffusion issue, I know H2 is highly problematic with a molecular weight of 2, He (monatomic) has a molecular weight of 4.... probably a matter of exactly how you're trying to contain it (either one).

          --
          🌻🌻 [google.com]
          • (Score: 3, Informative) by pTamok on Monday April 29, @07:26PM (7 children)

            by pTamok (3042) on Monday April 29, @07:26PM (#1355088)

            According to an unreliable source (no citations of references, and not Wikipedia), "Due to the small size of helium atoms, the diffusion rate through solids is three times greater than that of air and 65% greater than that of hydrogen." Helium diffuses through (latex) rubber quickly, which is why helium balloons are made with aluminium-lined mylar.

            Some more info in this stackexchange exchange: https://physics.stackexchange.com/questions/587050/is-there-any-way-for-a-gas-to-pass-through-a-solid-metal [stackexchange.com]

            But the issue is not diffusion through solid metal - the point is that any leak is fatal, so you are dependent on a leak-free enclosure, whereas with an air-filled enclosure, a leak is non-fatal - in fact many hard drives used to have a 'labyrinthine filter' to allow the pressure in the hard-drive casing to equalise with the external pressure.

            Hard drive manufacturers will guarantee their Helium-filled drives for 5 years.

            https://www.backblaze.com/blog/helium-filled-hard-drive-failure-rates/ [backblaze.com]
            https://blog.westerndigital.com/helium-hard-drives-explained/ [westerndigital.com]
            https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/collateral/brochure/brochure-helioseal-technology.pdf [westerndigital.com]
            https://blog.westerndigital.com/race-to-seal-helium/ [westerndigital.com]
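
            To square that with the hydrogen question above (my back-of-the-envelope, not from the links): escape through an actual hole follows Graham's law, rate ∝ 1/√(molar mass), so H2 (M = 2) effuses √(4/2) ≈ 1.4× faster than He (M = 4). Permeation through intact material, though, depends on particle size, and a lone He atom is smaller than an H2 molecule (kinetic diameters roughly 2.6 Å vs 2.9 Å), so He wins there. Both claims can be true at once.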

            • (Score: 3, Informative) by JoeMerchant on Monday April 29, @07:42PM (6 children)

              by JoeMerchant (3937) on Monday April 29, @07:42PM (#1355093)

              It's relatively easy to weld an enclosure shut (Helium-tight) - what's harder is the material science to pass electrical connections through that metal case in a long-lived Helium-tight fashion. There are the metal-insulator junctions, which don't age nearly as well as a welded joint.

              I worked at a place that did Nitrogen filled titanium cases with epoxy headers for in-body implantation. Just two pass-through conductors, just a seven year life, so the epoxy-titanium join was good enough, but if it was Helium (or H2) on one side vs room air on the other, with all kinds of temperature variations (which you don't get in an implanted device, at least not while the implantee is alive), that's an easier nut to crack, as it were.

              --
              🌻🌻 [google.com]
              • (Score: 1) by pTamok on Tuesday April 30, @06:09AM (5 children)

                by pTamok (3042) on Tuesday April 30, @06:09AM (#1355176)

                The last link above goes into very slightly more detail on that. It's good that your experience confirms it.

                On the one hand, I recognise the extreme engineering that goes into making high-capacity hard drives, and it is fairly astonishing that the manufacturers can be confident of giving 5-year warranties without going out of business.

                On the other hand, my cynical side recognises that being forced to replace your storage media every 5 years gives a nice income stream to the storage media manufacturers. Longevity of what you sell is not good for making lots of money in a short time. This is why I'm slightly suspicious of Helium-filled drives, because I see that as (almost deliberate) built-in obsolescence. That said, there are probably other components I don't know about that have limited lives - bearings, capacitors, the solder on the circuit boards - so I'm just seizing on one (obvious) thing.

                I doubt it is possible to build a high-capacity hard drive that is also maintainable/repairable, so the next best thing is to make them cheaply, with minimal environmental despoliation, and easy to recycle.

                My quest for high-capacity reliable long-term storage continues. The sort of thing you can lose in the back of a filing cabinet for 40 years and guarantee to be able to read afterwards.

                • (Score: 3, Interesting) by JoeMerchant on Tuesday April 30, @12:03PM (2 children)

                  by JoeMerchant (3937) on Tuesday April 30, @12:03PM (#1355203)

                  >Longevity of what you sell is not good for making lots of money in a short time.

                  Actually, it's great for the first "acceptably short life of product" x2 or x3 - say the market expects your product to last 8-10 years, you'll sell what you sell and around year 10-12 you'll get some boost for "better than expected longevity." But, as the years drag on, those original customers are holding their 30 year old product and most of them aren't buying replacements. Some that do sell their old product into the used market further cannibalizing new sales.

                  I worked for a company through the 1990s while it fought against sales it had made in the 1970s. The old stuff just wouldn't die.

                  --
                  🌻🌻 [google.com]
                  • (Score: 2) by ChrisMaple on Tuesday April 30, @08:57PM (1 child)

                    by ChrisMaple (6964) on Tuesday April 30, @08:57PM (#1355254)

                    The requirement for larger storage continues. Why should they care if their decade-old 1 TB drives are still working when they need 10 TB drives? How many people would replace storage in a decade-old computer instead of replacing the computer?

                    • (Score: 2) by JoeMerchant on Tuesday April 30, @10:59PM

                      by JoeMerchant (3937) on Tuesday April 30, @10:59PM (#1355273)

                      I don't know the whole market, but once I hit 2TB I pretty much stopped valuing larger drives.

                      --
                      🌻🌻 [google.com]
                • (Score: 2) by JoeMerchant on Tuesday April 30, @12:06PM (1 child)

                  by JoeMerchant (3937) on Tuesday April 30, @12:06PM (#1355204)

                  >quest for high-capacity reliable long-term storage continues. The sort of thing you can lose in the back of a filing cabinet for 40 years and guarantee to be able to read afterwards.

                  Microfiche of QR codes?

                  More seriously, what's wrong with SSDs? I suspect USB-C (adapter cables to the next-big thing) will be available in 40 years. As I understand it, the vulnerability of SSD comes mostly from write cycles (and corrosion of the PCBAs when made too cheaply...)

                  --
                  🌻🌻 [google.com]
                  • (Score: 3, Informative) by pTamok on Tuesday April 30, @12:59PM

                    by pTamok (3042) on Tuesday April 30, @12:59PM (#1355207)

                    I have looked into this.

                    For small amounts of data, QR codes printed with acid-free ink on archive quality paper is hard to beat. For slightly larger amounts, archive-quality microfiche is very good. There are commercial companies offering it as a service [bmiimaging.com]. After that, it gets difficult. I've got some old magneto-optical disks that are difficult and expensive to read now due to the lack of players.

                    Keeping backups and moving stuff from older media to newer media 'works'. Anything electronic is going to die, especially if it uses electrolytic capacitors, or if it uses lead-free solder due to the tin-whisker problem. Tape is pretty good, but you have the problem of finding a player a few decades from now.

                    SSDs - storage is dependent on a very small amount of charge not leaking/dissipating. It's worse for multi-level-cell implementations. There are industry standards (I won't link now, I think they are in earlier posts), but SSDs definitely fail the 'leave in a filing cabinet for 40 years' test.

                    Archive-quality CDs and DVDs are reasonably good, if stored properly, but the capacity of DVDs is just too small for some datasets. Will there still be players in a couple of decades?

                    Anyone who runs a digital archive of any size these days needs to ensure data is copied from old media to new media regularly. Archival storage has become an active process, not a passive one.

                    Commercial scale solutions are available, but home/soho/semi-professional small-scale solutions are thin on the ground. It's odd to think the photographic negatives and prints of photos taken by my grandparents will likely outlast the digital photos of my generation (which is why archival microfiche exists).

                    <sarcasm>Of course, I could just store stuff in the cloud and make it Somebody Else's Problem</sarcasm>.

    • (Score: 2) by turgid on Monday April 29, @07:02PM (1 child)

      by turgid (4318) Subscriber Badge on Monday April 29, @07:02PM (#1355074) Journal

      What's the write speed of SSDs like, and how's the reliability? They're still pretty expensive per unit storage compared with the spinning rust.

      • (Score: 2, Informative) by pTamok on Tuesday April 30, @03:52PM

        by pTamok (3042) on Tuesday April 30, @03:52PM (#1355229)

        I believe some SSDs have on-device RAM as a cache, so you can write pretty quickly to them until the cache fills up. Obviously, you need stable power...

        Others have a cache of Single-level Cells (SLC) which can be written to quickly, and will copy from there to Multi-level Cells (MLC) 'later'. If you fill the SLC cache, writing speed slows dramatically (graphs in howtogeek link below).

        A Single-level cell is either charged or not. There's a 'grey zone' level of charge at around 50% where you can't reliably determine whether the cell is meant to be storing a 0 or a 1.
        A Dual-level cell can store two bits in a cell. It does this by having four possible levels of charge, where the charge-level boundaries/grey-zones are at 25%, 50% and 75% respectively.
        A Triple-level cell can store three bits in a cell. It does this by having eight possible levels of charge, where the charge-level boundaries/grey-zones are at 12.5%, 25%, 37.5%, 50%, 62.5%, 75%, and 87.5% respectively. It takes longer to determine where the charge level sits relative to the boundaries than it does with SLC.
        A Quad-level cell can store four bits in a cell. It does this by having 16 possible levels of charge, where the boundaries/grey zones are at 6.25% intervals.

        The more charge levels you have, the slower the write speed (and read speed), and the more error correction you need. Cell endurance is also reduced.
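
        The charge-window arithmetic above generalizes neatly; a minimal Python sketch (my illustration, not from the linked articles):

            # Charge levels and per-state window width for n bits per cell
            for bits, name in [(1, "SLC"), (2, "MLC"), (3, "TLC"), (4, "QLC"), (5, "PLC")]:
                levels = 2 ** bits        # distinguishable charge states
                window = 100 / levels     # % of the charge range per state
                print(f"{name}: {levels} levels, ~{window:.2f}% of range per state")

        Each extra bit halves the window, which is why speed, endurance, and retention all degrade as you go from SLC towards QLC and beyond.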

        Penta-level SSDs have been shown in prototype, hexa-level are working in the lab, and Kioxia state that they believe octa-level is a future possibility. [tomshardware.com]
        https://www.tomshardware.com/news/kioxia-demonstrates-hlc-nand-memory [tomshardware.com]
        https://blocksandfiles.com/2021/08/02/how-cool-is-that-kioxias-hexa-level-cell-flash-in-liquid-nitrogen/ [blocksandfiles.com]

        This paper (published in 2015 - do a search for the title "Data retention in MLC NAND flash memory: Characterization, optimization, and recovery") states that the number of electrons used to indicate the charge level in a multi-level cell can be as few as 100.

        https://users.ece.cmu.edu/~omutlu/pub/flash-memory-data-retention_hpca15.pdf [cmu.edu] (PDF)

        As flash memory process technology scales to smaller feature sizes, the capacitance of a flash cell, and the number of electrons stored on it, decreases. State-of-the-art MLC flash memory cells can only store ~100 electrons. Gaining or losing several electrons on a flash cell can significantly change the cell’s voltage level and eventually alter the state of the cell. In addition, MLC technology reduces the size of the threshold voltage window [9], i.e., the span of threshold voltage values corresponding to each logical state, in order to store more states in a single cell. This also makes the state of a cell more likely to shift due to charge loss caused by retention noise. As such, for flash memory, retention errors are one of the most important limiting factors of more aggressive process scaling and MLC technology.

        After a bit of searching, this article from 2019 goes into read and write speeds. It's worth a skim-read:

        https://www.howtogeek.com/428869/ssds-are-getting-denser-and-slower-thanks-to-qlc-flash/ [howtogeek.com]

        The red line representing the Crucial P1 operates at solid NVMe speeds, albeit a little slow compared to some of the higher-end offerings. But after about 75 GB of writes, the cache becomes full, and you can see the real speed of QLC flash. The line plummets to around 80 MB/s, slower than most hard drives for sustained writes.

        In 2024, things might have improved. This 2023 webpage shows the possible problems with QLC SSDs

        https://www.cgdirector.com/qlc-vs-tlc-ssds/ [cgdirector.com]

  • (Score: 4, Interesting) by ShovelOperator1 on Monday April 29, @08:33PM (5 children)

    by ShovelOperator1 (18058) on Monday April 29, @08:33PM (#1355104)

    My old trusty ST225 got a 30MB upgrade via a controller card and works even today. In those days, formatting a 20MB drive to 30MB using the modulation hack was considered dangerously close to the edge and usually required thorough verification of bad sectors. However, the drive still runs well despite being over 27 years old.
    The cutting-edge server drives usually became unusable after about a decade of operation: the first 5-6 years continuous, then mostly continuous.
    The exceptions were typical examples of a "bathtub curve", though with a quite slow final incline - many drives failed early, and the good ones seem to still work. And it went on like this into the 1990s; with SMR, things became even worse.
    I see some similarities - like in floppy disks. I have floppy disks from the late 1980s and they work well. However, in the late 90s something went wrong, and even brands that were really verified and famous for reliability got worse and worse in quality. The "medium of tomorrow" was the expensive CD and the even more expensive USB silicon drives.
    Well, I still have CD-Rs from 1996-2000 and they read great. However, later CDs in the same price segment became rubbish. It can be seen in the coating, which is more and more transparent, in edges which look like they were cut with a hacksaw, and in general reliability, which is poor right after recording. It's still the same 700MB CD, but the medium of tomorrow is the portable SSD drive.
    If taken care of properly, an SSD may work really well, but for the role of a magnetic disk - long-term storage - they are useless. When I got access to over 200 solid-state drives which had been stored without power for a few years, I found that most of them had their internal settings damaged! While some models could be resurrected using some strange, mostly Chinese tools (available from shady Internet forums), most of them could not be salvaged.
    So if the modern hard disk, the thing firmly embedded in the computer, lasts for only 3 years, what will be the "medium of tomorrow"? The famous "Cloud", or rather someone else's computer?
    Thanks, I'll stick to my ST225.

    What I can tell about exceptions: there are still some good flash drives in terms of reliability. However, their cost is not 10x but 50-100x the price of a typical drive, and they are usually available only via direct import.

    • (Score: 3, Interesting) by drussell on Tuesday April 30, @04:19PM (4 children)

      by drussell (2678) on Tuesday April 30, @04:19PM (#1355231) Journal

      My old trusty ST225 got a 30MB upgrade via a controller card and works even today. In those days, formatting a 20MB drive to 30MB using the modulation hack was considered dangerously close to the edge and usually required thorough verification of bad sectors. However, the drive still runs well despite being over 27 years old.

      You're lucky you had one of the ST-225s that this "trick" worked on. Yours probably actually has the head amplifier circuitry that feeds the data separator circuitry on the controller as used on the ST-238R. The HDA between the two is supposedly the same, but the circuit boards were slightly different. Back in the day, some people claimed the actual media in the HDA was of "different grade," but I believe Randy at TLSI (Tri-Logic Systems was a major hard drive repair facility down in Texas, back in the day) dispelled this myth, although if you sent them any drive for repair that required new media, they always used the highest-quality, smallest grain stuff available that was suitable for 30+ sectors per track, as used on many high-density ESDI disks.

      Seagate apparently sold quite a number of ST-225s that were actually ST-238s when demand for 225s exceeded supply and they had extra ST-238-grade drives / controllers / chips / whatever available. It is also possible that at some point they just started making them all as 238-style and then tested and binned them appropriately, as some other manufacturers did, but originally, at least, the ST-225 and ST-238 were actually slightly different drives. The standard ST-225 drives were ones which generally didn't work well on a Perstor at 31 sectors/track or even 26 sectors/track on an RLL controller, although the Perstors were actually a bit more forgiving of marginal disks than most RLL controllers, and supposedly some of the Adaptec RLL controllers were pretty forgiving of weak signals compared to the WD RLL controllers.

      Many manufacturers made one disk and then essentially tested them as RLL disks. Well known examples include the Miniscribe 3650 (MFM) / 3675 (RLL) and the Mitsubishi MR535 which was sold under the same model number, both as MFM certified variants and RLL certified (at a higher price.) If the drive passed properly as RLL, they binned it as the RLL version. If it didn't pass on RLL, they tested it as MFM and if it passed, they sold it as an MFM-model drive.

      Standard MFM records at 17 sectors/track.
      Standard RLL records at 26 sectors/track.
      Perstor's proprietary RLL scheme records at 31 sectors/track.

      IIRC, the basic bitrate frequency of MFM was 10 MHz, and standard RLL 15 MHz and the disks spin at 3600 RPM. ESDI was all over the place, and some controller implementations allowed up to 28 MHz speeds, giving possible data transfer rates up to about 20 Mbit/s. The number of sectors/track intended to be recorded by the manufacturer were all over the map for ESDI, often about 30 to 34 (512-byte) sectors/track (I seem to recall 32 being common,) and other rotational speeds were also then possible, giving greater overall flexibility to the designers of the disks of the day.

      I ran a pair of Miniscribe 3650 (MFM-only certified) on my Perstor PS180-16FN controller for many years in my BBS machine in the later 1980s - early 1990s until the drive mechanisms started to wear out and I replaced them with a Fujitsu 180MB IDE which cost something like $635 CAD wholesale at the time. Storage was expensive, every additional byte was precious. I even formatted my 3650s basically all the way to the landing zone. (They were rated to have 809 "data cylinders" with the head landing zone / parking zone being at 852. Most people were pretty safe using them out to about track 820, but I used to format mine all the way in to 840 for most of their life, and at the end I think I might have even formatted it out to 850/851/852 since they were running 24/7 anyway and I hadn't usually ever parked the heads, so those tracks were still in OK condition.)

      Disks were very expensive in those days. Eking out that extra 53-82% by running an MFM drive on an RLL controller or a Perstor was really quite cost effective, when it worked. :)
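
      That 53-82% is just the sectors-per-track ratio (26/17 ≈ 1.53, 31/17 ≈ 1.82). A quick Python sketch using the ST-225's geometry (615 cylinders, 4 heads, 512-byte sectors; binary megabytes, as drives were labelled then):

          # Formatted capacity from drive geometry
          def capacity_mb(cyls, heads, sectors_per_track, sector_bytes=512):
              return cyls * heads * sectors_per_track * sector_bytes / 2**20

          for scheme, spt in [("MFM", 17), ("RLL", 26), ("Perstor", 31)]:
              print(f"{scheme} at {spt} sectors/track: {capacity_mb(615, 4, spt):.1f} MB")
          # MFM at 17: 20.4 MB; RLL at 26: 31.2 MB; Perstor at 31: 37.2 MB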

      • (Score: 2) by ShovelOperator1 on Tuesday April 30, @06:06PM (3 children)

        by ShovelOperator1 (18058) on Tuesday April 30, @06:06PM (#1355234)

        Wow, thanks for the information! In those days, where I lived, there were at least four common 20MB drive "models", all of which could have been ST225s.
        First, there were "genuine" Seagate ST225 drives, usually sold as part of a computer, or as an upgrade, by computer shops. Early ones could handle MFM; later ones could handle RLL too. There was no specific cutoff date; it was just a matter of formatting and then testing bad tracks thoroughly. If the number of bad sectors grew significantly vs MFM (a small growth was quite normal), it was not a good RLL candidate; otherwise it got a "probation" period, and if it was OK until the next re-format, it was generally OK from then on. The drive I am writing about is one of these. Earlier drives, when re-formatted using an RLL controller, could develop bad sectors.

        And now a totally local European thing: There were also three imported re-branded models, with no Seagate stickers at all, but numbered like "V1", "V2" and "V3", or something like this. Sold discreetly at computer fairs/markets, usually by strange individuals selling them out of cardboard boxes with quite surprising styrofoam fillings, right from the car, for half of the genuine Seagate's price. Their reliability was not very good - I have one 40MB unit and it still operates, but it has over 500kB of unusable sectors. They seemed to be marginally faster than the original ST225.
        Generally:
          - V1 was formatted to 20MB using only ST225 geometry. There was something altered in the electronics, and I remember having two drives with different modifications - so replacing the electronics was futile.
          - V2 - the same, but could generally withstand RLL.
          - V3 - 40MB MFM unit. I suspect it was a rejected ST251.

        • (Score: 3, Interesting) by drussell on Wednesday May 01, @05:37PM (2 children)

          by drussell (2678) on Wednesday May 01, @05:37PM (#1355407) Journal

          First, there were "genuine" Seagate ST225 drives, usually sold as part of a computer, or as an upgrade, by computer shops. Early ones could handle MFM; later ones could handle RLL too. There was no specific cutoff date; it was just a matter of formatting and then testing bad tracks thoroughly. If the number of bad sectors grew significantly vs MFM (a small growth was quite normal), it was not a good RLL candidate; otherwise it got a "probation" period, and if it was OK until the next re-format, it was generally OK from then on. The drive I am writing about is one of these. Earlier drives, when re-formatted using an RLL controller, could develop bad sectors.

          That tracks with my experience and recollection of the reported experiences of others at the time.

          This is why I believe your ST-225 labelled drive is actually essentially an ST-238(R). Regardless of what model the sticker says it is, if it works, and has worked flawlessly for decades, formatted RLL, for all intents and purposes it is actually an ST-238R.

          And now a totally local European thing: There were also three imported re-branded models, with no Seagate stickers at all, but numbered like "V1", "V2" and "V3", or something like this. Sold discreetly at computer fairs/markets, usually by strange individuals selling them out of cardboard boxes with quite surprising styrofoam fillings, right from the car, for half of the genuine Seagate's price.

          There are really three distinct possibilities as to what those drives you encountered actually were.

          Number one, they could be complete knock-off clones, typically made in places like Korea or China towards the end of the 20MB-era. It should be pretty obvious though, if you compare a known genuine Seagate to the clone drives, the castings are likely to be subtly different, etc. You should be able to tell if it is actually a newly-manufactured clone knock-off rather than a Seagate. I do not have any examples of these in my personal collection, but I did see some obvious clone drives of various models back in the day. It was pretty obvious they were clone fakery and had the quality to match.

          Number two, they could be factory-refurbished drives which were purchased in bulk from Seagate and then resold by various sellers. The actual HDA castings on these will be identical to actual Seagate drives, and the boards should match genuine specimens also. I've personally never seen Seagate refurbs that weren't still obviously branded Seagate, and those generally have a Seagate Factory Refurbished sticker on them; however, they may well have done things differently in the distant past. I find it plausible that they may have sold some refurbished units to bulk customers or OEMs who could essentially white-box them and handle all warranty themselves, hence the lack of Seagate markings, even though they would be actual genuine Seagate drives; just that all full testing and support is handled by the bulk buyer. You would think they would still have a proper Seagate serial number somewhere on the drive so they could be tracked as "out-of-warranty" by Seagate.

          Number three, they could be genuine Seagate drives that were repaired/rebuilt by a third party repair facility and then sold on, or even rejects from an OEM which were then sold on by an unscrupulous seller, or whatever. Very grey-market, but still Seagate-factory origin, just unsupported by the factory, only by the seller.

          I still have at least a couple dozen Seagate Barracuda ST-19171WC 9.1GB SCSI factory-refurbished drives, including at least one still-sealed, unopened 10-pack. I bought 5 or 6 boxes of them back in about 1998. The story was that they were surplus refurbs from EMC Corporation. This would make some sense, as EMC would probably have been buying disks like those in bulk by the pallet load or full truckload from Seagate, and would obviously have supposedly faulty ones returned from the field. They probably negotiated a very low bulk price from Seagate, possibly with a severely reduced warranty period in exchange for lower cost, but there would still be disks returned to the factory under whatever warranty they negotiated. I think I paid all of $100 per box of 10, so $10 per drive. A bargain price for SCSI storage at the time, to be sure.

          They need to go through rigorous testing after unboxing, though. My testing regime is to record all information from each drive: the serial number, SMART info, the contents of both the primary and grown defect lists, etc. I then disable all error correction and do a factory-style LLF, including a 17-pass pattern test (doing a full LLF each time) using Jörg Schilling's good ol' sformat utility. This procedure actually even rebuilds the PRIMARY defect list on these drives, essentially as if they were completely re-tested from scratch at the factory. I then compare the results to the original defect lists from before testing, re-enable write caching and error correction, etc. It takes about a day to run this procedure on each one of these drives. Luckily, with SCSI you can easily do many drives in parallel when deep-testing a batch like this. After sitting in the boxes for many years, they tended to vibrate rather fiercely when first fired up, until the spindle lubricant redistributed or something (that's my guess), so I started always pre-baking them at about 50-60°C for an hour or two, ramping slowly up and then slowly down so as to stay nowhere close to the maximum specified storage-temperature gradient of 20°C/h. I then power them up and allow them to come to temperature in the drive rack and idle for 12-24h before actually testing.

          The majority of these drives have a Seagate Factory Refurbished sticker on them, but some of them do not. Most of them have also had the Identification message in the firmware modified to say "SX19171WC" instead of "ST..." These could well be drives that were returned with no-fault found, or perhaps just had firmware updates or whatnot. Remember, these would have been used in arrays, quite possibly with advanced features like spindle synchronization enabled, etc. Some of them are definitely still flakey, and some continue to grow defects during testing or whatever, and go in the BAD-DISK pile with other failed disks. The ones that pass, generally get put in a caddy for one of my DEC Storageworks drive racks, labelled with the model and serial number and *MY* sequence number (like ST19171WC #34) and sit ready for deployment when needed. I've also sold quite a number of them over the years as rigorously tested, properly working "refurb" drives (and I still honor the full 5-year warranty myself, in house, but have never yet had to replace one) to people for uses like, for example, POS systems. They don't need high capacity or modern "speed", they just need high-reliability for their legacy system and these provide a drop-in replacement. The low density platters (20 heads for 9.1 GB) and time-tested magnetoresistive (MR) head technology mean excellent longevity; you're certainly not likely to wear out the media.

          If anyone has a use that needs any fully tested, refurbished Seagate Barracuda 9.1GB 7200 RPM SCSI disks, the "WC" 80-pin SCA variant (Single-Ended, Fast/Wide SCSI, 40 MB/s) with my 5-year warranty, hit me up. You're looking at about $125 CAD per fully tested, guaranteed, 5-year warranty, old-school disk drive. :)

          Their reliability was not very good - I have one 40MB unit and it still operates, but it has over 500kB of unusable sectors. They seemed to be marginally faster than the original ST225.
          Generally:
              - V1 was formatted to 20MB using only ST225 geometry. There was something altered in the electronics, and I remember having two drives with different modifications - so replacing the electronics was futile.
              - V2 - the same, but could generally withstand RLL.
              - V3 - 40MB MFM unit. I suspect it was a rejected ST251.

          If you had anything claiming to be an ST225, 20 MB MFM, but it had a faster seek time, it was probably a fake/clone. The actual ST225 was only ever made by Seagate with a 65ms (average access time) stepper motor head actuator system. It probably doesn't sound like a Seagate either, then. Seagate stepper motor drives have a very distinctive "beepity-beep" sound to them.

          Now on the other hand, Seagate did also make a drive called an ST225R. Don't confuse this with the ST238R (sometimes labelled without the R, but all ST238s are RLL and rightly should be called ST238R despite what the sticker might say.) The ST238R acts just like a ST225 if you format it MFM, (and indeed, in high-reliability applications back in the day, that is exactly what you did; buy an RLL-certified disk and format it MFM). The ST225R is a completely different beast. It is an ST250R with only one disk platter (2 heads) instead of two (4 heads.) Now, I suppose it is possible that the ST225R was just an ST250R that failed testing and instead of sending it back to the re-work line and re-testing like they normally would do, were sold with two heads disabled as ST225Rs, but I doubt it. They probably only have one platter and two heads. I've never actually seen either an ST225R or ST250R in the real world, so I cannot physically check an ST225R to verify anything about them, so it is all speculation. I think the ST225R and ST250R were rated 70ms access time, IIRC, but the really strange thing is they were apparently rated for 31 sector/track RLL 2,7 operation like that used on the Perstor, rather than "normal RLL" at 26 sectors/track. I have no clue what the intended application for those was, it is very weird.

          The ST251 was a much more common 40MB-class drive, and has a somewhat faster head actuator stepper than the ST225/238R. Normal ST251 drives are rated 40ms, which is noticeably faster than my Miniscribe 3650s were (61ms) and had a much smoother, quieter "beepity beep boop" quality to the stepper motor noise than the loud, crude CLUNK CLUNK CLUNK of the Miniscribe. (A friend of mine had an ST251 back at the time, so I'm quite familiar with it and things like its "stiction" problems.) There was, however, an additional high-performance variant called the ST251-1 which, amazingly, while it was still a stepper motor drive, had a 28ms average access time! Miniscribe had the much more expensive 3053 as their high-performance 40MB drive, using a voice-coil-and-servo head actuator system for a 25ms average seek time and high reliability. I don't think Seagate ever made any half-height 5¼" MFM/RLL disks with voice coil actuators. The later full height, 5¼" ST40xx models (ST4038, ST4051, ST4053, ST4077R and ST4096) were all voice coil, 40ms - 28ms and there were some 3½" drives like the ST151 and ST157R that were 28ms voice coil, but I think that stepper version of the ST251-1 was the pinnacle of half-height 5¼" Seagate. Before the invention of "embedded servo" information, an entire surface and head needed to be dedicated to positioning information for the servo, meaning added expense for reduced capacity, but greatly improved performance and reliability.

          There was also an RLL-certified version of the ST251 called the ST277R. Like the ST225, most (at least earlier) ST251s were generally unreliable on RLL controllers or a Perstor. You needed the ST277R. Whether this was due mainly to slightly different electronics, simple testing and binning as different models or differences in media (standard "oxide" vs "plated" media, and the grain size and maximum bit density, etc.) at various times throughout their production run, we will probably never know for sure unless we can ask someone like Randy van de Loo if he's still even around. :)

          In typical Seagate fashion of having every possible variant of everything to keep all production lines full, utilizing all available supplies and tooling, there was ALSO an ST251R! The ST251R is a ST277R with only two platters instead of three, but running RLL so still approximately 40MB capacity. The ST251R is a 26 sector/track, standard RLL drive, unlike the ST250R and ST225R strangeness in the specs.

          Many of these were common drives for Sysops to run in our BBS machines, back in the day, for example:

                               Form/              Avg           ---- Sectors / Track ----
          Vendor       Model   Size    Actuator   ms  Cyls  Hds MFM 17 RLL 26  Perstor 31
          ------------ ------- ------- ---------- --- ----- --- ------ ------- ----------
          Seagate      ST225R  5¼" HH  Stepper    70  667   2   (11.1) (16.9)   20.2
          Seagate      ST225   5¼" HH  Stepper    65  615   4    20.4  (31.2)  (37.2)
          Seagate      ST238R  5¼" HH  Stepper    65  615   4   (20.4)  31.2   (37.2)
          Seagate      ST250R  5¼" HH  Stepper    70  667   4   (22.1) (33.9)   40.4
          Seagate      ST4038  5¼" FH  Voice Coil 40  733   5    30.4  (46.5)  (55.5)
          Miniscribe   3650    5¼" HH  Stepper    61  809   6    40.3  (61.6)  (73.5)
          Miniscribe   3675    5¼" HH  Stepper    61  809   6   (40.3)  61.6   (73.5)
          Mitsubishi   MR535   5¼" HH  Voice Coil 28  977   5    40.5   62.0   (73.9)
          Seagate      ST4051  5¼" FH  Voice Coil 40  977   5    40.5  (62.0)  (73.9)
          Seagate      ST251   5¼" HH  Stepper    40  820   6    40.8  (62.5)  (74.5)
          Seagate      ST251-1 5¼" HH  Stepper    28  820   6    40.8  (62.5)  (74.5)
          Seagate      ST277R  5¼" HH  Stepper    28  820   6   (40.8)  62.5   (74.5)
          Miniscribe   3053    5¼" HH  Voice Coil 25  1024  5    42.5  (65.0)  (77.5)
          Micropolis   1333A   5¼" FH  Voice Coil 28  1024  5    42.5  (65.0)  (77.5)
          Microscience HH1050  5¼" HH  Voice Coil 28  1024  5    42.5  (65.0)  (77.5)
          Microscience HH1060  5¼" HH  Voice Coil 28  1024  5   (42.5)  65.0   (77.5)
          Seagate      ST4053  5¼" FH  Voice Coil 28  1024  5    42.5  (65.0)  (77.5)
          Seagate      ST4077R 5¼" FH  Voice Coil 28  1024  5   (42.5)  65.0   (77.5)
          Micropolis   1335    5¼" FH  Voice Coil 28  1024  8    68.0  (104.0) (124.0)
          Seagate      ST4096  5¼" FH  Voice Coil 28  1024  9    76.5  (117.0) (139.5)
          Seagate      ST4144R 5¼" FH  Voice Coil 28  1024  9   (76.5)  117.0  (139.5)

          (Organized by unformatted, theoretical capacity, often reflected in the model number like ST251 = 51MB unformatted, 3650 = 50MB unformatted, ST4096 = 96MB unformatted, etc.)

          Most other Sysops laughed at my "toy" Miniscribe drives, preferring to commonly run things like ST251s, lots of ST277Rs, ST4096s and the highly-reliable Microscience HH1050 was very common with many Sysops because they were pretty much the most reliable disks you could get for heavy 24/7 thrashing in those days. A few ran Micropolis, which were also very reliable, but extremely expensive. The 1335 even has an entire fifth platter just for servo (9th head) but I'm not sure why they didn't put a head on the bottom of the top platter as well, they left one surface without a head for some reason, unlike the ST4096. Very strange. A few moved to ESDI disks, but SCSI came along and most larger systems moved to higher capacity SCSI disks like CDC Wren, CDC or Micropolis, or jumped directly to IDE.

          Ok, enough reminiscing on a wall of text, back to Real Work... :)

          • (Score: 2) by drussell on Wednesday May 01, @05:54PM (1 child)

            by drussell (2678) on Wednesday May 01, @05:54PM (#1355409) Journal

            Oops, I seem to have made at least one omission and one error in my "common sysop disk" chart above.

            I accidentally omitted the ST251R, and the ST277R is only 40ms access time, not 28ms like the ST251-1.
            (I don't believe there ever was such a thing as a ST277-1R.)

            The correct entries should read:

            Seagate      ST251R  5¼" HH  Stepper    40  820   4   (27.2)  41.6   (49.6)
            ...
            Seagate      ST277R  5¼" HH  Stepper    40  820   6   (40.8)  62.5   (74.5)

            • (Score: 2) by ShovelOperator1 on Thursday May 02, @09:56AM

              by ShovelOperator1 (18058) on Thursday May 02, @09:56AM (#1355525)

              It could be an RLL drive.
              The story of Seagate drives in my country is that we got them a bit late - almost no one could afford them in '85 or '86; only some computing centers bought them for a pile of money, imported them through another country, adapted their controllers, and used them to replace e.g. 5 or 6 removable-drive devices. Later, ST225s became more accessible, slowly entering PCs in the late 80s, and for cheaper PCs it was possible to buy them in '91 or even '92. My unit could be an RLL one.

              Thanks for the information about clones. I also found my notes on these non-genuine drives:
                - The casts were definitely Seagate; markings underneath the electronics suggested they were 1987-1988.
                - The stickers were missing, but there were no signs of removal. The famous hexagonal sticker was made on plain paper and contained nearly all the information, including a very long number that was not a serial number but corresponded to the "model". Genuine drives usually have lots of stamps in different places - there were none.
                - There are small patches in genuine drives' electronics, usually made with kynar wire, and they are generally on the "interface side". In "V1" units, modifications were more on the "HDA side" and included cut tracks too. I remember replacing the electronics in a failed drive of the same type; that didn't work. From my notes, I could dump a handful of sectors, but not all of them for every cylinder.
                - An interesting difference was the cardboard insert covering the electronics, with a small "window" for the termination plug. It was present during normal drive use.
                - If you try to format an ST225 past its cylinder limit, eventually a distinct sound comes from the assembly and you cannot format further. In V2 drives there had to be some mechanical difference, as there was no sound, only totally bad tracks. My record was "ploughing" a unit to 38MB.

              I suspect the last of your proposals:

              Number three, they could be genuine Seagate drives that were repaired/rebuilt by a third party repair facility and then sold on, or even rejects from an OEM which werethen sold on by an unscrupulous seller, or whatever. Very grey-market, but still Seagate-factory origin, just unsupported by the factory, only by the seller.

