posted by janrinok on Friday February 09, @07:52AM   Printer-friendly
from the it's-popular-all-of-a-sudden dept.

Everyone knows we should be doing backups. While the standard these days is an online backup (too expensive for a full backup; I use it for important, small things) or an external hard drive, SSDs can lose their data after a few years of not being powered on, and hard drives are complicated mechanical beasts susceptible to their grease hardening, bearings seizing, etc.

The best option if I want long-term backups is to grab good quality Blu-rays and a burner. Is anyone else out there doing this? How are you handling splitting up your data (who only has 32 GB of data these days?) Do you just have a dedicated spot on your hard drive to stage backups before burning, or are there software tricks on modern computers, like in the old days, to burn a single "file" across multiple discs? How far back a backup have you recovered, now that Blu-ray's going on 20 years old?


Original Submission

This discussion was created by janrinok (52) for logged-in users only, but has now been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Funny) by kazzie on Friday February 09, @08:03AM (9 children)

    by kazzie (5309) Subscriber Badge on Friday February 09, @08:03AM (#1343690)

    I use Laserdisc for all my crucial backups. It'll never become obsolete!

    • (Score: 1, Interesting) by Anonymous Coward on Friday February 09, @09:51AM (4 children)

      by Anonymous Coward on Friday February 09, @09:51AM (#1343697)

      Yeah, I don’t think there is any good solution for long-term archival (write once and forget), with no risk of medium/drive obsolescence or durability failure.
      The only convenient, low cost approach I can think of is a set of external multi-TB USB hard drives, and regular (to be defined according to one’s own circumstances) verification and copy to new drives.

      My PC rsyncs important files to my NAS every day. My NAS incrementally backs up important files to Backblaze every week. I manually back up my whole NAS to external hard drives every other month or so.
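
As an illustration, a minimal sketch of such a daily rsync job (cron assumed; the paths and host name are hypothetical):

    # crontab entry: mirror the important files to the NAS at 02:00 every day
    0 2 * * * rsync -a --delete /home/user/important/ nas:/backups/pc/important/

The --delete makes the NAS an exact mirror; drop it if deletions on the PC should leave the NAS copy alone.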

      • (Score: 4, Informative) by Unixnut on Friday February 09, @11:04AM (3 children)

        by Unixnut (5779) on Friday February 09, @11:04AM (#1343704)

        The general industry standard for "archival" type backup is tape. Even the "online backup" companies commit your backups to their tape system at some point.

However, few people want to invest in an LTO system for their home backups, myself included, so I make use of hard drives.

I have a ZFS RAID array where all my data is kept; every 6 months I do a snapshot and back up to drives with 7z compression. Each drive is labelled with when it was commissioned and has the date of its last backup written on it.
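
As a rough sketch of that cycle (the pool and path names are assumptions, not the actual setup):

    # take a recursive snapshot of the pool, then stream it through 7z onto a backup drive
    zfs snapshot -r tank@backup-2024H1
    zfs send -R tank@backup-2024H1 | 7z a -si /mnt/backup/tank-2024H1.7z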

The drives I use are the previous generation of my array. For example, my previous array was 4x6TB drives in raidz2, giving me 12TB total. I now have 4x10TB drives giving me 20TB total. I use the old 6TB drives as backups (24TB total). This lasts until I go through the upgrade cycle again (at which point the previous backup drives go on eBay). This is usually a 5-year cycle, which is well within the lifetime of a hard drive.

To prevent things like bearing seizure and other mechanical faults, it is good to spin up the drives every so often - in my case, every 6 months. It also keeps a fresh backup to hand, and the act of writing and verifying the data means I effectively test the drive for bad blocks while backing up (any drive that shows errors during backup gets junked and replaced). So far this system has served me well for more than a decade, during which time I have had to restore from backup 3 times (I had no issues with the restores).

        For extra important stuff, I recorded on CDs and DVDs. More than 20 years later they are readable, even the cheap "no-name" ones I bought on a spindle and didn't expect to last very long.

On Linux/Unix, I can split files using the split [man7.org] command. I use this when I have a tar archive that will not fit on a single disc.
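
For example, a minimal sketch for disc-sized pieces (the 24G chunk size is an assumption, chosen to fit single-layer 25 GB Blu-rays):

    # split the tar stream into numbered 24 GiB chunks, one per disc
    tar -cf - /data | split -b 24G -d - backup.tar.
    # after copying the chunks back off the discs, reassemble and unpack
    cat backup.tar.* | tar -xf -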

Beyond that, I never found bearing seizure to be an issue with hard drives. As an example, I recently found an old Quantum Fireball [wikipedia.org] 4.3GB drive in my attic (vintage 1997) and decided to fire it up. It was stuck, but after clicking a few times the drive motor spun up just fine and I was able to read/write data to it without problems (it had an old Linux OS on it). Shame I have no use for the drive now that I can get larger microSD cards.

        • (Score: 2, Informative) by Anonymous Coward on Friday February 09, @11:18AM (2 children)

          by Anonymous Coward on Friday February 09, @11:18AM (#1343706)

          > For extra important stuff, I recorded on CDs and DVDs

More specifically, those were probably not CD or DVD but CD-R or CD-RW and DVD-R or DVD-RW. Chemically, CD, CD-R, and CD-RW are very different, which means very different life spans.

          > To prevent things like bearing seizure and other mechanical faults it is good to spin up the drives every so often.

          In addition to spinning up the tape drives periodically, the tapes themselves need to be retensioned every two years or so. On top of that, they need to be stored in an unpolluted environment at optimal humidity and temperature to be reliable for a decade. At least that's what my time at a national archive taught me.

Eventually the lubricants on the tape evaporate and you get too much wear during each run. Much later, the adhesives which attach the magnetic layer to the plastic become brittle and don't stick any more. At that point, increasing numbers of very tiny pieces of the magnetic layer flake off with each use, and inevitably some of your data with them. The solution includes periodic migration, and the trick is not to replace the media when they go bad but to replace them before they go bad. Periodically migrating the data to new media is an essential part of any medium- or long-term strategy.

          • (Score: 4, Informative) by Unixnut on Friday February 09, @01:29PM (1 child)

            by Unixnut (5779) on Friday February 09, @01:29PM (#1343714)

Agreed, any backup system needs to take into account the lifetime of the backup media. AFAIR for LTO it is 30 years, assuming it is stored within the given parameters. No matter what you back up to, you will need to cycle through to new media at some point. The difference is how long you can go before you need to do it.

It is also a good idea to cycle archives to newer standards and technologies, as software itself moves forward with time. This is more a worry for proprietary file formats and standards and Windows software, as I find the *nix world (even some of the proprietary OSes like Solaris) generally puts a lot of stock into backwards compatibility for old file formats.

As all my data is kept on the RAID array, I don't have to worry about old formats. Every once in a while I update my backup scripts to use new technologies (e.g. 7z instead of bzip2) and the next backup cycle gets refreshed with the new format. The format of my backups is thus always one that was readable within the last 6 months.

I've only used CD-Rs/DVD-Rs for archival, and for that I picked media with at least a 10-year guarantee. These were more expensive than the cheap ones I bought on the spindle, but as I found out, some of the cheap CD-Rs I burned back then are still readable (those were not done for backups, just general use, but I have kept them all this time).

Some of the really expensive CD/DVD media apparently has an archival lifetime of 100+ years, but there is no way to empirically confirm that until 100 years have passed (assuming they even make drives capable of reading it then, or that you can find old drives that still work).

            I will point out that I do not keep any of my backups in controlled environmental conditions. The drives and CDs are kept in sealed plastic cases to prevent dust ingress but otherwise are subject to the same yearly temperature and humidity cycles as the house.

I have considered some kind of disc media like Blu-ray for a long-term archival backup. However, the best you can do is quad-layer Blu-ray at 128GB per disc. Assuming my current 10TB of storage space, I would need 80 Blu-rays for each backup. It is a bit too much effort to do manually and would take quite a bit longer than just putting a drive in the caddy and letting it back up (it takes about 3 days to back up my array to the drives). Plus, from what I remember, buying the 80 discs and a drive capable of writing to them was more expensive than just buying new drives, while I already had old drives I could re-purpose for backups.

            • (Score: 2, Interesting) by Anonymous Coward on Friday February 09, @01:45PM

              by Anonymous Coward on Friday February 09, @01:45PM (#1343717)

              > I will point out that I do not keep any of my backups in controlled environmental conditions. The drives and CDs are kept in sealed plastic cases to prevent dust ingress but otherwise are subject to the same yearly temperature and humidity cycles as the house.

If you are near a highway, factory, or downtown, the NO2 and other corrosive pollutants will work their way into the metal layer from the edge. The protective plastic layer on the top does not go all the way out to the edge of the metal layer, so the gasses can cause trouble even in low concentrations over a longer period of time.

    • (Score: 2, Insightful) by Anonymous Coward on Friday February 09, @01:28PM

      by Anonymous Coward on Friday February 09, @01:28PM (#1343713)

My optical backup strategy just worked on a small file saved in 1982 and not touched until a few days ago. It was on paper, printed by a daisy wheel printer connected to a Z-80 CP/M computer that I had back then, and it OCR'd perfectly. As long as you have the floor space, metal filing cabinets are hard to beat.

      Try that with a 42 year old optical disk!

    • (Score: 4, Funny) by DannyB on Friday February 09, @02:49PM (1 child)

      by DannyB (5839) Subscriber Badge on Friday February 09, @02:49PM (#1343721) Journal

      Forget Laserdisc. I use cheap USB sticks for all my backups. They are cheap. They are Op Tickle. They come in huge 15 TB advertised sizes. What's not to love?

      --
      When trying to solve a problem don't ask who suffers from the problem, ask who profits from the problem.
      • (Score: 2) by RS3 on Friday February 09, @03:49PM

        by RS3 (6367) on Friday February 09, @03:49PM (#1343734)

        Another advantage: that USB stick's 16GB chip uses far less energy than the 15TB of chips would use. How very green of you!

    • (Score: 3, Interesting) by c0lo on Saturday February 10, @07:00AM

      by c0lo (156) Subscriber Badge on Saturday February 10, @07:00AM (#1343813) Journal

      Just stare long enough at the documents/pictures/whatevs then commit them to your wetware memory - you'll always carry them with you.
      Just avoid Pharmakom's attention and avoid street preachers.

      --
      https://www.youtube.com/watch?v=aoFiw2jMy-0 https://soylentnews.org/~MichaelDavidCrawford
  • (Score: 5, Interesting) by Mojibake Tengu on Friday February 09, @08:21AM (17 children)

    by Mojibake Tengu (8598) on Friday February 09, @08:21AM (#1343693) Journal

    Splitting user data is a fundamental initial strategy: I keep myself fragmented into several Unix users (up to a dozen) by the logical purpose of activities.
    That helps for single fragment migrations between physical machines (or whole platforms) and also for backups.
It also serves as a security fence (not a security barrier) and prevents my own mistakes.

    --
    Respect Authorities. Know your social status. Woke responsibly.
    • (Score: 4, Interesting) by JoeMerchant on Friday February 09, @10:57AM (4 children)

      by JoeMerchant (3937) on Friday February 09, @10:57AM (#1343700)

      I never did much serious optical backup. By the time writable DVD came around, hard drives and later SSDs were cheap enough to run redundant copies live. I should do better geographic separation of the redundant backups than I do, but so far that lack hasn't bitten me.

      Back in the days of tape and floppies we would do triple or more redundant backups. Tape would occasionally triple fail on us, maybe 0.2% of the time. Floppies were more like a source repository with weekly full backups thrown in a fireproof safe, so we could go back to any "weekly commit" if desired, though like modern git we did maintain the occasional branch but never really went back to the tags.

      --
      🌻🌻 [google.com]
      • (Score: 5, Interesting) by RS3 on Friday February 09, @04:00PM (3 children)

        by RS3 (6367) on Friday February 09, @04:00PM (#1343736)

Back in the day I used floppies quite a bit and always found them reliable. However, the few times I've used them in the last 10-20 years, I could barely read them. I have an assortment of floppy drives and have spent (wasted) quite a bit of time swapping out drives just to read an important floppy.

        That said, as a lifelong hardware hacker I know how critical the mechanicals and alignments are in floppy drives, and how easily they can drift. I always meant to get alignment disks and software, but never did, and the need is waning, almost nil.

In the 90s, the medical equipment company I worked for had a strong European presence. They _required_ optical backups, so the company got into CD "burners" early. IIRC they were Plasmon drives that cost in the thousands and wrote at 1X. I burned several CDs that are still readable.

        We also used "magneto-optical" drives.

        Otherwise we used DAT tape for US market.

        What size/format tapes did you have problems with? 9-track reel-to-reel? :)

        Only half-kidding- years ago I bought a '60s 9-track reel-to-reel tape drive. It mounts the 2 reels concentrically. All discrete transistor logic. It ran and moved the tape. I never messed with it further, but it's still sitting on a shelf in the basement. Quite small unit- 19" rack, maybe 22" tall. One of hundreds of future projects...

        • (Score: 3, Interesting) by JoeMerchant on Saturday February 10, @01:42AM

          by JoeMerchant (3937) on Saturday February 10, @01:42AM (#1343799)

My 16-bit Atari (I forget the designation) had a built-in floppy with a belt drive. It's remarkable how quickly those things stopped working, and how irrelevant that was back in the day, when the whole computer was deeply obsolete within five years.

          --
          🌻🌻 [google.com]
        • (Score: 3, Interesting) by turgid on Sunday February 11, @05:33PM (1 child)

          by turgid (4318) Subscriber Badge on Sunday February 11, @05:33PM (#1343986) Journal

          Well over 20 years ago I had to buy a replacement 3.5" floppy drive for my PC because it broke and I still had some backups on floppy that I needed. I copied everything off floppy and onto CD. When I bought the replacement drive, I bought a spare which is still in the original wrapping, just in case I ever needed such a thing again.

          When I learned about things like dd, I always made sure that whatever medium something was on, if it was important, I'd always make a byte-for-byte copy with dd. I was able once to restore a Windows laptop to factory default (Windows not configured) because I had dd'd the disk image (and piped it through gzip) onto a spare external drive, having booted from a Linux USB stick.
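
That trick looks roughly like this (device and file names are assumptions):

    # image the whole disk from a Linux live USB, compressing on the fly
    dd if=/dev/sda bs=4M status=progress | gzip -c > /mnt/external/laptop.img.gz
    # restore the factory image later
    gunzip -c /mnt/external/laptop.img.gz | dd of=/dev/sda bs=4M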

          I've had PCs of one sort or another for nearly 30 years. I keep offline backups, but I also keep live copies on multiple hard disks which I copy between machines with rsync. Over the years, as I upgrade, and replace hard drives, I keep the copies around on the new disks. Disks are cheap. Buy lots of disks.

          • (Score: 3, Insightful) by RS3 on Sunday February 11, @09:28PM

            by RS3 (6367) on Sunday February 11, @09:28PM (#1344001)

            AKA "drive image". Yes, been doing them forever too.

            However, sometimes a drive has problems, and dd isn't fault tolerant. Then I found "ddrescue". You might like it, or at least someday be relieved that it exists.
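
For reference, a typical two-pass ddrescue pattern (device and file names are assumptions):

    # pass 1: grab the easy data fast, skipping bad areas (-n); the map file lets runs resume
    ddrescue -n /dev/sda rescued.img rescue.map
    # pass 2: retry the bad areas a few times
    ddrescue -r3 /dev/sda rescued.img rescue.map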

            Perhaps you know of the "loop" filesystem driver? It allows you to mount an image file (drive or just partition image) as a mounted filesystem. You can then save, delete, edit files just like a normal r/w filesystem. Of course you can dd it back to physical media too.
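
For example (image names and mount point are assumptions):

    # mount a partition image read/write through the loop driver
    mount -o loop partition.img /mnt/img
    # a whole-disk image needs the byte offset of the partition,
    # e.g. a partition starting at sector 2048 with 512-byte sectors:
    mount -o loop,offset=$((2048 * 512)) disk.img /mnt/img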

    • (Score: 3, Insightful) by driverless on Friday February 09, @11:36AM (11 children)

      by driverless (4770) on Friday February 09, @11:36AM (#1343708)

That's what I do too. I make a monthly backup of data from work in progress - so not every byte I can scrape together from the entire system, just the data I'm working on - to two CD-Rs from different manufacturers, using two different CD writer programs. They're cheap enough that I can just burn through two a month, and I've got monthly snapshots should I need to roll back to a certain point in time. In my case it's also a read-only audit log of work performed over time.

      In case of a worst-case total system loss, reinstall the OS, apt-get new copies of everything I need, copy the data back off the last CD burned, and restore anything else from daily incrementals.

      • (Score: 3, Informative) by JoeMerchant on Friday February 09, @01:30PM (10 children)

        by JoeMerchant (3937) on Friday February 09, @01:30PM (#1343715)

        I had a daily driver laptop hard drive fail (around 1995) with about 3 weeks of "new work" on it since the last backup.

        The replacement laptop was on-hand, so no waiting for it to arrive - that could have been a week of downtime right there if ordering "cold."

        Installation/configuration of the tools suite took about 2 days in those days...

Restore from the 3-week-old backup took 5 minutes, and it only took about 3 days to re-create the valuable parts of the previous 3 weeks of research and development work.

        --
        🌻🌻 [google.com]
        • (Score: 2) by Freeman on Friday February 09, @03:39PM (9 children)

          by Freeman (732) on Friday February 09, @03:39PM (#1343731) Journal

Yeah, full backups aren't exactly necessary. It's much better to have the important data you need backed up in triplicate, in multiple formats, and in multiple locations. Otherwise, it may be annoying to re-download your Steam catalog/tools/drivers/etc., but you're not losing data there. It's a lot cheaper to maintain backups if you're not also trying to keep your OS and other random software backed up as well.

          --
          Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
          • (Score: 2) by RS3 on Friday February 09, @04:06PM (8 children)

            by RS3 (6367) on Friday February 09, @04:06PM (#1343737)

Early 90s I worked at a "systems integrator" - an industrial automation engineering "job-shop" company. Clients were all top-tier. I did various stuff there. One task was to install and configure servers and workstations. One server was OS/2. It had a tape drive.

            The backup software "watched" the filesystem and would do immediate incremental backups as tagged files were changed. I have no idea what backup software that was.

            I've always wanted to find that kind of functionality for backup software. Anyone know of such a thing?

            • (Score: 5, Funny) by sgleysti on Friday February 09, @04:40PM (3 children)

              by sgleysti (56) Subscriber Badge on Friday February 09, @04:40PM (#1343744)

              OneDrive.

              <ducks>

              • (Score: 2) by RS3 on Friday February 09, @05:36PM (2 children)

                by RS3 (6367) on Friday February 09, @05:36PM (#1343747)

                Hmm. Interesting idea. Will it work on my Linux machines? :)

                Hello? Hello?? Are you down there? I swear I saw you duck into a rabbit hole.

                • (Score: 2, Informative) by Runaway1956 on Friday February 09, @06:18PM (1 child)

                  by Runaway1956 (2926) Subscriber Badge on Friday February 09, @06:18PM (#1343753) Journal

Not sure about OneDrive, but Gdrive doesn't care about the operating system. I can stash things from Linux, Mac, or Windows. Sorry, I can't help any further - if that is any help - I've never used my Gdrive for backups.

                  • (Score: 2) by RS3 on Friday February 09, @06:39PM

                    by RS3 (6367) on Friday February 09, @06:39PM (#1343758)

                    I was being facetious / sarcastic.

                    On a serious note, I'm not dead-set against "cloud" storage, but I'd encrypt anything I put there.

                    Also, don't count on it being available on demand. There've been many infamous cloud outages over the years. I'd do local backups plus cloud.

                    On a practical note: you could do a OneDrive backup where you have Windows machine as an interstitial translator between Linux and OneDrive.

            • (Score: 3, Informative) by turgid on Saturday February 10, @10:00AM (2 children)

              by turgid (4318) Subscriber Badge on Saturday February 10, @10:00AM (#1343826) Journal

              Nearly 20 years ago I worked for a storage company that did a NAS product with an archival system to optical disk. The server ran Linux with XFS and on top of that there was a special filesystem which did just that, and had been developed for tape archival. I will try to remember what that filesystem was called.

              The special filesystem ran on top of XFS (I think that wasn't important, I think it would work with any Linux filesystem) and it detected changes to the underlying filesystem and called back out into user space where you could put your code to handle the events, for example to migrate a file from the RAID disks to the optical disk storage. So when a new file was written, after a period of time, the migration happened. That way, for a few hours or days or whatever, the file was still on the RAID and you could access it instantaneously. After that time, it had been fully migrated to the optical storage (two separate copies on long-life media). It was still listed in the directory, but when you went to access the file, it wasn't on the RAID any more so it would go and pick the right optical disk from the jukebox and put it in the drive, mount it and you got access to your file.

              • (Score: 2) by RS3 on Saturday February 10, @02:30PM (1 child)

                by RS3 (6367) on Saturday February 10, @02:30PM (#1343854)

                Very interesting. Reminds me of computing, before my time (50s, 60s, 70s?) where stuff would be "out" on tape, cards, whatever, and you'd put in a request to "restore the data set" or some such nomenclature.

I've always regretted not taking notes on the aforementioned OS/2 server's backup software. I was just a grunt at that point: I was pointed at computers and boxes of hardware and software, and I put them together, installed and configured stuff, etc. I don't even remember who the customer was. Someone big, like a Coca-Cola, Sherwin-Williams, ... I just remember that for anything you did on the server or a workstation that changed a file on the server (in tagged directories, of course), the tape drive would light up immediately. Quite sure it was a Travan [wikipedia.org] tape drive.

At the time it just seemed quite normal that it would behave that way, and ever since I've been surprised that I've never seen that behavior in any backup software. I'll have to do some research. It doesn't seem all that difficult to hook into the filesystem manager, get a "hey, this file changed" message, and do a copy to tape or whatever. Probably better to another drive / filesystem that's then further backed up.
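
On Linux that hook exists today as inotify; a minimal sketch with inotify-tools (paths are assumptions) that mirrors each file as soon as it is written:

    # watch a tree and copy changed files to the backup disk immediately
    inotifywait -m -r -e close_write --format '%w%f' /data |
    while read -r path; do
        rsync -aR "$path" /mnt/backup/    # -R recreates the full source path under the target
    done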

            • (Score: 3, Informative) by driverless on Sunday February 11, @01:03AM

              by driverless (4770) on Sunday February 11, @01:03AM (#1343921)

Under Windows I can recommend a program called SyncBack. It's shareware-ish, so you can get a free version as well as reasonably-priced fancier paid ones; for most people the free version should be fine. I've set it up for friends & family who would otherwise never make backups. It watches for changes and at configurable intervals copies changed files to another location (USB key, external storage, server, cloud, etc.), with the option to keep multiple generations of changes, so you can keep (say) five generations of each file. I use this in between the monthly backups, with the same redundant copies made in case one of the servers/media types fails.

  • (Score: 5, Informative) by canopic jug on Friday February 09, @08:47AM (9 children)

    by canopic jug (3949) Subscriber Badge on Friday February 09, @08:47AM (#1343695) Journal

    While the standard these days is an online backup [...]

If it's not disconnected, offline, it is just another copy and not a backup. That is the reason you must rotate multiple media: during the time a backup is plugged in and transferring, it is not a backup but just another copy.

I used CD-R for quite a few years, back when all my important files would still fit there. However, poor quality CD-R does not last long. I found that out the hard way and lost a few important files that way. Nowadays for the SOHO, I use removable hard drives, spinning rust, formatted as EXT4. Rsync with --dest-dir is the tool of choice for those incremental backups. However, I really ought to reconsider EXT4 and go with XFS with copy-on-write enabled, or figure out Btrfs or OpenZFS in that context, so I can spot corrupted files in time to restore them from another copy.

    --
    Money is not free speech. Elections should not be auctions.
    • (Score: 2) by JoeMerchant on Friday February 09, @11:02AM

      by JoeMerchant (3937) on Friday February 09, @11:02AM (#1343702)

I have tried to employ OpenZFS a few times over recent years. I have used it without too much pain on my daily driver desktop, but it still has (had?) an unresolved, unacceptable wrinkle when interacting with VirtualBox on Ubuntu 22.04.

      --
      🌻🌻 [google.com]
    • (Score: 3, Interesting) by vux984 on Friday February 09, @09:31PM (5 children)

      by vux984 (5045) on Friday February 09, @09:31PM (#1343774)

A backup to an online provider (e.g. cloud backup) via agent software, with multiple restore points, a retention policy, and object locking enabled IS a backup; and I'd argue that for most SOHO and small business cases it is the BEST option currently available, by a long shot.

The alternative options, from optical to tape to removable hard drives, are all less reliable and more susceptible to problems. In practice, you are more likely to 'discover' that the data you need isn't recoverable from physical media when you need it.*

* Again - these comments apply to SOHO and small businesses -- where there isn't an onsite dedicated team managing backups, and the backups are in a box on a shelf in a closet, often rotated by a secretary who dutifully feeds tapes into a tape drive that failed 6 months ago, or by a secretary who left the company 6 months ago with nobody tasked to manage it since.

Ideally you'd do both physical and cloud, but in my experience, if you had to pick one, data stored with a suitable cloud provider is much more reliably recoverable than from any SOHO/small-business-managed physical infrastructure.

      • (Score: 1, Interesting) by Anonymous Coward on Friday February 09, @10:37PM (3 children)

        by Anonymous Coward on Friday February 09, @10:37PM (#1343782)

IT is not immune to that either. One place I worked at had 600 employees, including three full-time IT staff. The backup was daily, with a company paid to pick up and store the tapes offsite. They ran 6 daily tapes / 5 Friday tapes / 12 end-of-month tapes in the cycle. The tape backed up the two main databases, then rewound to do a verification read, then a final rewind. The verify result was reported to the IT manager.

        The middle one of the three IT guys decided that there was plenty of room on the tape so he added stuff he was working on to the backup. The new sequence went Write1, Write2, Rewind, Verify, Report to IT manager, Rewind, Write IT guy's machine, rewind.

A year later the system went down hard and needed a full restore. They were very lucky they only lost four months, because they happened to have a spare backup in the office from when they were doing an upgrade.

        • (Score: 2) by RS3 on Saturday February 10, @02:51PM (2 children)

          by RS3 (6367) on Saturday February 10, @02:51PM (#1343856)

I don't know whether to laugh or cry. I'm sure the middle guy learned his lesson, but I'd never trust him. I'm sure there are many other vocations he'd be much better at.

          • (Score: 3, Insightful) by vux984 on Monday February 12, @11:34PM (1 child)

            by vux984 (5045) on Monday February 12, @11:34PM (#1344157)

            It's a good illustration of the fundamental issue with backups -- the only data you really actually know you can recover is data you *just* recovered.

            So you verified the backup after you wrote it. Maybe that was six months ago. Unless you verified it again just now, then you really don't know the data is still there and readable.

And worse, 99% of so-called 'verified' backups are just the system verifying that the data that was just written can be read back. It doesn't actually verify that the data you NEED was ever written at all. The only way to have a hope of catching this is to do a full restore to a new test-recovery environment, with the applications that use the data installed (and in many cases even licensed), and have the actual users run a complete sweep of the functionality to make sure everything works.

I've seen it countless times, including once where it was pretty catastrophic. The backup admin had been advised that he needed to back up the MS SQL Server databases, and did so diligently, and even verified them, and manually tested that he could recover them to another SQL Server and access the data, and that the number of records was correct, etc. But it turned out there was also a separate data folder that contained another database using a different embedded data engine. FirebirdSQL embedded maybe, I don't recall exactly what it was. But it was a crucial part of the system.

When the server eventually crashed, the backup admin dutifully recovered the SQL Server backups and passed them back to the team the application vendor had rebuilding the environment, and it was ONLY THEN discovered that nobody had ever specified that an additional data folder also needed to be backed up. The application vendor had only specified that the SQL Server database needed to be backed up. The data folder was part of another component that integrated with the main application, so its requirements were 'separate', and its 'separate' requirements were, near as we can tell, never properly disclosed by the vendor. Although since we'd never done a full recovery, we arguably had some responsibility as well -- but given that our IT team needed the app vendor involved for any sort of real environment recovery, it's not something that could really have been done by our team in isolation. We'd have needed to commission the vendor to do a special project of recovering everything to a test environment (with them billing us heavy consulting fees to do all the work).

I've also seen backups fail because key defaults of the backup software were left in place. I wasn't involved with either of these myself, but I know of a site that was doing 'full backups' where the backup software had a default exclusion for files over 2GB, so the VHD files (virtual disk images) of all the virtual machines they were using for testing etc. had simply been omitted from the backups the entire time, and all had to be rebuilt from scratch.

I know of another case where the backups were excluding exe, dll, and msi files by default -- but they were doing software development, producing exe, dll, and msi files -- so all their source was present in the backups, while their entire library of installers for previous builds had simply been omitted. In principle they could regenerate them from the source repos, as each version was tagged in history, but it was a huge PITA: stuff like the code-signing certificates they'd originally used to sign the files were long expired and gone, the tooling for the older versions was older than what they used now and wasn't running on the same version of Windows they'd originally used, and even those sorts of differences ended up causing issues trying to create replacement build artifacts and installers for the old versions.

            • (Score: 2) by RS3 on Tuesday February 13, @01:17AM

              by RS3 (6367) on Tuesday February 13, @01:17AM (#1344167)

Yes, very good points, and what a nightmare. It got me thinking about always doing _full_ backups, plus incrementals, plus maybe some kind of semi-automated system that would restore everything to some spare computers, do mass file compares, etc. All of which could fail, and/or be a bugger to diagnose and fix.

              Then you got me thinking: what if hard disks were made much more reliably. Much more. It can be done. They'd be slower, less byte-dense, but in the long run far less costly. You'd still use RAID, and still do backups.

The webserver I inherited admin of had been RAID 1. Not my normal preference, but no need to change things. Well, a motherboard hardware failure caused some kind of software catastrophe that trashed both drives' data. So much for mirrors.

      • (Score: 2) by canopic jug on Saturday February 10, @08:02AM

        by canopic jug (3949) Subscriber Badge on Saturday February 10, @08:02AM (#1343816) Journal

        The alternative options, from optical, to tape, to removable hard drives are all less reliable and more susceptible to problems. In practice, you are more likely to 'discover' the data you need isn't recoverable from physical media when you need it.*

        Temperature swings will quickly ruin tapes. Extreme ranges of temperatures will do so even faster. Physical knocks and vibrations will as well.

At one place I worked a long time ago, one of the senior people often bragged about having tapes of gopher, web, and other server logs going back to the dawn of the web. Once we started getting into log analysis and studying trends, I asked where the tapes were and for access to the old files. It turned out that the guy responsible for the backups had been keeping them in the trunk of his car, year round, sun or snow. I didn't need to ask further, changed the topic, and never brought it up again. Coincidentally, those particular tapes were never mentioned again after that.

        --
        Money is not free speech. Elections should not be auctions.
    • (Score: 3, Touché) by pdfernhout on Saturday February 10, @02:58PM (1 child)

      by pdfernhout (5984) on Saturday February 10, @02:58PM (#1343859) Homepage
      --
      The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.
      • (Score: 3, Informative) by canopic jug on Saturday February 10, @03:46PM

        by canopic jug (3949) Subscriber Badge on Saturday February 10, @03:46PM (#1343863) Journal

        Correction. I should not type from memory. The correct option is --link-dest=DIR

        One of the ways I use it is to point to the previous backup's directory. That way I can have multiple directories with earlier versions of whatever was in place.

        The other option I use is --delete, since I reuse the directories. But that one must be used with care.
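
A sketch of that pattern (dates and paths are assumptions): unchanged files become hard links into the previous snapshot, so every dated directory looks like a full backup but only changed files cost space.

    rsync -a --delete --link-dest=/backups/2024-02-09 /home/ /backups/2024-02-10/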

        --
        Money is not free speech. Elections should not be auctions.
  • (Score: 2, Insightful) by shrewdsheep on Friday February 09, @10:22AM (3 children)

    by shrewdsheep (5215) on Friday February 09, @10:22AM (#1343698)

at the suggestion of using optical media for backups. I posted my experience with some optical media recently: https://soylentnews.org/comments.pl?sid=59722&cid=1343320 [soylentnews.org]. Always remember that data has to be kept fresh and has to be copied at regular intervals, whatever media you use. Are you willing to copy your Blu-rays every other year (that would be my personal safety margin)? I have to admit, though, that I have no experience with Blu-ray (I do not even own a drive).

    • (Score: 1) by shrewdsheep on Friday February 09, @11:10AM (2 children)

      by shrewdsheep (5215) on Friday February 09, @11:10AM (#1343705)

      Sorry, wrong post. Use instead: https://soylentnews.org/comments.pl?sid=59681&cid=1342945 [soylentnews.org].

      • (Score: 4, Informative) by RS3 on Friday February 09, @04:20PM (1 child)

        by RS3 (6367) on Friday February 09, @04:20PM (#1343740)

I'll reply here: maybe many (most) have found this to be true, but over the years I've found that the faster you write an optical disc, the less reliable it is, or it's completely unreadable. Obviously there are many factors, including the quality of the optical media. I've had people say "the purple ones are the best" and others say "the purple ones are horrible". I've had name-brand ones fail at high rates, and no-names 100% perfect, still usable many years later. DVD+R or DVD-R? I dunno on that one.

Regarding burn speed, one acid test I use: I still have a very old Philips CD player (audio - it works, I'm used to it). It will skip on, or not play at all, a CD burned at 48 or 52X. Some newer media won't let you burn at 2 or 4X. I have some older burners that burn at 2, 4, 8X. All that said, I haven't burned a CD in many years, but I still have a few audio projects (mix/master) that need to go to CD masters, as well as mp3 or whatever else people want.

        All of the above absolutely applies to DVD. I have no problem burning at 2X, or even 1X if the burner will do that. Let it run overnight, or whenever- I have machines dedicated to burning tasks- it can take as long as it wants.

No experience with Blu-ray, but I'm sure the same principles apply: good media (run tests on a sample from each batch) and the slowest possible burn speed.
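
For what it's worth, forcing a slow burn from the command line looks roughly like this (device path is an assumption):

    # CD-R at 4x with wodim (cdrkit)
    wodim -v speed=4 dev=/dev/sr0 backup.iso
    # DVD/BD image at low speed with growisofs (dvd+rw-tools)
    growisofs -speed=2 -Z /dev/sr0=backup.iso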

        • (Score: 3, Informative) by deimtee on Friday February 09, @10:42PM

          by deimtee (3272) on Friday February 09, @10:42PM (#1343784) Journal

          I will absolutely back you on the burning speed. I always ran them at 2X or 4X even if the drive said it could do 16 or 32. Disks occasionally failed straight out of the drive if you ran them at the max speed. Most of the slow burn ones are still readable, in some cases over twenty years later.

          --
          If you cough while drinking cheap red wine it really cleans out your sinuses.
  • (Score: 5, Insightful) by darkfeline on Friday February 09, @11:02AM

    by darkfeline (1030) on Friday February 09, @11:02AM (#1343703) Homepage

    The implication is that you're going to make the backup and stash it somewhere for a couple of years. You've already fucked up.

    If you aren't testing it regularly, you don't have any backups. If you are testing it regularly, the medium isn't particularly important because you can just replace it when it goes bad. When, not if. It will go bad, especially if you aren't testing it regularly.

    --
    Join the SDF Public Access UNIX System today!
  • (Score: 5, Insightful) by anubi on Friday February 09, @11:25AM (11 children)

    by anubi (2828) on Friday February 09, @11:25AM (#1343707) Journal

    I have been doing my Arduino, Eagle, and personal stuff for the last fifteen years or so on a Compaq laptop I bought at WalMart.

First, I was cloning (Clonezilla) to backup drives, but later, when I found these same laptops on eBay for 50 bucks or so, I bought a dozen of them, identical to mine, cloned them, and slowly rotate through them. This way, I always have backups for both hardware and software, as they're no longer supported. If anything fails, I have a source of spare parts. They only see my local intranet, as I FTP my files around as needed.

I had experienced floppy, hard disk, and optical bit rot, so hopefully I have frozen my toolset enough that it will last me the rest of my life.

I now use my Android phone to browse the internet. I simply considered trying to keep up with the Joneses too time-consuming and too risky for external tampering (whether by Microsoft subscription enforcement, forced obsolescence via internet connection enforcement, or malicious hackers encrypting my files). My system contains stuff I did 40 years ago. I still support it. I don't mess with businessmen that only see two or three years out. If I am going to build it, it will work far beyond my lifetime. I also still have and use my Grandpa's tools! (His old Triplett 630, albeit the high-ohms scale no longer works until I make some lithium coin cell holders that mimic a Burgess U20 30-volt battery.)

I don't know why we make so much work for ourselves, filling up landfills with toxic electronic waste that still works, and making disposable packaging from nearly eternal plastics. If people don't start embracing design for maintainability, I fear we will all end up living in the dump, cuz there's nowhere left to put it all!

    --
    "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
    • (Score: 0) by Anonymous Coward on Friday February 09, @10:52PM (10 children)

      by Anonymous Coward on Friday February 09, @10:52PM (#1343787)

      Not that I want to support amazon, but you can probably find it somewhere else as well.
      https://www.amazon.com/Exell-Battery-413A-Alkaline-BLR123/dp/B00BNF7R98 [amazon.com]

      • (Score: 2, Interesting) by anubi on Saturday February 10, @12:29AM (9 children)

        by anubi (2828) on Saturday February 10, @12:29AM (#1343795) Journal

        Thanks. I sure wish they offered them in Lithium.

        I have had really bad luck with alkaline batteries in things that rarely need batteries changed. They leak.

And I don't discover it until the thing stops working, only to find that what used to be the battery is now corrosive green goo, and if I wanna do it right, I should replate the connectors. For now, I have been cheating by cleaning the connector pads and dabbing them with the silver conductive paint I use to repair PCBs and old phenolic cheapie consumer-grade volume controls where the solder lugs are braided onto the resistive element.

        --
        "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
        • (Score: 0) by Anonymous Coward on Saturday February 10, @02:12AM (8 children)

          by Anonymous Coward on Saturday February 10, @02:12AM (#1343801)

          Well you could always just stack 10x CR2032's.

          Me, I wish you could still buy Mercury batteries for old cameras. Even if they made you send in the flat one before they would sell you a new one, which would mean zero extra mercury out there, and would probably result in cleaning up old batteries as you'd get a secondary market for the dead ones.

          • (Score: 2) by RS3 on Saturday February 10, @04:12AM (7 children)

            by RS3 (6367) on Saturday February 10, @04:12AM (#1343807)

            Your 10 stack is probably a good idea.

            Curious: what cameras do you need mercury batteries for? Why specifically mercury? Is it the size you can't find, or voltage, or?

            • (Score: 1, Informative) by Anonymous Coward on Saturday February 10, @01:35PM (6 children)

              by Anonymous Coward on Saturday February 10, @01:35PM (#1343848)

Old Konicas for me. But any old camera, really, that had a light sensor. They used a mercury cell as a voltage reference because the voltage is exceptionally stable, 1.35 volts regardless of state of charge, with a service life of many years. The only options now are zinc-air, same voltage but a lifetime of several weeks once activated, or getting the camera re-calibrated to use silver oxide cells.

              • (Score: 2) by RS3 on Sunday February 11, @09:37PM (5 children)

                by RS3 (6367) on Sunday February 11, @09:37PM (#1344003)

Good stuff. So there's a market for me to design a replacement cell: a smaller battery cell with higher inherent voltage plus a built-in voltage regulator. Hmmm. Electrically trivial, of course, and it would be temperature compensated (stable).

                Or just mod the light meter adding in the V-reg.

                I still have a Nikon F with light meter and one or two batteries, probably mercury 1.35V? I haven't even looked at it in years.

                • (Score: 0) by Anonymous Coward on Monday February 12, @07:35AM (4 children)

                  by Anonymous Coward on Monday February 12, @07:35AM (#1344048)

If you can get the price down, great, but they're out there already; they just cost about forty bucks each. They consist of a shell the right size (a PX625 battery case works well), a Schottky diode to drop the voltage, and a recess to hold a small silver oxide cell.

I have quite a few Konicas, but at some point in my "copious free time" I will probably pick my favorite to use, open it up and put a couple of Schottkys in the battery lead. (Konicas take two batteries, for 2.70 V.) That will let me use the full-size AgO replacements, and they'll last a lot longer than the little ones that fit in the adapters.

                  • (Score: 2) by RS3 on Monday February 12, @02:32PM (3 children)

                    by RS3 (6367) on Monday February 12, @02:32PM (#1344086)

                    Oh, interesting. Are the silver oxides long-term stable enough? I was thinking that if hardware hack mods are going to be done, might as well throw in a regulator.

                    Sort of similar- ever see those battery life extenders? They use switched-capacitor boost to get more out of a dying battery. I first thought of those and/or voltage multiplier for the 30V battery problem. But now that I'm on it, an inductive booster would probably work best for the 30V battery substitute, but 9 coin cells would work well for sure.

                    • (Score: 1, Informative) by Anonymous Coward on Monday February 12, @09:02PM

                      by Anonymous Coward on Monday February 12, @09:02PM (#1344140)

                      Yeah they are stable enough, but they are 1.55 volts. That's enough to throw the light meter way off. You can compensate by lying to the camera about the film speed or just shooting in full manual, but the response of the CdS cell is non-linear and it takes practice and experience to get it right for different light levels. The difference is somewhere between 1 and 4 stops, depending.

                    • (Score: 1, Interesting) by Anonymous Coward on Monday February 12, @09:08PM (1 child)

                      by Anonymous Coward on Monday February 12, @09:08PM (#1344141)

Thinking on yours: to get that form factor you might need to go for a smaller cell, unless there is spare space inside the case. It's only 0.62 x 0.62 x 1 inches. If there is room, you could go for a stack of 12 with a regulator and adjust it for accuracy. You'd probably want to macgyver a switch in though, if you only rarely use it. I don't know the internals; maybe you could put the regulator after an existing switch?

                      • (Score: 2) by RS3 on Monday February 12, @11:19PM

                        by RS3 (6367) on Monday February 12, @11:19PM (#1344154)

Interesting, I didn't know the 30V battery was that small. That's tiny. Most small voltage regulators have negligible quiescent current draw, so I wouldn't worry about it. But I'm betting there's room in the Triplett for batteries, and maybe a way to rig a switch. Likely the main selector switch could be utilized - I'd have to look at a schematic.

                        If it was mine, I'd make a step-up inverter with regulation. There are many great SMPS chips that will drive a tiny ferrite-core transformer.

I never had a Triplett that I remember. I don't remember what I had way back when I was a kid in the early 70s. At some point I bought, and still have, a Radio Shack FET meter with a d'Arsonval display. 10 MΩ input impedance. I haven't used that in years though.

  • (Score: 1) by shrewdsheep on Friday February 09, @12:57PM

    by shrewdsheep (5215) on Friday February 09, @12:57PM (#1343710)
  • (Score: 3, Insightful) by Runaway1956 on Friday February 09, @01:13PM (3 children)

    by Runaway1956 (2926) Subscriber Badge on Friday February 09, @01:13PM (#1343711) Journal

When I was new to computers, I did the backup thing somewhat diligently, right up until the time a full backup was going to require more than 100 floppies. I did a little mental math and decided pretty quickly that I could reinstall the system and back up the key bits of stuff faster than I could do a full backup. And a reinstall meant that I didn't need to invest in a bunch of new floppies.

    We're beyond that point now with optical media, aren't we? How many DVDs are required to back up a 1 or 2 TB system? I'd rather do something more exciting, like maybe watching paint dry.

All I do is back up documents and a few other things. You can fit an amazing number of text files on a DVD. Make a system recovery DVD and call it quits. Either that, or do the cloud thing, either to your own NAS or to the internet. Whether you're using your own internal "cloud" or an online cloud, that can all be automated. I can't imagine anyone sitting down with a case of DVDs to do a full backup today.

    • (Score: 2) by Freeman on Friday February 09, @03:44PM (1 child)

      by Freeman (732) on Friday February 09, @03:44PM (#1343732) Journal

Even going with Blu-ray, there's the time waiting for an optical drive to burn that data to the disc, plus all of the changing of discs, etc. You've gotta have a really good reason to go through all that work, as opposed to spending a fraction of the time, and possibly the same or even way less money, to write it all to SSD/HDD. I've not done a price-per-GB calculation for CD/DVD/Blu-ray, perhaps ever. Whereas you can get NVMe SSDs where the price per GB is $0.06, and that's not the cheapest piece of junk you can find.

      --
      Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
    • (Score: 2) by VLM on Friday February 09, @05:35PM

      by VLM (445) on Friday February 09, @05:35PM (#1343746)

      a reinstall meant

Another interesting technological change over the decades is that with Puppet/Ansible/other tools I don't reinstall; I just have to restore data. It still takes some clock time, but it is pretty much hands-off aside from end-user data.

Speaking of the difference between operations and end-user data, I've also noticed that separation. Back in the old days we always stored OS/apps and data on the same drive, if not in the same directories, LOL. Nowadays everything is very separate, and the challenges of keeping a physical or virtual machine up are separate from keeping the end-user data up.

I have one Python development project that's a resource hog, so I wipe it when I'm not using it; the ansible script that creates it does everything I need to make it work, up to and including a git clone of the repo and setting up the project build at the end. This mindset is an outgrowth of Jenkins and similar CI/CD processes. If you "have to" set up unit testing, it's a small step to fully automating everything, including deployment and development. Eventually this mindset leads to a lot of terraform scripts and docker images and some backed-up user volumes, and that's about it. "Backing up" my home audiobook collection used to be a puzzle; now it's just running the airsonic docker image and backing up the data volume once in a while to a rotating set of large USB flash drives.
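
That volume backup step can be as small as this sketch (the image name, volume name, and paths are assumptions):

    # run the app with a named data volume...
    docker run -d --name airsonic -v airsonic-data:/var/airsonic airsonic/airsonic
    # ...then archive the volume to the USB drive through a throwaway container
    docker run --rm -v airsonic-data:/data -v /mnt/usb:/backup alpine \
        tar -czf /backup/airsonic-data.tar.gz -C /data .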

  • (Score: 5, Funny) by Snospar on Friday February 09, @04:32PM (1 child)

    by Snospar (5366) Subscriber Badge on Friday February 09, @04:32PM (#1343743)

    Carefully by the edges.

    --
    Huge thanks to all the Soylent volunteers without whom this community (and this post) would not be possible.
    • (Score: 0) by Anonymous Coward on Friday February 09, @10:01PM

      by Anonymous Coward on Friday February 09, @10:01PM (#1343777)

      I came here to say that.

  • (Score: 1, Insightful) by Anonymous Coward on Friday February 09, @10:41PM (1 child)

    by Anonymous Coward on Friday February 09, @10:41PM (#1343783)

Cloud storage, code repository, and even camera memory are all in play. I've got a Canon, and when you delete pictures and video it keeps its hands off files that aren't in the DCIM directory. A ZIP'd copy of my code lives in there. It's usually behind the repository, but if my PC *and* the repository somehow both went south, I wouldn't be set ridiculously far back.

    • (Score: 2) by RS3 on Saturday February 10, @04:12PM

      by RS3 (6367) on Saturday February 10, @04:12PM (#1343866)

Not bad, but remember the camera's memory is flash, which eventually "wears out" and, like anything, can fail.

      I remember people talking about using chemical photography film for permanent storage. Besides "microfilm" and "microfiche" with images, which you can scan and digitize / OCR, you can simply store bits on film, which seems to have extremely long lifetime if kept in reasonable environment. Seems there would be a market for that somewhere.

      All that said, I have some very old hard drives- late 80s - early 90s- that still work. IMHO, the key was you could re-"low-level" format the drive, which compensated for normal mechanical wear-in and alignment changes over time. I've always felt the spinning rust industry shot themselves in the foot when they stupidly took away the ability to do a real re-format.

      For anyone who may not remember those good old days, hard drives came with a Defect Map- sectors that were deemed too damaged to be used- printed on the lid. You had to hand enter those during a format operation so the OS would not try to save data on them.

      Somewhere back then I found SpinRite (GRC software). It would fully test the drive, including disabling error correction, and in more advanced modes, it would (will) copy a track to a safe area, re-low-level format that track, test it extensively, correctly mark defects, and replace the data. I used to run SpinRite fairly regularly on the drives of that time and I never had data loss due to sector defects.

      Taking it further: in those days the data circuits, system bus, CPU, whatever, could not necessarily handle the speed of data coming from sequential sectors out of a spinning drive. Some could, but for the slower ones, you would interleave the sectors to optimize throughput. IIRC SpinRite would test and determine the optimum sector interleave and do that for you, so you end up with the fastest possible speed. I'm pretty sure SpinRite would do that. I know I had some utility that would figure that out for you.

  • (Score: 3, Informative) by jman on Saturday February 10, @01:34PM

    by jman (6085) Subscriber Badge on Saturday February 10, @01:34PM (#1343847) Homepage
    Backups are necessary, but don't have to be a PITA. An optical solution can certainly be made to work, but I'm lazy, and prefer a more "set and forget" strategy, combined with occasionally checking the media to ensure it's still intact.

    It all depends on how much you value your data.

    Versioning backups are a must. Depending on your level of paranoia, you might want a home backup, plus at least one off-site. For the first run of the off-site, if you can, physically take a copy to the other location; otherwise, be prepared to wait quite a while for all those bits to upload (especially if you're not on at least 1G fiber up at the house). Once you have a good copy elsewhere, updating the new/changed data won't take nearly as long.

    I don't prefer "cloud" solutions (though really that's all an off-site backup is) as then the bits are no longer under my control.

MTBF is difficult to calculate with just one unit of whatever media you're using. I still have spinning rust discs from 20 years ago that work when plugged in, and CDs even older that are readable.

    Current costs (8TB backup capacity):
    • 4TB (Western Digital "Red NAS" 5400 RPM) => $100 * 3 = $300, no swapping discs
    • 4TB (Western Digital "Red NAS" 7200 RPM) => $150 * 3 = $450, no swapping discs
    • 4TB (Samsung Evo SSD) => $300 * 3 = $900, no swapping discs
• 32GB (50-pack of Verbatim Blu-ray discs) => $40 * 15 = $600, mucho swapping

    Of course, that's just the storage media - there's still the rest of the box - but either way you need multiple sets of your media to account for potential failure. With BD each one is completely separate, so to be safe you'd need multiple copies of what it would take to make a single backup. With SSD or spinning rust they're often tied together as a single volume which can survive the loss of any one of them. In the event of catastrophe, replace the bad one and be back on your way with no downtime.

One option would be to build a TrueNAS box, populated with ECC memory (you'll want that, as TrueNAS uses ZFS) and either SSD or spinning rust drives in a RAIDZ configuration, so a minimum of three discs. The more discs in the set, the greater the reliability.
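
Creating such a pool is a one-liner; a sketch with three hypothetical disks:

    # three-disk RAIDZ1: one disk's worth of parity, survives any single-disk failure
    zpool create tank raidz /dev/sda /dev/sdb /dev/sdc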

My current NAS has an NVMe for the OS (imaged with current configs onto separate media for when it eventually dies) and 3 WD 16TB spinning rust drives (they were closer to $600 each when I bought them a couple of years ago, but SSD just doesn't yet have the capacity, and I was thinking ahead) providing a little under 35TB of de-duplicated storage, of which about 1/4 is currently used. (Besides all the ripped audio CDs, movie DVDs, etc., gots to be lots of junk there, but I tend not to throw data away.)

Memory-wise, it has 128GB of ECC, more than needed; around 1/3 is free according to the dashboard, but that fluctuates depending on what the system is doing. The reason for the extra memory is that ZFS tries to cache as much as possible in memory, which is why you want the error-correcting kind (most consumer desktop/laptop devices do not use ECC). Also, TrueNAS - these days based on Debian - doesn't like you fiddling with it, so it uses Docker images for add-ons such as running Apache or a media server. Those take memory. (The OS is very opinionated about what you can add into it, but I have to say that's not completely a bad thing. I was upset at first when I saw it didn't include the venerable "screen" I'd been used to using for years, but tmux works just fine.)

It's not really that expensive to build your own home server - and it can be a great learning experience - but ECC complicates things, as it may be harder to find an inexpensive motherboard that supports it. While my NAS was built with all new parts, I inherited a 15-year-old Dell T410 whose power supply died, whereupon the shop that had it just threw the thing away. Dual Xeon, 64GB memory, 6 drive bays, a real workhorse. If I ever get around to replacing the PSU it'd make a great - if a tad noisy - utility box. Old, but Dell is pretty good about server hardware. No telling how long it'd last. So, used is an option, especially if you can get two of those cheap motherboards in case the production one dies.

In closing, while Blu-ray has a pretty long MTBF, its relatively small storage capacity makes for a much more difficult backup strategy. It also doesn't allow for anything like RAID, so in practice, depending on how much you care for your data, you're forced into having multiple backups of the same info on different discs.

    Because of that, where BD really fails when used for backup is in the amount of human labor involved. Those discs don't swap themselves out, and your time is also a cost. With an always-on array of spinning or SSD storage, you can cron the backups from your various devices, and not have to waste time swapping discs.

My Macs have volumes on the NAS for Time Machine. The 'doze and 'nix boxes also have volumes of their own, with 'nix using rsnapshot (stable, but by no means dead). The 'doze and 'nix OSes themselves are imaged in case their drives die, just like the NAS OS.
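
An rsnapshot setup for that is only a few lines of config (paths and retention counts here are assumptions; the real file is tab-separated):

    # /etc/rsnapshot.conf fragment: keep 7 dailies and 4 weeklies of /home
    retain	daily	7
    retain	weekly	4
    backup	/home/	localhost/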

    Lastly, as said from the start, backups are only as good as the last time they were tested. Whether you go with an optical solution, or some form of electronic drive, make sure to at least occasionally test the media and run sample restores, so that when disaster strikes, you'll be confident your data can be recovered.

    And if you're really paranoid about data loss, use an additional tape backup for long term storage of what doesn't change (all those CD's, DVD's, email and docs older than a year, etc.). If you look around, you can probably find an old Enterprise grade drive fairly cheap.

    HTH-

  • (Score: 3, Insightful) by mcgrew on Saturday February 10, @09:50PM (2 children)

    by mcgrew (701) <publish@mcgrewbooks.com> on Saturday February 10, @09:50PM (#1343900) Homepage Journal

Storage is damned near free these days. I have two half-terabyte thumb drives that together cost ten bucks. One is holding almost ten thousand MP3s to listen to in the car (I've been collecting music since the 1960s). That drive is backed up on my 4 TB network drive, which I back up to an 8 TB drive that's only powered up when I'm backing up or restoring.

    Back when I worked for Illinois the state government didn't have as much storage as I do now!

    Fools use "the cloud".

    --
    mcgrewbooks.com mcgrew.info nooze.org