
posted by janrinok on Tuesday February 06 2024, @03:51AM
from the confidentiality-integrity-and-availability dept.

Exotic Silicon has a detailed exploration of how and why to make long-term backups.

The myth...

When thinking about data backup, many people fixate on the possibility of a crashed hard disk or, in modern times, a totally dead SSD. It's been the classic disaster scenario for decades, assuming that your office doesn't burn down overnight. You sit down in front of your desktop in the morning, and it won't boot. As you reach in to fiddle with SATA cables and clean connections, you realise that the disk isn't even spinning up.

Maybe you knew enough to try a couple of short, sharp, ninety-degree twists in the plane of the platters, in case it was caused by stiction. But sooner or later, reality dawns, and it becomes clear that the disk will never spin again. It, along with your data, is gone forever. So a couple of full backups at regular intervals should suffice, right?

Except that isn't how it usually happens: most likely you'll be calling on your backups for some other reason.

The reality...

Aside from the fact that modern SSDs often remain readable when they fail, i.e. they become read-only, your data is much more likely to be at risk from silent corruption over time, or from being overwritten due to operator error.

Silent corruption can happen for reasons ranging from bad SATA cables and buggy SSD firmware to malware and more. Operator error might go genuinely unnoticed, or be covered up.

Both of these scenarios can be protected against with an adequate backup strategy, but the simple approach of a regular full backup (which also often goes untested) in many cases just won't suffice.
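
As a concrete illustration (our sketch, not code from the article), a periodic verification pass can compare the current contents of a file tree against checksums recorded at backup time, catching silent corruption before it is copied into yet another backup generation. The JSON manifest format and the paths here are assumptions made for the example:

    import hashlib
    import json
    import os
    import sys

    def file_sha256(path, bufsize=1 << 20):
        """Stream the file through SHA-256 so large files needn't fit in RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            while chunk := f.read(bufsize):
                h.update(chunk)
        return h.hexdigest()

    def verify_tree(root, manifest_path):
        """Recompute hashes for every file listed in an earlier JSON manifest
        (relative path -> hex digest) and report mismatches. A mismatch means
        silent corruption or an unrecorded change, exactly the kind of damage
        a blind full backup would quietly propagate."""
        with open(manifest_path) as f:
            manifest = json.load(f)
        clean = True
        for relpath, recorded in manifest.items():
            if file_sha256(os.path.join(root, relpath)) != recorded:
                print(f"MISMATCH: {relpath}", file=sys.stderr)
                clean = False
        return clean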

Aspects like the time interval between backups, how many copies to have and how long to keep them, speed of recovery, and the confidentiality and integrity of said backups are all addressed. Also covered are silent corruption, archiving unchanging data, examples of comprehensive backup plans, and how to correctly store, label, and handle the backup storage media.
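
On the question of how many copies to keep and for how long, one widely used answer is grandfather-father-son rotation. The sketch below illustrates that general scheme, not the article's specific recommendations: keep the most recent daily backups, plus the newest backup from each recent week and each recent month.

    from datetime import date, timedelta

    def gfs_keep(backup_dates, dailies=7, weeklies=4, monthlies=12):
        """Grandfather-father-son retention: keep the last `dailies` daily
        backups, the newest backup in each of the last `weeklies` ISO weeks,
        and the newest in each of the last `monthlies` months."""
        newest_first = sorted(backup_dates, reverse=True)
        keep = set(newest_first[:dailies])
        by_week, by_month = {}, {}
        for d in newest_first:
            by_week.setdefault((d.isocalendar()[0], d.isocalendar()[1]), d)
            by_month.setdefault((d.year, d.month), d)
        keep.update(list(by_week.values())[:weeklies])
        keep.update(list(by_month.values())[:monthlies])
        return sorted(keep)

    # e.g. nightly backups covering the last 120 days:
    history = [date(2024, 2, 6) - timedelta(days=i) for i in range(120)]
    print(len(gfs_keep(history)), "of", len(history), "backups retained")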

Not all storage media have long life spans.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by canopic jug on Tuesday February 06 2024, @01:08PM (1 child)

    If you have any further questions, feel free to reply or contact us directly.

    Below, ntropia beat me to the question about other file systems [soylentnews.org] like OpenZFS or BtrFS. Those can do file-level checksums. How would they fit in with removable media?

    --
    Money is not free speech. Elections should not be auctions.
  • (Score: 2, Interesting) by Crystal on Tuesday February 06 2024, @06:49PM

    > Below, ntropia beat me to the question about other file systems [soylentnews.org] like OpenZFS or BtrFS. Those can do file-level checksums. How would they fit in with removable media?

    For backup or archiving to removable media, the data is usually being written once and then kept unchanged until the media is wiped and re-used rather than being continuously 'in flux', with individual files being updated. So although you could use a filesystem with integrated file-level checksumming, you are trading increased complexity at the filesystem level for little gain over what you could achieve by simply doing a sha256 over the files before writing them.
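
    For illustration, here is a minimal sketch of that approach (the staging directory and manifest name are arbitrary): hash every file before it goes to the media, and record the results in the format sha256sum uses, so the archive can later be verified on almost any system with 'sha256sum -c MANIFEST.sha256'.

        import hashlib
        import os

        def write_manifest(root, out_path="MANIFEST.sha256"):
            """Hash every file under `root` and write one line per file in
            sha256sum's format: "<hex digest>  <relative path>"."""
            with open(out_path, "w") as out:
                for dirpath, _, filenames in os.walk(root):
                    for name in sorted(filenames):
                        path = os.path.join(dirpath, name)
                        h = hashlib.sha256()
                        with open(path, "rb") as f:
                            while chunk := f.read(1 << 20):
                                h.update(chunk)
                        out.write(f"{h.hexdigest()}  {os.path.relpath(path, root)}\n")

    Keeping a second copy of the manifest somewhere other than the media itself also means that a failing disc can't take the checksums down with the data.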