Don't complain about lack of options. You've got to pick a few when you do multiple choice. Those are the breaks.
Feel free to suggest poll ideas if you're feeling creative. I'd strongly suggest reading the past polls first.
This whole thing is wildly inaccurate. Rounding errors, ballot stuffers, dynamic IPs, firewalls. If you're using these numbers to do anything important, you're insane.
Excito Bubba2 'B2' hardware specifications:

Internal hard drive: up to 2 TB SATA
Internal memory: 256 MB DDR2
Processor: 333 MHz PowerPC
Network connectivity: 2 x 1000 Mbit/s
USB 2.0: 2 x 480 Mbit/s
eSATA: Yes, x2
Power consumption*: 7-12 W (disk dependent)
Size: 11.5 x 4.5 x 18.5 cm (4.5 x 1.8 x 7 inches)
Kensington lock slot: Yes
Fan: No!
(Score: 1) by pTamok on Thursday January 16, @09:49AM (8 children)
I use multiple external USB spinning rust disks in groups of three, with two on-site and one off-site (an extension/modification of the 3-2-1 rule)
I want to move to using a NAS solution, but have not yet found a NAS that runs FLOSS software, consumes little power, and is within my budget. I would have bought some Kobol Helios64s [kobol.io] if the team making them had not decided to stop.
I was also tempted by the GNUBee [gnubee.org] offerings, but it turns out that they have/had patched kernels that were not supported in mainline.
Also looked at Hardkernel ODROID
My ideal is a non-x86 processor, low power draw (because it is going to be on permanently), no fan (I hate fan noise, and fans mean heat, which means higher power draw), Gigabit Ethernet - and some form of case.
I have an Excito Bubba2 'B2' [archive.org], which was pretty excellent, but it has a 32-bit processor and is a bit long in the tooth now. I want more of the same, which, frankly, doesn't seem to be available now.
(Score: 2) by JoeMerchant on Friday January 17, @07:17PM (6 children)
I used to spin external rust, but I am transitioning to SSD. I now have one old soldier spinning, vs two SSDs.
🌻🌻 [google.com]
(Score: 4, Interesting) by pTamok on Saturday January 18, @08:47AM (5 children)
Anecdote warning!
So far, for me, spinning rust has turned out to be more reliable for long-term unpowered storage than flash.
I don't have good insight on how to improve data retention on flash. The sledgehammer-to-crack-a-nut approach is 'simply' to copy to new media regularly enough. Determining what 'regularly enough' is, is non-trivial.
Copying to new media every so often is what you need to do for long-long-term storage on spinning rust as well, anyway.
The trouble is that, as flash depends on separation of charges to store data, it is susceptible to charge leakage over time. If I could do something as simple as plug in an SSD and issue a 'refresh' command that simply rewrote all the data internally, and potentially got back a health report, then I'd do that. For all I know, SSDs could do this automatically and transparently whenever they are powered up - it would make a kind of sense. I suspect it is unlikely, though, as it would wear out the flash more quickly - then again, if the drive is rated for so many drive writes per day, adding one extra write per month or so would not be so bad.
I should probably join some data-hoarders' forum to get more insight on this kind of stuff. Any suggestions?
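For what it's worth, something close to that 'refresh' can be approximated from userspace: badblocks in non-destructive read-write mode reads every block, tests it with patterns, and writes the original contents back, which forces the drive to rewrite every cell. A hedged sketch only - the device path is a placeholder, run it on an unmounted drive, and keep a second copy regardless:

```shell
# Non-destructive read-write pass over the whole device: each block is
# read, pattern-tested, then restored with its original data.
# /dev/sdX is a placeholder - triple-check it before running anything.
sudo badblocks -nsv /dev/sdX

# Then ask the drive for its own health report via SMART:
sudo smartctl -a /dev/sdX
```

Whether this actually helps charge retention on a given controller is the drive vendor's secret; treat it as a plausible hack, not a guarantee.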
(Score: 3, Interesting) by JoeMerchant on Saturday January 18, @04:02PM (2 children)
Anecdote 2:
We have a few thousand devices in the field with 120GB SSDs. The ones that sit around (600 hours of use, 400 power cycles, in the past 4-7 years) have failing SSDs, while it looks like the ones that are more heavily used (4000 hours, 3000 power cycles) are still doing well.
My home SSDs are online 24/7/365 - I think one is pushing 8 years old? They (and the rust) survived a major lightning strike that took out many connected devices. YMMV.
🌻🌻 [google.com]
(Score: 2) by turgid on Tuesday January 21, @08:58PM (1 child)
Well, here's another anecdote. I had a 120GB SSD in one of these mini PCs running 24/7, so it was only power cycled when the power failed, but I did have a cron job downloading from the internet during the night (rsync to the same place on the disk every night). It wore out after about four years. The symptom was the ssh keys suddenly not working. I thought I'd been hacked, but it was the SSD failing.
I refuse to engage in a battle of wits with an unarmed opponent [wikipedia.org].
(Score: 2) by JoeMerchant on Wednesday January 22, @12:22AM
I think saying SSD is a lot like saying "red wine." There are all kinds of varieties, some age better than others.
🌻🌻 [google.com]
(Score: 3, Insightful) by Freeman on Tuesday January 21, @04:15PM (1 child)
Without going down the tape route, which I believe most of the big boys use, HDD is "the backup solution". Sure, you can use SSDs for speed/convenience. However, if an SSD dies, it is well and truly dead: you likely have no way of ever recovering anything from that device. If an HDD dies, there are labs that you can send that sucker to, if you had irreplaceable data on it.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 3, Interesting) by JoeMerchant on Wednesday January 22, @07:21PM
We went tape in 1994. It sucked.
We did triple-redundant backup to tape, and still had some cases of triple failure. Eventually the protocol had to become: transmit (mail) the backup on tape, then confirm successful recovery from tape at the central storage location, before wiping the original collection device. The assumption was that once two backups had been successfully restored, at least one of them would most likely restore successfully again in the future. That was usually true, but we had 500 devices in the field generating one backup event per month each.
I think these were 80MB removable hard drives - YUGE for the day. In 3 years of field use, I don't think we had a single HDD failure in the field.
🌻🌻 [google.com]
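That restore-before-wipe protocol translates directly to modern tools. A minimal sketch using scratch directories and tar in place of the tape system - the source is only deleted after a restored copy compares clean:

```shell
# Back up, restore to a second location, and compare; wipe the
# source only if the round trip was byte-for-byte identical.
SRC=$(mktemp -d); RESTORE=$(mktemp -d)
echo "field data" > "$SRC/readings.csv"

tar -C "$SRC" -czf /tmp/backup.tgz .        # create the backup
tar -C "$RESTORE" -xzf /tmp/backup.tgz      # prove it restores
diff -r "$SRC" "$RESTORE" && rm -rf "$SRC"  # wipe only on a clean diff
```

The point is the `&&`: the destructive step is gated on the verification, exactly as the tape protocol gated the wipe on a confirmed recovery.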
(Score: 2, Insightful) by shrewdsheep on Monday January 20, @11:14AM
Why a branded solution? I use a regular Linux SBC (ODROID) with SATA drives, with backups over NFS/ssh (borg). I do spin down the disks acting as mirrors (using rsync; can't recommend RAID) by turning off the SMART daemon during periods of inactivity (most of the day). When idling, the box draws ~7W while also serving various other services. I still trust spinning rust. I have seen SSDs fail, but HDDs have been fine after 10 years. I was recently forced onto helium-filled drives. I have no idea about the long-term viability of those and might reconsider the SSD/HDD trade-offs in the future.
(Score: 2, Informative) by ichthus on Thursday January 16, @04:03PM (2 children)
RAID 5 at home with a spare drive at the ready. Then, weekly, I have a scripted rsync tunneled through ssh to an RPI 3 at my parents' home in another county. They don't have a static IP, but I have a DDNS script run on their router to update a subdomain through my registrar, so I'm able to find them at myparents.mydomain.com.
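For anyone wanting to replicate this, a rough sketch of the weekly job. Hostname, key, and paths below are made up; the DDNS name resolves to the parents' router, which forwards the ssh port on to the Pi:

```shell
# Weekly offsite sync over ssh; everything on the wire is encrypted.
# All names and paths here are hypothetical.
rsync -az --delete \
      -e "ssh -i $HOME/.ssh/backup_key" \
      /srv/raid/important/ \
      backup@myparents.mydomain.com:/mnt/backup/

# Example crontab entry: every Sunday at 03:17.
# 17 3 * * 0 /usr/local/bin/offsite-sync.sh
```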
(Score: 0) by Anonymous Coward on Thursday January 16, @11:17PM (1 child)
Better make sure you patched the rsync vulnerability!
(Score: 1) by ichthus on Friday January 17, @01:51PM
Thanks for the reminder. I should be safe either way, though, since I tunnel rsync through an ssh session.
(Score: 2) by DannyB on Thursday January 16, @05:55PM (5 children)
Red drive. Blue drive.
Each time I do a backup, I alternate onto either red or blue so that I have two backups. Today's backup, and the previous backup.
A good followup poll question would be: what do you do with your external backup drives?
There can be only one cable TV Network: USABCNNBCBSyFy
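The red/blue alternation is easy to script so you never grab the wrong drive: pick the target by week-number parity. A sketch with hypothetical mount points:

```shell
# Even ISO week -> red drive, odd week -> blue drive.
# The 10# prefix stops weeks "08"/"09" being parsed as octal.
week=$(date +%V)
if [ $((10#$week % 2)) -eq 0 ]; then
    target=/mnt/red
else
    target=/mnt/blue
fi
echo "Backing up to $target"
# rsync -a --delete /home/ "$target/home/"   # the actual copy, elided here
```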
(Score: 2) by JoeMerchant on Friday January 17, @07:18PM
My external drives are permanently attached to network-attached computers - different ones in different rooms. I wish they were in different buildings, but I just haven't set up a backup system outside the main house yet.
🌻🌻 [google.com]
(Score: 2) by Freeman on Tuesday January 21, @04:09PM (3 children)
Inquiring minds would like to know. Which one do You swallow? The Red one or the Blue one?
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 3, Funny) by DannyB on Tuesday January 21, @10:41PM (2 children)
They are too large to swallow. They are too large to even fit in your mouth.
Swallowing a drive might seem to afford it greater protection from theft, however the drive could incur damage moving through the human digestive tract and then also during excretion when it is time for the next backup. It is better to simply put them into a locked desk drawer. Or even better a bank safety deposit box.
There can be only one cable TV Network: USABCNNBCBSyFy
(Score: 2) by Freeman on Friday January 24, @02:25PM (1 child)
I guess it was a bit too out there to expect anyone to get the whole "red/blue" (pill/drive) reference. You know man. Do you want to stay in the Matrix or leave?
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 3, Touché) by DannyB on Friday January 24, @03:56PM
I did get the reference. But I had never thought of the red/blue pill thing when I first bought those backup drives and selected a bright red and bright blue one.
There can be only one cable TV Network: USABCNNBCBSyFy
(Score: 2) by Dr Spin on Thursday January 16, @06:00PM
... (LTO) tape backup in Grandfather-Father-Son rotation,
with backups in multiple physically distant locations.
Anything else is fool's gold.
Warning: Opening your mouth may invalidate your brain!
(Score: 2) by https on Thursday January 16, @06:24PM (2 children)
If it's at home, it's not offsite and therefore not a backup.
Offended and laughing about it.
(Score: 3, Insightful) by Anonymous Coward on Thursday January 16, @11:46PM
Disagree. There are classes to backups that protect different risks and have different restoration scenarios. Onsite and online, onsite and offline, offsite and online, offsite and offline. They have different pros and cons and different risks associated with them. The correct backup solution is to have a mix that addresses the risks and meets the trade offs you are willing to make. Sometimes locality is critical, so onsite is critical. Sometimes complete loss is critical, so offsite is critical. Sometimes latency is critical, so online is critical. Sometimes isolation is critical, so offsite is critical. Sometimes you have a mix of risks and a mix is appropriate.
For example, we once had a lightning arrestor fail after the nearby transformer took a direct hit. In addition to requiring us to replace a bunch of electrical infrastructure, it took out multiple rows of cabinets. However, a mix of our onsite and online along with the onsite but offline backup allowed us to restore quickly. It meant that after failing our power over, we were back to 100% operation (albeit in degraded mode due to the lack of redundancy) in less than a half an hour after the lightning strike. Having to ship physical drives or even download all the data from offsite would have taken much longer than that and cost much more to boot.
(Score: 2) by Freeman on Tuesday January 21, @04:07PM
As the Anonymous poster pointed out, a backup doesn't need to be offsite. However, as part of a "Good Backup Plan", having an offsite backup is very much necessary. Unfortunately, a lot of places have just pushed that to the cloud. I posit that, if you don't have control of the server/hardware, you don't have a reasonable offsite backup.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 1) by shellsterdude on Thursday January 16, @06:58PM
Important files go to my NAS, which is in a ZFS RAIDZ2 configuration with hot and cold spares. This in turn backs up to the Crashplan cloud service.
I've had to go back to the cloud once, when a lightning power surge fried all the spinning drives in my NAS at once. I've since added a UPS to the mix to hopefully prevent that.
(Score: 2) by Gaaark on Thursday January 16, @07:26PM
External HD and USB key:
MX Linux has MX Snapshot, which creates a complete copy of your system that can also be used to reinstall THAT system back onto your box/lappy.
The only problem I have with MX Linux is that setting up BTRFS and Snapper is a BEYOTCH. I'm hoping that future versions will allow automatic setup and install on BTRFS, with automatic setup of Snapper...then I would be completely Super Funtime Happy Happy.
Timeshift is alright, but it's nice to just boot into a different snapshot and keep going as if nothing had gone wrong.
--- Please remind me if I haven't been civil to you: I'm channeling MDC. ---Gaaark 2.0 ---
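For the record, the manual Snapper setup is only a few commands once / is already on btrfs; what's missing in MX is the installer doing it for you. A hedged sketch - package names and defaults may differ on MX:

```shell
# Assumes / is already a btrfs subvolume; package name may differ.
sudo apt install snapper

# Create the config for /; this also creates the .snapshots subvolume.
sudo snapper -c root create-config /

# Take a baseline snapshot and confirm it shows up.
sudo snapper -c root create --description "baseline"
sudo snapper -c root list
```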
(Score: 3, Insightful) by drussell on Thursday January 16, @10:55PM (1 child)
Neither of my two main backup / archival media are listed in the poll - neither of the traditional ferrous / magnetic media...
For important stuff, especially data that changes, it is all still magnetic tape and spinning-rust hard disks.
After that, I suppose it would be USB flash storage and optical disc media, at about a 50-50 tie, for secondary use or less critical data...
(Score: 2) by Freeman on Tuesday January 21, @04:03PM
I think they meant for "USB Disk" to be HDD or SSD via USB, but it's definitely not clear.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by Unixnut on Friday January 17, @12:59AM (1 child)
I'm not sure I would consider "NAS" to be a backup, so it feels a bit odd to be given as an option.
I guess you can say I back up my PCs to my NAS (monthly cronjob-based incremental rsync), keeping the last 5 backup archives per PC. Then there are the two-week rolling daily ZFS snapshots in the background, just in case I fat-finger an "rm" at some point or delete something I realise I still needed.
NAS itself runs FreeBSD with raidZ2 for the archive disks (4x10TB), with two cold spares. Every 6 months a script does zfs snapshot backups to two 10TB drives in a drive caddy, which I then keep offsite.
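The six-monthly caddy backup can be done as a snapshot stream rather than file copies, which preserves the ZFS metadata and checksums end to end. A sketch with hypothetical pool/dataset names:

```shell
# Snapshot the archive dataset, stream it onto the caddy pool,
# then export the pool so the drives can be pulled and taken offsite.
STAMP=$(date +%Y%m%d)
zfs snapshot -r "tank/archive@offsite-$STAMP"
zfs send -R "tank/archive@offsite-$STAMP" | zfs receive -F caddy/archive
zpool export caddy    # safe to disconnect once the export completes
```

Subsequent runs can use `zfs send -RI` with the previous snapshot to ship only the increment.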
It's worked well enough for me. In over 15 years I've not lost a single bit of data to corruption, disk failure or accidental deletion (I've had all three happen multiple times, but was able to recover).
That includes through multiple array growth events (when I replaced all the disks and grew the array online) and a particularly bad two years when I kept getting random drive disconnects and ZFS array suspension (finally traced to the HBA card manufacturer) multiple times a day. I am seriously impressed with FreeBSD and ZFS so far, so can recommend them.
I would not mind a magnetic media backup as well, but any tape backup that provides 10TB+ is far too expensive compared to just having disk backups, so for now magnetic media is not for the home/SOHO environment (although I remember they used to make home/SOHO magnetic backup products, like the Travan tape systems, but no modern equivalent exists anymore).
(Score: 2) by Freeman on Tuesday January 21, @04:00PM
Assuming you have a reasonable RAID setup for your NAS, I would count it for a random person's stuff. Professionally, that's definitely not a backup! I would like to set up a lower-power NAS with cold storage backups, but that would require quite a bit of effort and investment, both of which are definitely finite in my household.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 2) by KritonK on Friday January 17, @05:39AM (1 child)
I just rsync my SSD partitions to an internal hard disk.
(Score: 1) by shrewdsheep on Monday January 20, @11:23AM
It only counts as a backup if you add the --delete --backup --backup-dir mybackupfolders/`date +'%Y%m%d'` options. (Note %m for the month; %M would give you minutes.)
(Score: 2) by fliptop on Friday January 17, @02:26PM (1 child)
Two backup servers: one at my colocation NOC, sync'd to a second in my basement.
Our Constitution was made only for a moral and religious people. It is wholly inadequate to the government of any other.
(Score: 2) by HeadlineEditor on Sunday January 19, @09:51AM (1 child)
There are better solutions, I guess, but this one required the least effort on my part. No important files are kept directly on my workstation, so the NAS is kinda the primary and the backup that way.
Anyway I have files from like 1996 on that NAS, so multiple spinny disks just kinda work.
(Score: 3, Funny) by Freeman on Tuesday January 21, @03:55PM
Wait, I'm not supposed to be using Floppy disks as the main storage device for my 90s data?!?!? Better break out the 'ol USB Floppy Drive!
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 4, Funny) by Hartree on Monday January 20, @06:04AM
Do you know how hard it's become to find clay tablets pre-formatted for cuneiform?
(Score: 2) by Freeman on Tuesday January 21, @03:53PM (5 children)
CD/DVD/HDD are typically the things I make backups to. However, CD/DVD isn't the best way; from what I understand, HDD is pretty much "the way to back up data" for us plebs. Though, I guess tape is still huge - I've never actually run or managed a system with tape storage (discounting VHS and audio cassettes). Generally, the only things I really care about are photos, and those are backed up to cold disks if and when I have the time. That said, I do have backups of other things. However, that's typically more for convenience, in case I forgot that X thing actually exists, as opposed to needing X thing. Games can generally be re-installed, up-to-date software (or old software) can be re-installed, etc. There are a few personal projects I would be sad to lose, but those are on multiple devices with cloud backup - probably more resilient than my cold storage, as I don't have multiple locations for that cold storage.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 0) by Anonymous Coward on Tuesday January 21, @05:35PM (4 children)
I'm surprised there is not more love for optical media backup. I've lost data on floppies, hard-disks, flash memory devices and magnetic tape. Hence I have no confidence in using them as backup media.
Once I learned that M-DISC [wikipedia.org] laid claim to 1000-year longevity on DVD, I bought a USB DVD burner and a spindle full of blanks, and use that to archive anything I think should outlive me. I even use this optical drive to boot into the live distro I use for all my computing tasks these days. Reboots are only needed every year or so (when the dog accidentally kicks loose the power cord). Otherwise the hijacked chromebook stays on 24/7, sipping something like 2 watts when idle.
Indeed I boot my live distro with the "toram" option so that all my filesystems are in RAM, as nothing else can be trusted. I do use flash to store files that need to survive such crashes, but none of these would ruin my life if lost.
Beyond this, anything that I think is really important I post to SN, and hope that it will not get lost in the noise.
(Score: 2) by Freeman on Tuesday January 21, @07:01PM (2 children)
Unfortunately, it looks like M-DISCs will outlive all optical drives.
Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
(Score: 0) by Anonymous Coward on Tuesday January 21, @07:37PM
Sure, but in 200 years, M-DISC tech will seem as primitive as clay tablets seem to us.
(Score: 0) by Anonymous Coward on Wednesday January 22, @01:30AM
Drives? Where we're going we won't need no drives! [nature.com]
(Score: 0) by Anonymous Coward on Wednesday January 22, @12:39AM
The problem is that older optical systems were notorious for losing data. Early discs would lose data over the course of a year or two. Once those immediate issues were worked out, failures slowed, but disc rot has remained a problem for decades in optical media. This even goes for professionally pressed discs from major manufacturers. Burned discs are even worse in their longevity estimates, especially when you look at the cheap organic-dye discs most people bought. Optical discs have gotten better, especially the expensive ones, but many people are still dubious when it comes to trusting them.
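A cheap hedge against silent rot, whatever the media: write a checksum manifest alongside the files before burning, then re-verify on whatever schedule you trust. A sketch using a scratch directory as a stand-in for the disc's staging area:

```shell
# Stage files, write a sha256 manifest next to them, then (much later)
# verify the whole set; a clean exit status means no rot was detected.
DISC=$(mktemp -d)
echo "family photos" > "$DISC/photos.txt"

( cd "$DISC" && find . -type f ! -name MANIFEST.sha256 -exec sha256sum {} + > MANIFEST.sha256 )

# Years later, re-check the disc:
( cd "$DISC" && sha256sum -c MANIFEST.sha256 )
```

This doesn't stop the rot, but it turns "I wonder if that disc is still good" into a yes/no answer before you actually need the data.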