
posted by Fnord666 on Friday February 03 2017, @06:39AM
from the 5-backup-strategies-weren't-enough dept.

Ruby Paulson at BlogVault reports

GitLab, the online tech hub, is facing issues as a result of an accidental database deletion that happened in the wee hours of last night. A tired, frustrated system administrator thought that deleting a database would solve the lag-related issues that had cropped up... only to discover too late that he'd executed the command for the wrong database.

[...] It's certainly freaky that all five of the backup solutions GitLab had in place were ineffective, but this incident demonstrates that a number of things can go wrong with backups. The real aim of any backup solution is to be able to restore data with ease... but simple oversights could render backup solutions useless.
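That last point, that a backup only counts if it can actually be restored, is easy to check mechanically. What follows is a minimal sketch of such a sanity check in Python; the backup directory, age threshold, and the pg_restore --list readability test are illustrative assumptions, not details of GitLab's actual setup.

    #!/usr/bin/env python3
    """Hypothetical backup sanity check: confirm the newest dump exists,
    is recent enough, is non-trivially sized, and is a readable archive."""

    import subprocess
    import sys
    import time
    from pathlib import Path

    BACKUP_DIR = Path("/var/backups/postgres")   # assumed backup location
    MAX_AGE_HOURS = 24                           # assumed snapshot cadence
    MIN_SIZE_BYTES = 1024 * 1024                 # an empty dump is a silent failure

    def newest_dump(directory: Path) -> Path:
        dumps = sorted(directory.glob("*.dump"), key=lambda p: p.stat().st_mtime)
        if not dumps:
            sys.exit(f"FAIL: no dumps found in {directory}")
        return dumps[-1]

    def main() -> None:
        dump = newest_dump(BACKUP_DIR)
        age_hours = (time.time() - dump.stat().st_mtime) / 3600
        if age_hours > MAX_AGE_HOURS:
            sys.exit(f"FAIL: newest dump {dump} is {age_hours:.1f} hours old")
        if dump.stat().st_size < MIN_SIZE_BYTES:
            sys.exit(f"FAIL: {dump} is suspiciously small ({dump.stat().st_size} bytes)")
        # pg_restore --list only reads the archive's table of contents; a corrupt or
        # truncated custom-format dump makes it exit non-zero without touching any database.
        result = subprocess.run(["pg_restore", "--list", str(dump)],
                                capture_output=True, text=True)
        if result.returncode != 0:
            sys.exit(f"FAIL: {dump} is not a readable archive: {result.stderr.strip()}")
        print(f"OK: {dump} ({age_hours:.1f} hours old, {dump.stat().st_size} bytes)")

    if __name__ == "__main__":
        main()

A real restore drill goes further and loads the dump into a scratch instance, but even a check this small fails loudly when a backup job has been silently producing nothing.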

Computer Business Review adds

The data loss took place when a system administrator accidentally deleted a directory on the wrong server during a database replication process. A folder containing 300GB of live production data was completely wiped.

[...] The last potentially useful backup was taken six hours before the issue occurred.

However, this was not seen to be of much help: snapshots are normally taken only every 24 hours, and the deletion occurred six hours after the previous snapshot, which [resulted in] six hours of data loss.

David Mytton, founder and CEO [of] Server Density, said: "This unfortunate incident at GitLab highlights the urgent need for businesses to review and refresh their backup and incident handling processes to ensure data loss is recoverable, and teams know how to handle the procedure."
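The "wrong server" part of the failure is the piece that is easiest to defend against mechanically. Here is a minimal, hypothetical sketch of the kind of guard that makes a destructive maintenance step refuse to run anywhere but on the intended host; the hostname and data directory below are invented for illustration.

    #!/usr/bin/env python3
    """Hypothetical guard: only run a destructive cleanup on the intended host,
    and require the operator to retype that hostname before proceeding."""

    import shutil
    import socket
    import sys
    from pathlib import Path

    EXPECTED_HOST = "db2.example.com"            # the replica we intend to wipe (assumed name)
    TARGET_DIR = Path("/var/opt/postgres/data")  # assumed data directory

    def main() -> None:
        actual = socket.getfqdn()
        if actual != EXPECTED_HOST:
            sys.exit(f"REFUSING: running on {actual!r}, expected {EXPECTED_HOST!r}")

        # Make the operator confirm the host by typing it, not just by hitting enter.
        typed = input(f"Type the hostname to confirm wiping {TARGET_DIR} on {actual}: ").strip()
        if typed != actual:
            sys.exit("REFUSING: confirmation did not match the hostname")

        shutil.rmtree(TARGET_DIR)   # the destructive step, now fenced behind two checks
        print(f"Removed {TARGET_DIR} on {actual}")

    if __name__ == "__main__":
        main()

It would not have saved the backups, but it turns a terminal open on the wrong box into an error message rather than a wiped data directory.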

GitLab has been updating a Google Doc with info on the ongoing incident.

Additional coverage at:
TechCrunch
The Register


Original Submission

 
  • (Score: 2) by Scruffy Beard 2 (6030) on Friday February 03 2017, @08:24AM (#462279)

    I think the back-ups worked properly.

    If it is live, it is not a backup (because the changes would then be replicated on the "backups").

  • (Score: 2) by bob_super (1357) on Friday February 03 2017, @06:28PM (#462521)

    > I think the back-ups worked properly.

    Apparently, the server behind the couch worked properly, after all the official backup systems proved to be corrupted or misconfigured.

    Reminds me of the almost identical story I heard, I believe about Toy Story:
      - Someone does a bad rm -rf command, nuking the whole movie and models.
      - Backups are all useless. Pixar panics as they may have just lost 3 years of work.
      - Some lady with a newborn points out that she copied the whole database to work from home (wouldn't fit anymore these days).
  - They end up driving her desktop computer back to the building, praying the whole way that the hard drive doesn't decide today's a good day to die (why they didn't duplicate it before transport wasn't covered in the interview).