
posted by mrpg on Friday March 17 2017, @05:14AM   Printer-friendly
from the encrypt-for-the-win dept.

How do you destroy an SSD?

First, let's focus on some "don'ts." These are tried-and-true methods for making sure your data is unrecoverable from spinning hard disk drives, but they don't carry over to the SSD world.

Degaussing – applying a very strong magnet – has been an accepted method for erasing data off of magnetic media like spinning hard drives for decades. But it doesn't work on SSDs. SSDs don't store data magnetically, so applying a strong magnetic field won't do anything.

Spinning hard drives are also susceptible to physical damage, so some folks take a hammer and nail or even a drill to the hard drive and pound holes through the top. That's an almost surefire way to make sure your data won't be read by anyone else. But an SSD chassis that looks like a 2.5-inch hard disk drive actually contains just a series of memory chips. Drilling holes into the case may not do much, or may only damage a few of the chips. So that's off the table too.

Erasing free space or reformatting a drive by overwriting it with zeros is an effective way to clear data off a hard drive, but not so much on an SSD. In fact, in a recent update to its Mac Disk Utility, Apple removed the secure erase feature altogether because they say it isn't necessary. So what's the best way to make sure your data is unrecoverable?


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1) by Soylentbob on Friday March 17 2017, @05:46AM (5 children)

    by Soylentbob (6519) on Friday March 17 2017, @05:46AM (#480234)

    If you trust the vendor, here [thomas-krenn.com] is a tutorial with advice on how to enable the secure-erase feature of an SSD under Linux (if the SSD supports it). Of course, this requires trusting the vendor not to secretly just set a flag that could be overridden by the NSA or the vendor (on request).
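    The sequence that tutorial describes can be sketched roughly as follows. This is a minimal sketch, not the tutorial's exact steps: the device path /dev/sdX is a placeholder, and the commands are only echoed by default (DRY_RUN=1) because running them for real irreversibly wipes the drive.

    ```shell
    #!/bin/sh
    # Sketch of an ATA Secure Erase via hdparm on a SATA SSD.
    # /dev/sdX is a placeholder; commands are echoed unless DRY_RUN=0.
    DEV=/dev/sdX
    run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

    # 1. Check that the drive supports Secure Erase and is not "frozen"
    run hdparm -I "$DEV"
    # 2. Set a temporary security password (required before erasing)
    run hdparm --user-master u --security-set-pass p "$DEV"
    # 3. Issue the erase; the firmware is supposed to clear all cells,
    #    including the spare area invisible to the OS
    run hdparm --user-master u --security-erase p "$DEV"
    ```

    Whether the firmware actually erases every cell is exactly the trust question raised above.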

    I don't know why deleting and re-filling the space with zeros doesn't work, but I would assume it is some optimization/compression on the vendor's side? In that case, re-filling with random data should be an option?

    If destruction is an option, opening the drive and smashing the chips with a hammer, or using a torch on them might do the trick.

    But the most important rule would probably be to use an encrypted partition for sensitive data from the start. The snag here is that the key will be stored in an encrypted form on the disk as well, so even when the actual encryption key is strong enough, the passphrase to unlock the key (which has to be entered on each reboot) could be the weakest link.
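    Encrypting from the start can be sketched with cryptsetup/LUKS. This is a minimal sketch, assuming a SATA SSD with a dedicated partition; /dev/sdX2 and the mapping name cryptdata are placeholders I've chosen, and the commands are echoed by default (DRY_RUN=1) since for real they destroy the partition's contents.

    ```shell
    #!/bin/sh
    # Sketch of setting up LUKS encryption before any sensitive data
    # touches the SSD. /dev/sdX2 is a placeholder partition; commands
    # are echoed unless DRY_RUN=0.
    PART=/dev/sdX2
    run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

    run cryptsetup luksFormat --type luks2 "$PART"   # prompts for a passphrase
    run cryptsetup open "$PART" cryptdata            # unlock as /dev/mapper/cryptdata
    run mkfs.ext4 /dev/mapper/cryptdata              # filesystem on the mapping
    ```

    As noted above, the passphrase protecting the on-disk key slot is then the weakest link, so it needs to be strong.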

  • (Score: 1, Informative) by Anonymous Coward on Friday March 17 2017, @05:52AM (3 children)

    by Anonymous Coward on Friday March 17 2017, @05:52AM (#480239)

    > I don't know why deleting and re-occupying space with zeros doesn't work,

    Remapping and sparing means just because you write enough bytes to fill the official capacity does not mean you've overwritten every cell of storage.
    It's probably good enough to prevent the average user with an undelete tool from reading your data, but no good against anyone with a proper set of forensic tools.

    • (Score: 1) by Soylentbob on Friday March 17 2017, @07:32AM (2 children)

      by Soylentbob (6519) on Friday March 17 2017, @07:32AM (#480269)

      Remapping and sparing means just because you write enough bytes to fill the official capacity does not mean you've overwritten every cell of storage.

      That can be done in two ways:

      - Either compression (writing lots of zeros is stored as "here be #num zeros" instead of actually writing the zeros). This should be overcome by writing random-enough numbers.

      - Or by having more storage capacity on the disk than is available to the user. I can see the point of having some redundant chips to compensate for failing ones, but I would assume the excess capacity is somewhere in the <10% range (I might be wrong; I'd appreciate some actual facts on the topic). If this is to be used to efficiently cache officially deleted files, the disk firmware would need some knowledge of what is file system and what is data. Otherwise, on the second pass of filling the space with random data, the data from the first run should move to the excess memory and overwrite the previously cached real data. Even if the disk firmware knows the file system I use, at the latest after encrypting the partition it's game over.
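      The compression hypothesis in the first point is easy to sanity-check from userspace: a run of zeros compresses to almost nothing, while output from /dev/urandom doesn't, which is why a random fill is much harder for firmware to shortcut than a zero fill. A small runnable demonstration (the temp-dir and file names are mine, just for illustration):

      ```shell
      #!/bin/sh
      # Compare how well 1 MiB of zeros vs 1 MiB of random bytes compresses.
      # Illustrates why a zero fill can be optimized away but a random fill can't.
      tmp=$(mktemp -d)
      head -c 1048576 /dev/zero    > "$tmp/zeros.bin"
      head -c 1048576 /dev/urandom > "$tmp/random.bin"
      gzip -k "$tmp/zeros.bin" "$tmp/random.bin"
      wc -c "$tmp/zeros.bin.gz" "$tmp/random.bin.gz"
      # the zeros compress to ~1 KB; the random data barely shrinks at all
      ```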

      If I missed something here or my logic is flawed, I'd be thankful for further information. I'm personally interested in this topic, since my trusted [DELL] Precision M4800 with a conventional HDD will be replaced by a Precision 7510 with an SSD :-( (The specific configuration is not my choice. As a work device, it is only available in this configuration.)

      • (Score: 3, Informative) by PocketSizeSUn on Friday March 17 2017, @02:55PM

        by PocketSizeSUn (5340) on Friday March 17 2017, @02:55PM (#480417)

        but would assume that the excess-capacity is somewhere in the < 10% range

        Depends on the model and manufacturer, but 10% is on the lower side, with higher-reliability (enterprise-grade) drives going to 30%+.
        Larger over provisioning pools are needed to maintain performance under high load and fragmentation of the erase blocks.
        Modern SSDs also include basic de-dupe and compression that helps limit the amount of over provision space needed.

        Multiple overwrites of random data followed by a second pass of random data over every other 4k block should do a reasonable job of making the SSD unrecoverable ... for all the cells that were still writable.

        Probably a good idea to start the whole mess off with a Secure Erase (if it is supported).

        And of course all this is predicated on the drive still supporting writes ... most likely an SSD will 'die' by going read-only and halting all writes.

        So physical destruction is probably the only realistic method for an SSD that contains sensitive data.

      • (Score: 0) by Anonymous Coward on Friday March 17 2017, @03:15PM

        by Anonymous Coward on Friday March 17 2017, @03:15PM (#480435)

        - Or by having more storage capacity on the disk than is available to the user. I can see the point to have some redundant chips in order to compensate for failing ones, but would assume that the excess-capacity is somewhere in the <10% range (I might be wrong, would appreciate to get some actual facts on the topic).

        It's partly to compensate for failures as you suggest, but it also makes wear-leveling on a "full" SSD much easier and more efficient to implement. The amount of overprovisioning varies, but 10% is not unlikely (I think it's on the low end).

        If this is to be used to efficiently cache officially deleted files, the disk firmware would have some knowledge on what is file system and what is data. Otherwise, with the second time filling the space with random data, the data from the first run should move to the excess memory and overwrite the previously cached real data.

        I think people are less concerned about a deliberate NSA_UNDELETE feature, and more with the fact that you can't know that overwriting the visible capacity 2, 3, or any finite number of times, will erase all spare blocks.

        Thinking of spare blocks as cache is wrong -- their main function is wear-leveling. Now consider that, at some point in the usage of this drive, 10% of the sectors are "lucky" and rewritten more than the rest, probably because they're storing frequently-rewritten data; those 10% then got marked as spare (but not necessarily erased), and the previously spare ones rotated into use.

        At this point, you decide to zero (or random-wipe) it -- all in-use sectors start with lower write-counts than the spare sectors, so writing a couple more passes isn't likely to trigger a wear-leveling remap. Someone desolders all the flash chips, reads them, ignores the zeroed (or random) ones, and stores the data from the non-zero/random ones (i.e. the spare ones, that never got zeroed). What do they find?

        Certainly there's no coherent filesystem, and what fragments can be identified will be old (representing the disk state just before the wear-leveling algorithm swapped out the spare blocks), but that doesn't make them useless -- e.g. /etc/shadow, even though it misses the most recent password change, still has password hashes for other users ready for cracking. And what they won't find is, say, a copy of libreoffice, or glibc -- remember, these are the sectors that are written to the most frequently, so they likely represent relatively unique data, rather than piles of binaries from the OS install CD. (Unless you run gentoo -- then the joke's on them, your spare sectors just contain a bunch of .o files from recompiling everything repeatedly with different USE flags!)

        Obviously, encryption is a good answer, provided you do it from the start; if you've ever stored sensitive data unencrypted on the disk (thus a copy could be residing in spare blocks), encrypting it now doesn't do a great deal of good, as you can never be sure the wear-leveling has actually cycled all blocks through.

  • (Score: 3, Interesting) by Kromagv0 on Friday March 17 2017, @01:00PM

    by Kromagv0 (1825) on Friday March 17 2017, @01:00PM (#480370) Homepage

    My understanding is that there is slack/spare capacity on SSDs that is used for wear leveling, so by just writing a disk of all 0s or all 1s you may not hit every location, as some will be swapped out for wear leveling. I think the only way that doesn't involve physical destruction is making use of any secure-erase functionality in the drive and then, to be sure, turning a tool like DBAN loose on it (never tried it on an SSD, but I would think there would be similar tools available). Worst comes to worst, I would just look into dropping the drive into a Linux box and running wipe [die.net] on it, and if extra paranoid, cycle through "cat /dev/urandom >> /dev/" and "cat /dev/zero >> /dev/" a bunch of times with wipe interleaved as well.
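    That alternating-pass idea can be sketched as a small loop. This is a sketch only: the device path /dev/sdX is a placeholder, dd is used instead of cat so the block size is explicit, and the commands are echoed by default (DRY_RUN=1) because running them for real destroys the drive's contents.

    ```shell
    #!/bin/sh
    # Sketch of the multi-pass overwrite loop described above: alternate
    # random and zero fills across the whole device several times.
    # /dev/sdX is a placeholder; commands are echoed unless DRY_RUN=0.
    DEV=/dev/sdX
    run() { if [ "${DRY_RUN:-1}" = 1 ]; then echo "$@"; else "$@"; fi; }

    for pass in 1 2 3; do
        run dd if=/dev/urandom of="$DEV" bs=1M
        run dd if=/dev/zero of="$DEV" bs=1M
    done
    ```

    As the earlier comments note, no number of passes over the visible capacity is guaranteed to reach the overprovisioned spare blocks, so this only raises the bar rather than guaranteeing erasure.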

    --
    T-Shirts and bumper stickers [zazzle.com] to offend someone