
SoylentNews is people

posted by mrpg on Friday March 17 2017, @05:14AM   Printer-friendly
from the encrypt-for-the-win dept.

How do you destroy an SSD?

First, let's focus on some "don'ts." These are tried-and-true methods used to make sure that your data is unrecoverable from spinning hard disk drives, but they don't carry over to the SSD world.

Degaussing – applying a very strong magnet – has been an accepted method for erasing data off of magnetic media like spinning hard drives for decades. But it doesn't work on SSDs. SSDs don't store data magnetically, so applying a strong magnetic field won't do anything.

Spinning hard drives are also susceptible to physical damage, so some folks take a hammer and nail or even a drill to the hard drive and pound holes through the top. That's an almost surefire way to make sure your data won't be read by anyone else. But inside an SSD chassis that looks like a 2.5-inch hard disk drive is actually just a series of memory chips. Drilling holes into the case may not do much, or may only damage a few of the chips. So that's off the table too.

Erasing free space or reformatting a drive by rewriting it with zeroes is an effective way to clear data off a hard drive, but not so much on an SSD. In fact, in a recent update to its Mac Disk Utility, Apple removed the secure erase feature altogether because they say it isn't necessary. So what's the best way to make sure your data is unrecoverable?


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 2) by NotSanguine on Friday March 17 2017, @06:15AM (12 children)

    Erasing free space or reformatting a drive by rewriting it with zeroes is an effective way to clear data off a hard drive, but not so much on an SSD

    That statement is ridiculous on its face. If you write zeros (or random data) over all the sectors, the data is gone. WTF?

    Here's an example on a file (I used a Mac since that's what's discussed in TFA):

    galas-mac:~ ${USER}$ uname -a
    Darwin galas-mac.${DOMAIN} 16.3.0 Darwin Kernel Version 16.3.0: Thu Nov 17 20:23:58 PST 2016; root:xnu-3789.31.2~1/RELEASE_X86_64 x86_64
    galas-mac:~ ${USER}$ cat /etc/services > ./test.file
    galas-mac:~ ${USER}$ ls -l test.file
    -rw-r--r-- 1 ${USER} ${GROUP} 677972 Mar 17 01:44 test.file
    galas-mac:~ ${USER}$ dd if=/dev/random of=./test.file bs=512 count=1500
    1500+0 records in
    1500+0 records out
    768000 bytes transferred in 0.054651 secs (4684267 bytes/sec)

    As you can see, I overwrote 'test.file' with random (okay, pseudo-random) data.

    To erase an entire disk, it would be something similar to:
    dd if=/dev/random of=/dev/diskx bs=1048576 count=${disk size in MB}
    where /dev/diskx is the disk you wish to erase.

    You could also use an 'if' parameter of /dev/zero (which would write zeroes instead of pseudo-random data).

    If you're really paranoid, you could do multiple passes:
    COUNTER=1
    while [ $COUNTER -lt 6 ]
    do
        dd if=/dev/random of=./test.file bs=512 count=1500
        let COUNTER=COUNTER+1
    done

    --
    No, no, you're not thinking; you're just being logical. --Niels Bohr
  • (Score: 3, Informative) by sgleysti on Friday March 17 2017, @06:24AM (1 child)

    Quoting from the following link:
    http://nvsl.ucsd.edu/index.php?path=projects/sanitize [ucsd.edu]

    Sanitization is well-understood for traditional magnetic storage, such as hard drives and tapes. Newer Solid State Disks (SSDs), however, have a much different internal architecture, so it is unclear whether what has worked on magnetic media will work on SSDs as well.

    Our results show that naively applying techniques designed for sanitizing hard drives on SSDs, such as overwriting and using built-in secure erase commands is unreliable and sometimes results in all the data remaining intact. Furthermore, our results also show that sanitizing single files on an SSD is much more difficult than on a traditional hard drive.

    • (Score: 0) by Anonymous Coward on Friday March 17 2017, @09:32AM


      "built-in secure erase commands is unreliable and sometimes results in all the data remaining intact. "

      Yes, the secure erase command just tosses the encryption key. Modern SSDs are encrypted by default with a unique key, which is replaced when you invoke the secure erase command at the firmware level. This is a secure way of wiping the drive without having to write to every cell.
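      To see why tossing the key is enough, here's a toy model in Python (purely illustrative: the SHA-256 counter-mode keystream stands in for the drive's real AES engine, and the key names and data are made up):

```python
import hashlib

def keystream(key: bytes, n: int) -> bytes:
    """Derive n pseudo-random bytes from the key (SHA-256 in counter mode)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def xor(data: bytes, mask: bytes) -> bytes:
    """XOR two equal-length byte strings."""
    return bytes(a ^ b for a, b in zip(data, mask))

# The flash holds only ciphertext; the key lives inside the controller.
key = b"controller-internal-key"
plaintext = b"sensitive user data"
stored_on_flash = xor(plaintext, keystream(key, len(plaintext)))

# With the key present, reads are transparent.
assert xor(stored_on_flash, keystream(key, len(stored_on_flash))) == plaintext

# "Secure erase": the controller simply generates a new key. The old
# ciphertext may still sit in the cells, but it no longer decrypts.
key = b"freshly-generated-key"
recovered = xor(stored_on_flash, keystream(key, len(stored_on_flash)))
assert recovered != plaintext
```

      Once the controller forgets the old key, reading the cells back yields only keystream-masked garbage, so no cell ever needs to be rewritten.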

  • (Score: 2) by Arik on Friday March 17 2017, @06:38AM (4 children)

    This is the thing though, it's not a trivial task to write to a sector on an SSD.

    Normally you send it directives as if it were an HDD but it has an internal controller that interprets those instructions as it sees fit.

    This is necessary in order to perform 'wear leveling' among other things. So just because you tell it to overwrite a file doesn't mean it actually will.
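    A toy model of that remapping in Python (purely illustrative; real controller firmware is far more complex than this):

```python
class ToySSD:
    """Toy model of an SSD's logical-to-physical mapping (not real firmware)."""
    def __init__(self, n_cells: int):
        self.cells = [b""] * n_cells   # physical flash cells
        self.writes = [0] * n_cells    # per-cell wear counters
        self.mapping = {}              # logical sector -> physical cell

    def write(self, sector: int, data: bytes):
        # Wear leveling: pick the least-worn free cell rather than
        # overwriting the currently mapped cell in place.
        used = set(self.mapping.values())
        free = [c for c in range(len(self.cells)) if c not in used]
        target = min(free, key=lambda c: self.writes[c])
        self.cells[target] = data
        self.writes[target] += 1
        self.mapping[sector] = target

    def read(self, sector: int) -> bytes:
        return self.cells[self.mapping[sector]]

ssd = ToySSD(n_cells=8)
ssd.write(17, b"secret")
old_cell = ssd.mapping[17]
ssd.write(17, b"garbage")    # "overwrite" the same logical sector
print(ssd.read(17))          # b'garbage' -- what the OS sees
print(ssd.cells[old_cell])   # b'secret' -- still sitting in the old cell
```

    The OS believes the sector was overwritten, but the original data survives in the previously mapped cell.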

    And even if it did, there's still the possibility that a sophisticated attacker could scan the chips directly and distinguish between a 1 that was a 0 last time, and a 1 that was a 1 last time, as well.

    No, you can't count on that working at all.
    --
    If laughter is the best medicine, who are the best doctors?
    • (Score: 1) by shrewdsheep on Friday March 17 2017, @09:50AM (3 children)


      Normally you send it directives as if it were an HDD but it has an internal controller that interprets those instructions as it sees fit.

      And that would be the same for an HDD. There are spare sectors (up to the full capacity) in HDDs, too, and the sector mapping is abstracted in a way comparable to SSDs. The difference is that physical proximity matters more in HDDs than in SSDs.

      And even if it did, there's still the possibility that a sophisticated attacker could scan the chips directly and distinguish between a 1 that was a 0 last time, and a 1 that was a 1 last time, as well.

      To my knowledge this is lore of old by now. HDDs as well as SSDs run close to the limit of reliability when storing data, so overwriting once will not leave much of a trace. Also a quick web search reveals that SSD recovery services do not take apart the drives. So it appears that recovery from the chips themselves is not happening at the moment.

      Overwriting multiple times with whatever input (zeros or random, which doesn't matter as pointed out elsewhere in this thread) should guarantee a better full overwrite in SSDs due to wear leveling as compared to HDDs.

      • (Score: 0) by Anonymous Coward on Friday March 17 2017, @01:02PM (1 child)


        And that would be the same for an HDD. There are spare sectors (up to the full capacity) in HDDs, too, and the sector mapping is abstracted in a way comparable to SSDs. The difference is that physical proximity matters more in HDDs than in SSDs.

        Spinning drives have extra sectors, yes, but unlike SSDs the extra storage is not routinely used. They are only used when the original sectors fail. You can check the SMART counters to see if this has happened on your drive(s).

        Typically, having more than zero remapped sectors on a hard disk is an indication of imminent failure.
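        For example, `smartctl -A /dev/sdX` (from smartmontools) prints the SMART attribute table; attribute 5, Reallocated_Sector_Ct, is the remap counter. A rough sketch of pulling it out in Python (the sample output below is made up, though the column layout follows smartctl's table format):

```python
# Hypothetical smartctl -A output; the values are invented for illustration.
sample = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   036    Pre-fail  Always       -       0
  9 Power_On_Hours          0x0032   097   097   000    Old_age   Always       -       14322
"""

reallocated = None
for line in sample.splitlines()[1:]:
    fields = line.split()
    if fields[1] == "Reallocated_Sector_Ct":
        reallocated = int(fields[-1])   # RAW_VALUE is the last column
print("reallocated sectors:", reallocated)   # reallocated sectors: 0
```

        In practice you would feed it the real output of `smartctl -A` for your drive instead of the sample string.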

        • (Score: 2) by Immerman on Friday March 17 2017, @03:52PM


          >Typically, having more than zero remapped sectors on a hard disk is an indication of imminent failure.

          Is it? That's not my experience. My understanding is that there are a number of highly localized manufacturing flaws that can be present on a platter and may cause small clusters of sectors to fail, without indicating anything substantial about the drive as a whole. It's when the number of bad sectors starts climbing rapidly that drive failure is imminent.

          Of course, best practice is to replace the drive at the first sign of trouble just in case it's a sign of something serious, since a serious problem can worsen very rapidly and your first warning may also be your last. But I believe that if you have a drive with only a few bad sectors, the odds are high that it doesn't have any serious problems - hammer it hard, with say several full-disk rewrites, and if the number of bad sectors stabilizes quickly you've probably got a lot of life left in the drive.

          I might not trust it with anything really critical, but I wouldn't really trust *any* single drive for that. I've had many, many drives work without problems for years with a few bad sectors, until they were eventually retired as obsolete, and only a couple that actually developed serious problems - both of which failed so fast that I salvaged data file by file, starting with the most critical, because the computer would inevitably crash at some point and usually take the remainder of the active folder with it.

      • (Score: 2) by Arik on Saturday March 18 2017, @03:04AM

        Well, yeah, to a degree that is true as well, and I am not advocating you trust an HDD controller either. But it's particularly obvious that you can't trust that what happens inside an SSD is going to match the commands you sent.

        Either way, physical destruction is best and trusting anything less is probably not the wisest course. Overkill is better than underkill here.

        "Also a quick web search reveals that SSD recovery services do not take apart the drives. So it appears that recovery from the chips themselves is not happening at the moment."

        'Appears' being the keyword here. Trusting appearances doesn't always backfire but often enough it's still a really bad idea.

        --
        If laughter is the best medicine, who are the best doctors?
  • (Score: 1) by Soylentbob on Friday March 17 2017, @08:25AM (2 children)


    As you can see I overwrote 'test.file' with random (okay, pseudo-random) data.

    And the SSD will point to the new data when asked for the file. But nevertheless, the original data is most likely still on the disk.
    Solid-State Drives: The How [makeuseof.com] gives some easily digestible insight.

    To erase an entire disk, it would be something similar to:
    dd if=/dev/random of=/dev/diskx bs=1048576 count=${disk size in MB}
    where /dev/diskx is the disk you wish to erase.

    Following the explanation of the same link I wrote above, that should work if you overwrite the whole disk. It might not work if you overwrite a partition on the disk or part of the disk, though.

    It does not defeat some of the more esoteric attacks, though (data remanence in flash memory, memory sections previously disabled due to failure causes etc.) But to be honest, I wouldn't worry too much about these scenarios.

    • (Score: 2) by Immerman on Friday March 17 2017, @06:40PM (1 child)


      That's the standard technique for HDDs, but unfortunately you can't count on it working on an SSD. An SSD that says it holds 100GB will only let the PC access 100GB, but internally it has maybe another 10+GB of storage that gets rotated into use by the wear leveling algorithm. Rewrite the entire disk, and there's a 10+% chance that your sensitive data is still stored in that extra space, currently rotated out of use.

      Depending on the details of the wear leveling algorithm, your sensitive data may also have ended up on a cell that was considerably more worn than most, in which case even wiping the "entire" drive hundreds of times won't guarantee that the cell comes back into circulation to be written. At worst, your data may have been the last thing written to a cell before it was retired as overused, in which case it will *never* come back into circulation, but still be accessible by transferring the chip to a dedicated reader.

      • (Score: 1) by Soylentbob on Friday March 17 2017, @06:56PM


        I guess it is a matter of the required confidence level... For normal use, after filling the disk 2-3 times with reasonably small files, I'd consider it safe enough. Someone actually opening the device, desoldering the chips and so on is probably not realistic even for a criminal investigation; maybe for state secrets. In such cases I would expect the drive to have been encrypted.
        (Actually, that is pretty standard for my personal laptop as well, and mandatory for most company laptops.)

  • (Score: 4, Informative) by Immerman on Friday March 17 2017, @08:39AM


    As others said, SSDs work fundamentally differently than HDDs. Most importantly, HDDs use physical addressing: when you say "access sector 17" it goes out to sector 17 and accesses it. An SSD, though, uses logical addressing and instead goes to a lookup table to find out "okay, logical sector 17 is currently mapped to physical cell 94", and then goes to access cell 94 - all completely invisible to the PC.

    So you write sensitive data to "sector 17", which actually ends up in cell 94. Then you try to overwrite it - "write this garbage to sector 17" - and the SSD's wear leveling algorithm goes out and grabs whatever least-used cell is available, maybe 106, writes your garbage to that, and updates the logical sector map to say 17 now maps to 106. Cell 94, which still contains your sensitive data, never gets touched.

    So you figure, heck, I'll just overwrite *everything*, that'll catch it, right? Wrong, because SSD manufacturers know that neither their manufacturing nor their wear leveling is perfect, and some cells will wear out long before others. So they include more cells than are addressable by the PC, so that there are "spares" to replace cells that wear out. The PC thinks the SSD has 100 sectors, but it actually has 120 cells. Fill the drive with garbage, and all the current "extra" cells remain untouched. If you're lucky, a few consecutive rewrites will bring cell 94 back into circulation and it will get overwritten, but there's no guarantee of that. And if you're unlucky, the first time it goes to update cell 94 it notices that it's about to wear out, and so dumps it in the "broken" pile, data and all, never to be touched again - unless someone desolders the containing chip and puts it in a chip reader, in which case there's your data.
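    The overprovisioning arithmetic can be sketched in a few lines of Python (a toy model; the cell counts and the random mapping are illustrative, not how any real controller chooses cells):

```python
import random

LOGICAL_SECTORS = 100   # what the PC can address
PHYSICAL_CELLS = 120    # what the drive actually contains (overprovisioning)

# Assume every physical cell has held sensitive data at some point.
cells = ["sensitive"] * PHYSICAL_CELLS

# The controller maps each logical sector to some physical cell;
# model its choice as a random selection of 100 distinct cells.
mapping = random.sample(range(PHYSICAL_CELLS), LOGICAL_SECTORS)

# "Wipe" every sector the PC can see.
for cell in mapping:
    cells[cell] = "garbage"

leftover = cells.count("sensitive")
print(f"{leftover} of {PHYSICAL_CELLS} cells still hold old data")  # always 20 here
```

    However the mapping falls out, the 20 spare cells the PC cannot address are never touched by a full-disk overwrite.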

    There's really only one way to reliably wipe the data - store it as random garbage in the first place (aka encrypted), which many/most SSDs support natively, so that when you destroy the keys there's no longer any way to turn the garbage back into data.

    Many drives offer a "disk wipe" option as well - a special command that rapidly, internally deletes *everything* - but due to the nature of SSD cell failure, you can't count on that if it's important: failing cells tend to become write-resistant while still being readable, so just because the drive *thinks* it wiped a cell doesn't mean the previous data is actually gone.
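    A toy illustration of such a write-resistant cell (Python, illustrative only - real flash failure modes are analog and messier):

```python
class FailingCell:
    """Toy model: a worn-out cell that resists writes but still reads back."""
    def __init__(self, value: int):
        self.value = value
        self.worn_out = True   # this cell has reached end of life

    def write(self, value: int) -> bool:
        if self.worn_out:
            return False       # the write fails to change the stored charge
        self.value = value
        return True

    def read(self) -> int:
        return self.value

cell = FailingCell(value=1)    # holds one bit of sensitive data
ok = cell.write(0)             # the drive's wipe command "clears" it...
print(ok)                      # False -- the write never took
print(cell.read())             # 1 -- the old bit is still readable
```

    If the firmware doesn't verify the write, it happily reports the cell as wiped while the data remains readable.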

  • (Score: 3, Informative) by VLM on Friday March 17 2017, @12:17PM


    Everyone is giving you the long answer, but the short answer is that the world is analog. Based on experience with magnetic media like cassette tapes and UV-erased EPROMs, everything is fundamentally analog, and it's "easy" to mess with stuff at the analog level, apply your own signal-to-noise ratio, and recover the data.

    You never wrote a zero to that EPROM cell. You injected a vaguely unclear number of electrons into that floating gate that you can hopefully measure later, by shoving in a known current pulse caused by a known voltage pulse, after you think you shorted that floating gate to ground through a transistor of unclear resistance - and time is money, so they cut the erase time as short as possible. And in electronics nobody pays for tolerances better than 1% or so, and there is/was lots of good EE work done at the 20% and 10% tolerance level. And all this stuff scales with temperature, and you have no idea if it's -65C or 120C, but you think it might work. And it all probably depends on clock speed, and you don't know that either.

    Most of the time it kinda works. There's an electric field on that floating gate corresponding to 100 electrons, ideally meaning a zero. But in the real world 99 electrons means there was a 0 stored there last time and 101 means there was a 1 stored there last time, and the usual reading algo just says less than 110 means a 0 - but if you work around that, it's all kinds of fun.

    It's actually very similar to breaking XOR encryption: a series of EPROM cells with 99, 101, 99, 101 electrons is read as hex 0, aka 0000, but due to poor erasing (time is money and people want high performance...) you can subtract the nominal erased level from each cell and find that the last value held in those cells was hex 5, aka 0101.
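    That subtraction trick can be sketched numerically (Python; the electron counts and thresholds are the made-up numbers from this comment, not real device physics):

```python
# Electron counts left on four floating gates after a too-short erase pulse.
# Nominally ~100 electrons each; the previously stored bit leaves a small bias.
counts = [99, 101, 99, 101]

# The normal read circuit uses a coarse threshold: under 110 electrons reads as 0.
coarse_read = [0 if c < 110 else 1 for c in counts]
print(coarse_read)      # [0, 0, 0, 0] -- the cells all look erased

# A fine-grained measurement sees the residue: above the nominal level
# means a 1 was stored there before the erase.
previous_bits = [1 if c > 100 else 0 for c in counts]
print(previous_bits)    # [0, 1, 0, 1] -- hex 5, the value held before erasure
```

    The coarse read and the fine measurement disagree, and that gap is exactly the attacker's signal.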

    In the old days UV-erasable EPROMs were black magic, and leaving them in weak UV fields like office lights or indirect sunlight led to truly weird bit patterns appearing. Poor programmer burn timings could also essentially turn an EPROM into a PROM, plus or minus crazy 3-hour erase timings. Of course, overly long erasure timings did weird things to chips too, such as burning permanent zeros into them.

    In summary, in the analog world a lot of stuff barely works, and "barely works", in security speak, is an attack vector to be exploited.