
posted by martyb on Sunday May 10 2015, @10:37PM
from the powerlessness dept.

An SSD stored without power can start to lose data in as little as a single week on the shelf, depending on several factors. When most storage drives were mechanical, there was little risk of data loss or corruption this quickly, as long as the storage environment stayed within reasonable thresholds. The same is not true for SSDs: the Joint Electron Device Engineering Council (JEDEC), which defines standards for the microelectronics industry including standards for SSDs, shows in a presentation that for every 5 degrees C (9 degrees F) rise in the temperature at which an SSD is stored, its data retention period is approximately halved.

In a presentation by Alvin Cox on JEDEC's website titled "JEDEC SSD Specifications Explained" [PDF warning], graphs on slide 27 show that for every 5 degrees C (9 degrees F) rise in temperature where the SSD is stored, the retention period is approximately halved. For example, if a client application SSD is stored at 25 degrees C (77 degrees F) it should last about 2 years on the shelf under optimal conditions. If that temperature goes up 5 degrees C, the storage standard drops to 1 year.
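The halving-per-5-degrees rule from the slide can be turned into a rough back-of-the-envelope estimate. The following sketch assumes the article's baseline figure (about 2 years of retention at 25 degrees C for a client drive) and extrapolates from it; the function is an illustration of the rule of thumb, not a formula taken from the JEDEC specification.

```python
# Rule of thumb from the JEDEC slide: each 5 degrees C rise in storage
# temperature roughly halves the shelf retention period. The 2-year /
# 25 degrees C baseline is the article's example figure for a client SSD;
# the extrapolation itself is illustrative, not a JEDEC formula.

def estimated_retention_weeks(storage_temp_c,
                              base_temp_c=25.0,
                              base_retention_weeks=104.0):
    """Halve the baseline retention for every 5 C above the baseline."""
    return base_retention_weeks / 2 ** ((storage_temp_c - base_temp_c) / 5.0)

print(round(estimated_retention_weeks(25)))  # ~104 weeks (about 2 years)
print(round(estimated_retention_weeks(30)))  # ~52 weeks (about 1 year)
print(round(estimated_retention_weeks(55)))  # ~2 weeks
```

At 55 degrees C (a hot car trunk or warehouse), the same drive's estimated retention collapses to a couple of weeks, which is consistent with the "as little as a single week" claim above.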

[...] When you receive a computer system for storage in legal hold, drive operating temperature and ambient storage temperature are probably not the first things you consider. You cannot control the materials that comprise the drive or its prior use. You can control the ambient temperature of the storage area, which will potentially aid in data retention. You can also ensure that power is supplied to the drives while in storage. Most importantly, you can control how the actual data is retained.

[...] What started this look into SSDs? An imaging job of a laptop SSD that had been left in storage, before it was turned over to us, for well over the 3-month minimum retention period quoted by the drive's manufacturer. This drive had a large number of bad sectors identified during imaging. Not knowing the history, I did not consider the possibility of data loss due to the drive having been in storage. Later, I learned that the drive was functioning well when it was placed into storage. When returned to its owner a couple of months after the imaging, the system would not even recognize the drive as a valid boot device. Fortunately, the user data and files were preserved in the drive image that had been taken, so there was no net loss.

Now imagine a situation in which an SSD was stored in legal hold where the data was no longer available for imaging, much less use in court. Ignorance of the technology is no excuse, and I am sure the opposing counsel would enjoy the opportunity to let the court know of the "negligent" evidence handling in the matter.

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2) by RedBear (1734) Subscriber Badge on Monday May 11 2015, @04:43AM (#181359)

    So, we're just talking about data loss here, right? As in, sectors on the unpowered SSD losing a verifiable value of whether they hold "ones" or "zeros"? If you repartition and reformat the drive, can you then continue to use it as normal? Do the corrupted sectors go right back to being perfectly usable?

    I'm assuming this issue isn't the same thing as SSD sectors permanently going bad due to being overwritten too many times, but I'd like some verification.

    ¯\_ʕ◔.◔ʔ_/¯ LOL. I dunno. I'm just a bear.
    ... Peace out. Got bear stuff to do. 彡ʕ⌐■.■ʔ
  • (Score: 5, Interesting) by TheRaven (270) on Monday May 11 2015, @08:07AM (#181398) Journal
    Yes. Flash cells hold a value that, over time, becomes harder to distinguish. I'm a bit sceptical about the claims in TFA, because normal wear levelling algorithms mark a cell as bad if it can't be expected to hold a discernible value for a year. There was a nice paper at EuroSys last year proposing exposing this to the OS so that the cells that can still hold data reliably for a day can be used for scratch space (the OS can explicitly refresh them faster, but not use them for anything that you'd want to maintain across reboots, e.g. swap, caches and so on).
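The scheme described here (weak cells kept for OS-refreshed scratch space instead of being marked bad) can be sketched roughly as follows. All names and the one-year threshold below are hypothetical illustrations of the idea, not any real FTL or OS API.

```python
# Hypothetical sketch of the scheme described above: rather than marking a
# weak flash block bad outright, expose a per-block retention estimate so
# the OS can still use short-retention blocks for scratch data it refreshes
# itself (swap, caches), reserving long-retention blocks for persistent data.
# The thresholds and names are illustrative assumptions, not a real API.

PERSISTENT_MIN_DAYS = 365  # conventional "mark bad if below this" threshold
SCRATCH_MIN_DAYS = 1       # still usable for explicitly refreshed scratch data

def classify_block(retention_days):
    if retention_days >= PERSISTENT_MIN_DAYS:
        return "persistent"   # safe for ordinary filesystem data
    if retention_days >= SCRATCH_MIN_DAYS:
        return "scratch"      # swap/caches the OS refreshes within a day
    return "bad"              # too weak even for scratch use

blocks = {"b0": 800, "b1": 90, "b2": 0.2}
print({name: classify_block(days) for name, days in blocks.items()})
```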
    sudo mod me up