Hey guys,

I want to shred/sanitize my SSDs. If it were a normal hard drive I would stick to ShredOS / nwipe, but since SSDs seem to be a little more complicated, I need your advice.

Reading through some posts on the internet, I see many people recommend using the manufacturer's software for sanitizing. Currently I am using the SN850X SSD from Western Digital, but I also have a 990 PRO SSD from Samsung. Neither manufacturer seems to offer specialized Linux-compatible software for this kind of operation.

What would your approach be to shredding an SSD (without physically destroying it)?

~sp3ctre

  • 𝕽𝖚𝖆𝖎𝖉𝖍𝖗𝖎𝖌𝖍@midwest.social · 6 hours ago

    I did some light reading. I see claims that wear leveling only ever writes to zeroed sectors. Let me get this straight:

    If I have a 1TB SSD, and I write 1TB of SecretData, and then I delete it and write 1TB of garbage to the disk, it's not actually holding 2TB of data, with the SecretData hidden underneath wear leveling? That's the claim? And if I overwrite that with another 1TB of garbage, it's holding, what now, 3TB? Each data sequence hidden somehow by the magic of wear leveling?

    Skeptical Ruaraidh is skeptical. Wear leveling ensures data on an SSD is written to the free sectors with the lowest write count. It can't possibly retain old data once a full device's worth of new data has been written to it.

    I see a popular comment on SO saying you can't trust dd on SSDs, and I challenge that: in this case, wiping the entire disk by dumping /dev/random over it must clean the SSD of all other data. Otherwise, someone's invented the storage version of a perpetual motion machine. To be safe, sync and read it back, and maybe dump again, but I really can't see how an SSD would hold more data than its capacity.

    dd if=/dev/random of=/dev/sdX bs=1M status=progress   # no count: keep writing until the device is full

    If you’re clever enough to be using zsh as your shell:

    repeat 3 (dd if=/dev/random of=/dev/sdX bs=1M ; sync ; dd if=/dev/sdX of=/dev/null bs=1M)

    Each pass costs every cell one write, so three passes is three writes; with modern endurance ratings of 3,000-100,000 writes per cell, that's not significant.
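    For anyone not on zsh, the same overwrite/sync/read-back loop works in plain sh. The sketch below runs against a small temp file standing in for /dev/sdX, so it's harmless to try as-is; point `dev` at the real device (as root) to actually wipe.

```shell
# Overwrite/sync/read-back loop, plain-sh version of the zsh one-liner.
# $dev is an ordinary 4 MiB temp file standing in for /dev/sdX so this
# demo is safe to run; substitute the real device path to actually wipe.
dev=$(mktemp)
passes=3
i=0
while [ "$i" -lt "$passes" ]; do
    dd if=/dev/urandom of="$dev" bs=1M count=4 status=none  # overwrite pass
    sync                                                    # flush caches to the device
    dd if="$dev" of=/dev/null bs=1M status=none             # read back to verify it's readable
    i=$((i + 1))
done
echo "completed $i overwrite passes"
rm -f "$dev"
```

    On a real device you would drop `count=4` so each dd runs until the drive is full.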

    Someone mentioned blkdiscard. If you really aren't concerned about forensic analysis, this is probably the fastest and least impactful answer: it won't shorten cell life by even a measly couple of writes. But it also doesn't actually remove the data; it just tells the SSD that those cells are free and empty. Probably really hard to reconstruct data from that, but also probably not impossible. dd is the shredding option: safer, slower, and with a tiny impact on drive lifespan.
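    A sketch of the blkdiscard route (util-linux), with /dev/sdX as a placeholder. These commands are destructive on a real device, so each one below is prefixed with `echo` to make the sketch safe to run as-is; drop the `echo` (and run as root) to actually discard.

```shell
# DESTRUCTIVE when pointed at a real device: /dev/sdX is a placeholder.
# Each command is echoed rather than executed so this sketch is safe to run.
dev=/dev/sdX

# Read-only capability check first: nonzero DISC-GRAN / DISC-MAX in the
# lsblk output means the device supports discard.
echo lsblk --discard "$dev"

# Discard every LBA on the device - near-instant, no write wear:
echo blkdiscard "$dev"

# blkdiscard also has a --secure flag that requests a secure discard
# (the drive must actually purge the blocks), if the hardware supports it:
echo blkdiscard --secure "$dev"
```

    The `--secure` variant is the closest blkdiscard gets to addressing the forensic-analysis concern, but support varies by drive.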

    • bizdelnick@lemmy.ml · 5 hours ago

      /dev/random, seriously? This will take ages and has no advantage over /dev/zero. Even when you really need to fill your drive with random data, use /dev/urandom; at least there's a chance it will finish within a couple of days. And no, there's no guarantee that it will wipe all blocks, because there are reserved blocks that only the device firmware can access and rotate. Some data on rotated blocks can still be accessible to forensic analysis, if you care about that.
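      The speed difference is easy to measure on your own machine. A rough sketch (64 MiB reads so it finishes quickly; absolute numbers vary by kernel and CPU, and on recent kernels /dev/random no longer blocks, so the gap may be small):

```shell
# Rough throughput comparison: time a 64 MiB read from a given source.
time_src() {
    start=$(date +%s%N)                               # nanoseconds since epoch (GNU date)
    dd if="$1" of=/dev/null bs=1M count=64 status=none
    end=$(date +%s%N)
    echo "$1: $(( (end - start) / 1000000 )) ms for 64 MiB"
}

time_src /dev/urandom
time_src /dev/zero
```

      /dev/zero should come out far ahead, which is why it's the usual choice when randomness isn't actually required.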

      • catloaf@lemm.ee · 5 hours ago

        I think most modern distros make /dev/random behave like urandom anyway. These days the PRNG is strong enough that it doesn't need to stop and wait for more entropy.