Silent Data Corruption
Discussions center on detecting and preventing silent data corruption or bit rot using filesystems like ZFS and BTRFS with checksums, scrubs, and RAID, versus traditional filesystems, emphasizing the role of backups and error correction.
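The detection side of this discussion can be sketched outside the filesystem, too. The following is a minimal, illustrative Python sketch (all file and function names are assumptions, not from the thread) of recording SHA-256 checksums for files and later "scrubbing" them, which is similar in spirit to the per-block checksum verification that ZFS and btrfs perform internally:

```python
# Sketch: detect silent corruption by recording SHA-256 checksums in a
# manifest and re-verifying them later (a "scrub"). This only *detects*
# corruption; repairing it requires redundancy (RAID/parity/backups).
import hashlib
import json
from pathlib import Path

def checksum(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's contents."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def record(paths, manifest: Path) -> None:
    """Store a checksum for every file in a JSON manifest."""
    manifest.write_text(
        json.dumps({str(p): checksum(Path(p)) for p in paths})
    )

def scrub(manifest: Path) -> list:
    """Re-hash every file and report those whose checksum changed."""
    expected = json.loads(manifest.read_text())
    return [p for p, digest in expected.items()
            if checksum(Path(p)) != digest]
```

Note that, as one comment below points out, a scheme like this cannot tell whether the file or the stored checksum was corrupted; filesystems address that by checksumming the checksum blocks themselves up a tree.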
Sample Comments
You could use ZFS, then the file cannot silently become corrupted.
Maybe it works like a RAID array with parity data to repair corruption :-)
Generally I'd recommend using a filesystem designed for that. Something like btrfs or (I think) ZFS, which have checksums built into the filesystem (and if you set them up in a RAID configuration, these checksums can be used to correct data as well).
Doesn't ZFS have a mechanism for periodically checking for and correcting bit rot?
what if the corruption only affected the stored checksum, but not the data itself?
ZFS has checksumming and scrubs, which can catch lots of data corruption that (most?) other filesystems can't catch at all.
I don't know the numbers, but the probability of getting 26 corrupted at-rest files through natural causes sounds pretty much like winning the lottery twice on the same day you were struck by lightning twice. Checksums wouldn't have fixed this; they'd only alert the user to the fact the damage had already been done, which is exactly what the decompressor did in its own special way. As another comment points out, error correcting codes are the way to handle this, and it's already…
Why would the file system affect bit rot rate?
I would generally suggest you're more likely to corrupt/lose your whole backup than to have one corrupted bitflip not addressed by the filesystem or underlying storage.
Your solution does not protect you from silent data corruption.
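Several comments above contrast detection (checksums) with repair (RAID parity, error correcting codes). A minimal sketch of how the two combine, assuming RAID-5-style single parity over equal-sized blocks (all names here are illustrative, not from the thread): the parity block is the XOR of all data blocks, so once checksums identify which single block went bad, XOR-ing the surviving blocks with the parity block rebuilds it.

```python
# Sketch: checksums *locate* the bad block; XOR parity *rebuilds* it.
# Single parity can repair at most one known-bad block, like RAID 5.
import hashlib

def xor_parity(blocks):
    """XOR equal-sized blocks together to produce a parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def repair(blocks, parity, checksums):
    """Find the one block whose SHA-256 no longer matches and rebuild it."""
    bad = [i for i, blk in enumerate(blocks)
           if hashlib.sha256(blk).hexdigest() != checksums[i]]
    if not bad:
        return blocks                       # nothing to repair
    if len(bad) > 1:
        raise ValueError("single parity can only repair one bad block")
    i = bad[0]
    # XOR of all surviving blocks plus parity equals the missing block.
    survivors = [blk for j, blk in enumerate(blocks) if j != i] + [parity]
    return blocks[:i] + [xor_parity(survivors)] + blocks[i + 1:]
```

This mirrors the point made in the comments: a checksum alone only tells you the damage is done, but checksums plus redundancy (parity in a RAID-style layout, or ECC more generally) let the system silently heal the data during a scrub.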