RAID Configuration Reliability
Discussions focus on various RAID levels like RAID 5, 6, 10, RAIDZ, and alternatives such as ZFS or Btrfs, debating their reliability, fault tolerance, rebuild risks, performance, and the necessity of backups.
Sample Comments
RAID is fine. As long as it's RAID 10 haha.
RAID is not a backup guarantee. Use zfs.
RAID1 meaning you get one chance. Just move to RAID10 if you're worried. (/s)
That's true of any hardware-level RAID. Do RAID in software or, better yet, use btrfs.
RAID10 (mdraid+lvm+xfs (never use LVM RAID) or btrfs) is way more convenient in terms of rebuild speed, simplicity, and performance, and it also supports online growing and online shrinking (btrfs only). If there are any failures in a batch, it signals the need for possible proactive replacement of the rest. The biggest predictor Google found that SMART doesn't catch is slightly elevated temperatures. ZoL (as opposed to SmartOS/Solaris ZFS) bit me before (array permanently unmountable on good drives) and …
Why not Raid 60 with Btrfs? It'll tolerate two disk loss with pro-active parity protection via btrfs and be faster and provide you with more disk space.
You should probably be using RAIDZ2 in that case, which survives a simultaneous two-drive failure without a problem.
I'm using RAIDZ2 with 8 drives. It's even more fault tolerant than traditional RAID1 (when you lose <= 2 drives at once) and still only eats 2/n x capacity (n=8) for parity.
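The parity arithmetic in the comment above can be checked with a short sketch. The drive count matches the comment (n=8); the drive size is an illustrative assumption:

```python
# Illustrative parity-overhead math for RAIDZ2 (double parity).
def raidz2_usable(n_drives: int, drive_tb: float) -> float:
    """Usable capacity of an n-drive RAIDZ2 vdev: two drives' worth goes to parity."""
    if n_drives < 4:
        raise ValueError("RAIDZ2 needs at least 4 drives in practice")
    return (n_drives - 2) * drive_tb

n, size_tb = 8, 4.0              # hypothetical pool: 8 x 4 TB drives
usable = raidz2_usable(n, size_tb)
overhead = 2 / n                 # fraction of raw capacity lost to parity: 2/n
print(usable)                    # 24.0 TB usable out of 32 TB raw
print(overhead)                  # 0.25 -> 25% parity overhead
```

So at n=8 the double parity costs the same fraction of raw space as a 4-drive RAID10 mirror half would at n=4, while tolerating any two drives failing at once.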
Raid 5/6 should not be used. Other levels work just fine. This is not a raid-in-general problem.
Use a RAID5 and hope the write hole doesn't eat it all =(
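The "write hole" mentioned above can be demonstrated with a toy single-parity stripe: if a data block is rewritten but the system dies before the matching parity update, a later disk failure reconstructs stale data. A minimal XOR-parity sketch (block values are arbitrary):

```python
# Toy RAID-5 stripe: 3 data blocks + 1 XOR parity block.
data = [0b1010, 0b0110, 0b0011]
parity = data[0] ^ data[1] ^ data[2]   # parity is consistent at this point

# Interrupted write: data[1] reaches disk, but the crash happens
# before the matching parity update (the "write hole").
data[1] = 0b1111                       # parity NOT recomputed

# Later, the disk holding data[0] fails; we reconstruct it from the
# surviving blocks and the now-stale parity.
reconstructed = parity ^ data[1] ^ data[2]
print(reconstructed == 0b1010)         # False: silently wrong data
```

This is why hardware RAID controllers use battery-backed write caches, and why ZFS sidesteps the problem entirely with copy-on-write full-stripe transactions.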