Posted 14 May 2009 - 01:24 PM
When a drive in a redundant RAID setup hits a bad sector, it first tries to reread it. If that fails, it checks whether the data is still in the drive's onboard cache (most drives have 16 or 32MB), and if it's not there, the RAID controller checks its own cache. If the data is in neither cache, the controller recreates the data in the bad sector using the RAID redundancy.
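That last-resort step works because RAID 5 parity is just the XOR of the data blocks in a stripe, so any single missing block is the XOR of everything else. A minimal sketch (block contents are made up for illustration):

```python
# RAID 5 parity sketch: parity = XOR of all data blocks in a stripe,
# so one lost block can be rebuilt by XORing the survivors with parity.
from functools import reduce

def xor_blocks(blocks):
    # XOR corresponding bytes across all blocks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

stripe = [b"\x10\x20", b"\x03\x04", b"\xaa\xbb"]      # three data blocks (illustrative)
parity = xor_blocks(stripe)                            # written when the stripe is written
rebuilt = xor_blocks([stripe[0], stripe[2], parity])   # recreate lost block 1
assert rebuilt == stripe[1]
```

Real controllers do this per stripe in hardware, but the arithmetic is exactly this.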
The bigger the drive, the more sectors it has and the higher the chance of running into a bad one.
When a RAID 5 has a failed drive and is either rebuilding to a hot spare or waiting for you to replace the failed drive, hitting a bad sector during that window means the data cannot be recreated. If your computer had relevant data in that sector, you could see data loss or corruption. There's also a chance that a second drive fails during this window: most people use drives of about the same age and model, so when one drive fails (outside the first 90 days or so of infant mortality), it's not impossible that another is close to failing as well.
With RAID 6 there are 2 parity calculations, so if 1 drive has failed you still have a 2nd parity to recreate missing data. You can lose 2 drives before you're in a degraded (non-redundant) state.
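The two parities are independent equations, which is why two drives' worth of data can be solved for. A minimal sketch of the usual P+Q scheme over GF(2^8), the field real controllers use (the layout and names here are illustrative, not any vendor's implementation):

```python
# RAID 6 P+Q sketch over GF(2^8) with primitive polynomial 0x11d.
# P is plain XOR; Q weights drive i's block by g**i (generator g = 2).
# Two independent equations let us solve for two lost data blocks.

EXP = [0] * 512   # antilog table, doubled to skip mod-255 in gmul
LOG = [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gmul(a, b):
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

def gdiv(a, b):
    if a == 0:
        return 0
    return EXP[(LOG[a] - LOG[b]) % 255]

def pq(data):
    """Compute the two parity bytes for one byte-slice of a stripe."""
    p = q = 0
    for i, d in enumerate(data):
        p ^= d
        q ^= gmul(EXP[i], d)   # Q adds drive i's byte scaled by g**i
    return p, q

def recover_two(data, p, q, a, b):
    """Rebuild lost data drives a and b (a < b) from survivors plus P and Q."""
    px, qx = p, q
    for i, d in enumerate(data):
        if i not in (a, b):
            px ^= d
            qx ^= gmul(EXP[i], d)
    # Now px = Da ^ Db and qx = g^a*Da ^ g^b*Db; solve the 2x2 system.
    db = gdiv(qx ^ gmul(EXP[a], px), EXP[a] ^ EXP[b])
    da = px ^ db
    return da, db

data = [0x12, 0x34, 0x56, 0x78]       # one byte per data drive (illustrative)
p, q = pq(data)
assert recover_two(data, p, q, 1, 3) == (data[1], data[3])
```

Losing one drive just falls back to the XOR-parity case; losing two uses both equations, which is the extra math the controller has to grind through on every write.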
The downside to RAID 6 is the performance hit, as the RAID controller has to do 2 parity calculations on every write.
I'd recommend RAID 6 for larger SATA setups. With just 4 drives you may want to consider RAID 10 instead for more performance, and each drive has a full mirror on another drive (the most redundant of the standard RAID levels).