
RAID 5 members fail constantly


#1 blairgerman


  • Members
  • 1 posts
  • Local time:07:03 AM

Posted 11 January 2016 - 06:49 PM

I've got an Intel ICH10R motherboard-integrated controller running a RAID 5 with three 1 TB Western Digital RE-class drives as members.


A few months back, a drive on port 1 failed at about a year old. I replaced it. Then, shortly after, another member failed on port 2. I thought that was odd, but just replaced it as well.


Then, shortly after, the drive I'd just replaced on port 1 failed AGAIN.


At this point I figured something was wrong other than just failing drives.


I couldn't find anything wrong. I replaced the SATA cables. I replaced the PSU even though the voltages were okay (I figured it could be an intermittent thing, so I bought one with a voltage monitor).


Things were great for a while and I figured maybe it was the PSU.


Today the member on port 1 failed again. This is the 3rd drive to fail on that port.


By "failed" I mean: Matrix Storage Console v8.9 just says status "failed", and when I boot, the controller BIOS lists it as "failed".


I took this last drive out, hooked it up with a USB adapter, formatted it, and ran CHKDSK, and it's fine, so I don't know what "failed" means... maybe a SMART failure? I just know the controller drops it as a member and says failed.


I'm gonna stick this drive back in, wait the 2 days it takes to rebuild the array, and see if it lasts. A little tired of buying HDDs right now.


I also updated the Intel RST Windows driver today, but I don't see how that'd be the issue since the failure shows up in the controller BIOS.


I'm looking at a BIOS update for my motherboard now. (I assume that's how the controller firmware would be updated since it's on-board.)




MBD: SuperMicro C7X58, Intel X58 chipset

CPU: i7 975 Extreme Edition, 3.33 GHz

Memory: 24 GB Corsair Vengeance PC3-12800 (6x4 GB), triple-channel, running in XMP mode

HDDs: 3x WD Re enterprise-class 1 TB, 7200 RPM, 64 MB cache, SATA 3 (WD1003FBYZ)

GPU: 2x NVIDIA GeForce GTX 260 in SLI (SLI turned off right now)

PSU: Corsair AXi 1200 W

a case with a ton of fans, a DVD-RW, stock CPU fan

Windows 7 64-bit SP1

running IIS 7.5 and an email server




#2 hamluis



  • Moderator
  • 56,411 posts
  • Gender:Male
  • Location:Killeen, TX
  • Local time:06:03 AM

Posted 11 January 2016 - 08:23 PM

FWIW: When a drive "fails"...that has nothing to do with the functional status of the file system. Chkdsk /r is a file-system tool; it can only attempt to repair detected problems in the file system (NTFS) and the files stored on the partition it is checking. It does absolutely nothing in regard to a "failing/failed" hard drive, which is a functional assessment that the hard drive is incapable of doing the job it was designed to do. If you suspect a "failed" hard drive, you need to run a hard drive diagnostic to assess whether the drive is capable of functioning satisfactorily.
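To make the chkdsk/SMART distinction concrete: the attributes that typically precede a controller dropping a drive (reallocated sectors, pending sectors, offline uncorrectables) are visible in SMART data, which CHKDSK never reads. Below is a minimal sketch, assuming smartctl-style `-A` output from smartmontools; the sample text is illustrative, not from the poster's drives, and this is no substitute for the vendor diagnostic being recommended.

```python
# Sketch: scan smartctl-style attribute output for raw values that often
# precede a drive being dropped from an array. SAMPLE is made-up output
# in the format produced by `smartctl -A`; it is not from a real drive.

SAMPLE = """\
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      RAW_VALUE
  5 Reallocated_Sector_Ct   0x0033   100   100   140    Pre-fail  12
197 Current_Pending_Sector  0x0012   200   200   000    Old_age   3
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   0
"""

# Attributes whose nonzero raw value is a warning sign.
WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def warning_attributes(text):
    """Return {attribute_name: raw_value} for watched attributes with raw > 0."""
    warnings = {}
    for line in text.splitlines():
        fields = line.split()
        if len(fields) >= 2 and fields[1] in WATCH:
            raw = int(fields[-1])  # RAW_VALUE is the last column
            if raw > 0:
                warnings[fields[1]] = raw
    return warnings

print(warning_attributes(SAMPLE))
# → {'Reallocated_Sector_Ct': 12, 'Current_Pending_Sector': 3}
```

On Windows, the real input for something like this would come from smartmontools (`smartctl -A /dev/sda`) or, for these WD drives, the manufacturer's own diagnostic utility.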


If I had a sequence of "failed" drives...I'd take a good look at the PSU and the cables used to connect those drives to the motherboard and PSU.



Edited by hamluis, 11 January 2016 - 08:25 PM.
