I am not sure that I chose the proper forum for this, but here goes.
My problem started with a RAID 1 array, when I began getting disk failure messages.
I was using the Intel RST (IRST) software to manage two 1 TB drives (WD Blue, on a Windows 10 upgrade install).
They were deployed in 2013 and have approximately 2.5 years of power-on time.
So yeah, maybe one's failing.
This RAID is for data storage only. Not boot. My OS and programs are on an SSD.
I fiddled with cables and such, then decided to upgrade to 2 TB drives.
So the RAID 1 is now up and running with two new 2 TB Seagates.
Using an external dock, I discovered that the "bad" WD drive still had all the data on it.
But sometimes it would act up: simply gone, or flagged as "bad" in Disk Management.
(The other disk seems fine, and functions as an external.)
I started fiddling with the bad disk: removed the partition, rebuilt the MBR, and so on.
I used MiniTool Partition Wizard. After a while, the disk seemed to be working again.
I ran chkdsk [letter]: /r
CHKDSK found no problems with the drive.
I have copied data to the drive, and it reads fine.
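As an extra sanity check that reads really do succeed, a sequential read pass can be scripted. Here is a minimal sketch in Python (the chunk size and the commented-out device path are my own assumptions, not anything the tools above use; raw device reads on Windows also require administrator rights):

```python
import os

SECTOR = 4096  # read in 4 KiB chunks; adjust to the drive's sector size

def read_scan(path, limit=None):
    """Read `path` sequentially in SECTOR-sized chunks.

    Returns (chunks_attempted, chunks_that_failed). A chunk that raises
    OSError is counted as an error and skipped.
    """
    errors = 0
    total = 0
    with open(path, "rb", buffering=0) as f:
        while True:
            if limit is not None and total >= limit:
                break
            try:
                chunk = f.read(SECTOR)
            except OSError:
                errors += 1
                total += 1
                f.seek(total * SECTOR)  # skip past the unreadable chunk
                continue
            if not chunk:  # end of file/device
                break
            total += 1
    return total, errors

# Hypothetical usage against the raw volume (requires admin rights):
# total, errors = read_scan(r"\\.\E:", limit=100000)
```

If this reports zero errors over the same drive that MiniTool and HD Tune flag as 100% bad, that would support CHKDSK's verdict that the reads themselves are fine.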
Here's my question.
I used MiniTool to run a surface test a few times, and it reports that all the blocks have read errors.
I also tried HD Tune's error scan, and it says the same thing.
All the blocks. 100% errors.
But the drive works.
And CHKDSK concurs.
Why are these other error tests giving read error results?
Does anybody have any idea what's up with these read errors?
Thanks for your help.
Edited by senseless, 04 November 2017 - 11:17 AM.