Surface tests on drives in a RAID array
I have 4x Seagate ST4000NM0023 drives in a RAID 5 array, which Windows 10 sees as one virtual disk. Is there a way to perform a surface test on a single drive in the array? I also can't complete extended self-tests on the individual drives, as they loop, and so do the short self-tests. The controller is a PERC H700. I'm also confused about how effective the surface test is when it is testing all drives simultaneously.
hdsentinel (Site Admin)
Re: Surface tests on drives in a RAID array
No, I'm afraid it is not possible to perform a surface test on ONE specific drive of a RAID array.
Of course, this is completely normal and expected: the very purpose of a RAID array is to prevent the disk drives from being accessed independently (for reading/writing any sector), as that would cause inconsistency. Instead, the RAID controller has to manage the drives together whenever any sector is read or written (which includes the reads performed by the surface test), because each sector involves multiple drives (especially in a RAID 5 array).
We are "lucky" that we can access the member disks of the RAID array to see their disk status (health, temperature, possible problems, degradations) and ideally we can launch the internal self test functions too (Disk menu -> Short self test, Extended self test) as this does not affect a particular sector.
> I also can't do extended self-tests on the individual drives as they loop as do the short self-tests.
From what you describe, you CAN start the extended self-test on the individual drives.
It is completely normal and expected that it "loops" (if that means the progress jumps back and the test takes a VERY long time), because these hardware self-tests only progress during idle periods of the disk drive, when no real disk operations (reads/writes) are being performed by the OS or by the RAID controller during its proactive error detection.
In a RAID environment, these hardware self-tests may not work at all - but if they do work, they will surely run for MUCH longer than estimated (the estimated time is provided by the manufacturer and applies only when the disk drive performs no reads/writes at all). So these self-tests may run for a really long time in the case of a RAID array.
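As a rough illustration of why the estimate is so far off (the numbers below are assumed, not read from your drives): the manufacturer's estimate assumes 100% idle time, so if the drive is only idle for a fraction of the time in the array, the wall-clock duration grows roughly in inverse proportion to that fraction.

Code:

# Back-of-the-envelope sketch with assumed example numbers.
estimated_hours_when_idle = 8.0   # manufacturer estimate at 100% idle (example value)
idle_fraction = 0.2               # drive is idle only ~20% of the time in the array (example value)

effective_hours = estimated_hours_when_idle / idle_fraction
print(f"Expected wall-clock time: ~{effective_hours:.0f} hours")  # ~40 hours instead of 8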
> I'm also confused about how effective the surface test is when it's testing all drives simultaneously.
The purpose of combining the tests is exactly to allow us to:
- verify the disk drives independently (by the hardware self-tests, running in the background)
- verify the complete array together (by the surface test), as it verifies the complete storage subsystem (the cables, connections, possible backplane, the controller itself and so on) in addition to the hard disk drives.
Ideally we can also test the disk drives independently (with the surface test too) before the RAID array is created, exactly to reveal any issues before the disks are configured as a RAID array.
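To show what a read surface test does conceptually (a simplified sketch only, not how Hard Disk Sentinel implements its surface test): it reads the device sequentially and records the areas that cannot be read. On a hardware RAID the only device the OS can read this way is the controller's virtual disk, which is exactly why a failed read cannot be attributed to one specific member drive. The device path, chunk size and error handling below are examples; raw device access needs administrator rights and, on Windows, sector-aligned reads.

Code:

import os

DEVICE = r"\\.\PhysicalDrive1"   # example: the PERC virtual disk as Windows sees it (use /dev/sdX on Linux)
CHUNK = 1024 * 1024              # read 1 MiB at a time (a multiple of the sector size)

def surface_scan(device: str) -> list:
    """Sequentially read the whole device; return byte offsets where a read failed."""
    bad_offsets = []
    fd = os.open(device, os.O_RDONLY | getattr(os, "O_BINARY", 0))
    try:
        offset = 0
        while True:
            try:
                data = os.read(fd, CHUNK)
            except OSError:
                # The read failed somewhere in this chunk: note it and skip past it.
                bad_offsets.append(offset)
                offset += CHUNK
                os.lseek(fd, offset, os.SEEK_SET)
                continue
            if not data:          # end of device reached
                break
            offset += len(data)
    finally:
        os.close(fd)
    return bad_offsets

if __name__ == "__main__":
    print(surface_scan(DEVICE))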
Did you see a lower Health % and/or any problem reported for any hard disk in the RAID array?
If you prefer, please use the Report menu -> Send test report to developer option, so it is possible to check the situation and advise.