I had a 1TB WD drive report some errors in a rarely used TrueNAS system.
I removed it and ran a simple surface test, which (if I recall correctly) reported 1 red sector and a handful of yellow ones.
I didn't save that report because it seemed like a no-brainer at the time.
I simply followed the UI suggestion and performed a full Reinitialization, which reported back all green, suggesting the bad/problematic sectors were 'cured.'
Before trusting the drive, I decided to run another Write+Read destructive test to make sure everything was up to snuff before putting it back into the server or repurposing it for something less critical.
This time, however, there were several new errors.
Previously this was all done over a USB 3 hard drive docking station, which was taking a very long time, so I reran both procedures using an Icy Dock EZ-Tray cage, connecting the drive directly to the motherboard's SATA port.
Once again, Reinitialization showed perfect recovery with all green, but a follow-up Write+Read destructive test showed even more damaged areas.
I ran this again just to see if a pattern would emerge, and one did start to form.
Questions I have out of this:
1) Did I cause this by running the tests so close together for so many hours at a time, or is this deterioration natural from test usage on a drive in poor health?
2) If this is normal, why can a Reinitialization effort that is supposed to "heal" bad/poor sectors claim such total success, only for the drive to revert completely to self-destruction? In other words, what is the purpose of Reinitialization if its healing effectiveness and self-reporting aren't reliable?
Thank you.
Reinitialization vs Write Read Destructive.
- hdsentinel
- Site Admin
Re: Reinitialization vs Write Read Destructive.
Thanks for the images and the details.
Generally, the Reinitialization and the Write+Read test both perform a complete overwrite and then read back all sectors to ensure they are readable and perfect (the sector contents are not altered in any way).
The difference (in addition to the fact that Reinitialization performs multiple overwrites) is that the Reinitialize Disk Surface test reads each sector back immediately after it is overwritten - while the Write+Read test starts reading back only after the complete overwrite cycle has finished.
It seems your drive may no longer be able to "hold" data for long after a write finishes. As you can see, when the Reinitialize Disk Surface test writes and reads back to "refresh" the sectors, they work correctly - but after a short time, many of the sectors begin producing errors again.
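The difference between the two read-back strategies can be sketched as follows. This is a simplified, file-based simulation for illustration only - it is not HDSentinel's actual implementation, and the sector size, test pattern, and use of a regular file as a stand-in for the disk surface are all assumptions:

```python
# Simulated contrast between "immediate read-back" (Reinitialization-style)
# and "deferred read-back" (Write+Read-style) surface tests.
SECTOR = 512                 # assumed bytes per simulated sector
PATTERN = b"\xAA" * SECTOR   # assumed overwrite pattern

def reinitialize(path, sectors):
    """Write each sector, then read it back immediately.
    A weak sector that can hold data briefly will still pass this test."""
    bad = []
    with open(path, "r+b") as f:
        for i in range(sectors):
            f.seek(i * SECTOR)
            f.write(PATTERN)
            f.flush()
            f.seek(i * SECTOR)
            if f.read(SECTOR) != PATTERN:
                bad.append(i)
    return bad

def write_then_read(path, sectors):
    """Write the whole surface first, then read everything back.
    Sectors that lose data between the write pass and the read pass fail here."""
    bad = []
    with open(path, "r+b") as f:
        for i in range(sectors):
            f.seek(i * SECTOR)
            f.write(PATTERN)
        f.flush()
        for i in range(sectors):
            f.seek(i * SECTOR)
            if f.read(SECTOR) != PATTERN:
                bad.append(i)
    return bad
```

On a healthy surface both functions return an empty list of bad sectors; on a drive that cannot retain data, the deferred read-back of `write_then_read` is the one that exposes the failures, which matches the behaviour described above.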
Sometimes such issues are related to connections/cables/power supply, so I'd normally recommend checking with a different connection (e.g. with a USB dock OR without one), but as you wrote, you already tried that, so this is probably not related to the USB dock.
It would be nice to know more about the disk drive: its power-on time, temperature (and highest recorded temperature), general lifetime statistics (power cycles, load/unload cycles and so on), estimated remaining lifetime, etc.
From the images, I can only see that its health is relatively low - and from such disk drives I'm afraid we generally can't expect too much: while the status can be fixed and usability improved, this is usually not "permanent" and new problems can be expected over time.
Usually only after a longer time (e.g. weeks/months), but in very rare cases the problems can reappear quickly, exactly as you have seen.
> 1) Did I cause this by running the tests so close together for so many hours at a time, or is this deterioration natural from test usage on a drive in poor health?
No, of course you did not cause this. The disk drive should generally stabilize its status and improve the sectors. Exactly as you saw, this happened during the Reinitialize Disk Surface test: all sectors were overwritten (cleared), could be read back, and worked perfectly.
But yes, considering the relatively low health and possible other factors (e.g. if the power-on time is high, which I do not know), new problems can be expected.
> 2) If this is normal, why can a Reinitialization effort that is supposed to "heal" bad/poor sectors claim such total success,
> only for the drive to revert completely to self-destruction? In other words, what is the purpose of Reinitialization if its healing effectiveness and self-reporting aren't reliable?
This is not "normal" of course, but then disk failures/problems are not "normal" either.
The Reinitialize Disk Surface test forces the drive to repair/stabilize the sectors (which happened here), but generally, from a hard disk with relatively low health (and maybe a relatively low "estimated remaining lifetime"), we can't expect too much; problems usually appear again. Sometimes only after months or years - but in some very rare cases they can reappear sooner.
You did everything absolutely right by starting a different test after the Reinitialize Disk Surface (just to be 100% sure), exactly to confirm the situation - or reveal further degradation/problems.
I'd be more than happy to check the status of the drive, so if possible, please use the Report menu -> Send test report to developer option.