Hello,
I have a lightly used 3TB WD30EZRZ hard drive that I've been using for occasional cold backups since 2017. The drive has only run for about 150 hours in total. Almost from the start, however, I noticed an unusual increase in its [Raw Read Error Rate], which is not something I've typically seen with WD drives. The raw (decimal) value sat at 4 for quite some time, but it gradually increased to 30 over the last few months.
A few days ago, things took a turn for the worse. The drive started showing a rapid increase in [Reallocated Sector Count] (now over 600) and [Reallocated Event Count] (around 60). I'm recalling these numbers from memory but will post a screenshot or a text log as soon as I can.
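In the meantime, in case a plain-text dump is easier to read than a screenshot, something like this rough Python sketch could pull those three attributes into a log file with smartmontools (smartctl must be installed, and the device path below is only a placeholder, not my exact setup):

import datetime
import subprocess

DEVICE = "/dev/sdb"  # placeholder: adjust to the actual drive
WATCHED = ("Raw_Read_Error_Rate", "Reallocated_Sector_Ct", "Reallocated_Event_Count")

# Ask smartctl for the full S.M.A.R.T. attribute table of the drive.
out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True, check=False).stdout

stamp = datetime.datetime.now().isoformat(timespec="seconds")
with open("smart_log.txt", "a") as log:
    log.write(f"--- {stamp} {DEVICE} ---\n")
    for line in out.splitlines():
        # Keep only the three attribute rows discussed above.
        if any(name in line for name in WATCHED):
            log.write(line + "\n")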
This drive was used in an external HDD enclosure. I started to suspect that its PSU or SATA-to-USB bridge board might be the cause, so I decided to take the drive out of the enclosure and connect it directly to my desktop PC. After noticing the reallocated sectors, I also cleaned the contacts under the HDD's logic board, which were a bit corroded. Despite this, the reallocates continued to increase, albeit at a slower pace.
Interestingly, ever since I did a full drive wipe a couple of days ago, the reallocated sector/event values have stayed the same. I also ran a full surface test on the drive yesterday, and it didn't show any signs of weak sectors or bad blocks.
This is my first experience with a hard drive exhibiting such a high number of reallocated sectors and events. I'm looking for any advice or insights:
1. How much service life might this drive have left?
2. What could be the reason for such premature problems in a drive with only 150 hours of power-on time and fewer than 200 start/stop cycles?
Any help or shared experiences would be greatly appreciated!
Thanks!
Re: Hard Drive with Rapidly Increasing Reallocates
Reply from hdsentinel (Site Admin):
Generally, I'm afraid what you see is "normal" (even if not expected and not ideal). The Reallocated Sector Count attribute usually increases exactly this way: nothing happens for a (very) long time, and then it increases dramatically when the drive attempts to use (read/write) the problematic area.
I'm not sure if the drive was tested before being used for real storage? It is not rare that a drive seems perfect for a very long time, even years (!), until the read/write head starts using a particular area.
A typical case like this is described at
https://www.hdsentinel.com/hard_disk_case_bad_sectors.php
which shows that the drive seemed perfect until it was filled completely (as the problems were at the end of the disk surface).
Generally this is why it is a good idea to perform tests, even on a new drive, before using it for real storage, as suggested at
https://www.hdsentinel.com/faq.php#tests
exactly to reveal such issues long before the drive is filled with important, critical data, or to verify/confirm that it really is perfect.
Yes, a complete disk test (and maybe the wipe itself) can stabilize the sectors, so ideally the counter should stop and not change any more. But if the actual bad sector count is high (so the Health % displayed in Hard Disk Sentinel is low), then we can expect even more, and the originally reserved spare area could fill up quickly.
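Just as an illustration (this is not Hard Disk Sentinel's own monitoring, only a rough Python sketch assuming smartmontools and a placeholder Linux-style device path), the raw Reallocated Sector Count value could be re-checked periodically to confirm the counter really has stopped:

import re
import subprocess
import time

DEVICE = "/dev/sdb"  # placeholder: adjust to the actual drive

last = None
while True:
    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True, check=False).stdout
    # The raw value is the last column of the attribute row.
    match = re.search(r"Reallocated_Sector_Ct.*?(\d+)\s*$", out, re.MULTILINE)
    if match:
        current = int(match.group(1))
        if last is not None and current > last:
            print(f"WARNING: reallocated sectors grew from {last} to {current}")
        last = current
    time.sleep(3600)  # one check per hour is plenty for a cold-backup drive

If such a warning never fires over days of normal use, that supports the idea that the reallocations have stabilized.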
Yes, you're absolutely correct; personally I'd do the same: the PSU and/or the USB-SATA board can often cause problems. Many times an older PSU can even damage the disk drive (but usually this happens only when the drive and the power supply are used intensively, e.g. in 24/7 mode).
Generally it is a good idea to test/verify in a different operating environment, e.g. by connecting the drive directly to a SATA port on the motherboard.
Cleaning the contacts is also an excellent idea on an older drive. If the contacts showed corrosion, it may indicate high humidity during storage, which could cause problems too.
Without seeing the Health % and the errors in the text description in Hard Disk Sentinel, it is hard to say anything for sure about the expected life. The displayed details and the "estimated remaining lifetime" can be a general guide about how/when it is recommended to consider replacement, especially if the hard disk drive contains mission-critical data. For non-critical data, the drive may still be used (even for a long time), considering the relatively low power-on time and start/stop count you mentioned.
I would definitely perform the suggested steps in Support -> Frequently Asked Questions -> How to repair hard disk drive? How to eliminate displayed hard disk problems?
( https://www.hdsentinel.com/faq_repair_hard_disk_drive.php )
This page recommends the above-mentioned tests, to verify/confirm that the status is now stable, that there are no new problems/errors, and that the drive can read/write all sectors without delays, slowness or retries. So there should be no yellow/red blocks in the surface test. Some darker green blocks are acceptable, but if they form a larger area, that may indicate problems in the future.
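Just to illustrate the idea behind the green / darker green / red blocks (this is not the Hard Disk Sentinel surface test itself, only a rough read-only Python sketch with a placeholder device path and an arbitrary "slow" threshold):

import os
import time

DEVICE = "/dev/sdb"           # placeholder: adjust to the actual drive
BLOCK = 8 * 1024 * 1024       # test the surface in 8 MiB chunks
SLOW_SECONDS = 0.5            # arbitrary threshold for a "slow" block

good = slow = bad = 0
fd = os.open(DEVICE, os.O_RDONLY)
try:
    offset = 0
    while True:
        start = time.monotonic()
        try:
            data = os.pread(fd, BLOCK, offset)
        except OSError:
            bad += 1              # unreadable block ("red")
            offset += BLOCK
            continue
        if not data:
            break                 # reached the end of the device
        elapsed = time.monotonic() - start
        if elapsed > SLOW_SECONDS:
            slow += 1             # readable but slow ("darker green")
        else:
            good += 1
        offset += len(data)
finally:
    os.close(fd)

print(f"good={good} slow={slow} bad={bad}")

A real surface test does much more, but the principle is the same: every block must be readable, and consistently slow areas deserve attention.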
(A simple wipe, format, chkdsk and similar solutions happily ignore all of the above, so I would not waste time with them.)
If the drive works stably (no new errors are reported and the Health % does not change), then you can continue using it. You can even acknowledge the problems, to remove them from the text description and be notified only about possible new issues/bad sectors (as described at the link above), but I'd only use the drive with constant monitoring and with a backup ready upon any new problem/issue.
Re: Hard Drive with Rapidly Increasing Reallocates
Thanks for the reply!
I always try to perform a full surface test on each drive before I start to use it. I usually use Victoria HDD/SSD for this purpose, but for this particular drive I used the Hard Disk Sentinel Surface Test right after I bought it in 2017. The test ran for 7 hours and reported that 100% of the blocks had "Good" status (all light green). It was only recently, though, that I actually filled this drive with data. Do you think it would be a good idea to start running full write tests on new drives instead of read/verify tests, or even to do both?
After I noticed Reallocated Sectors/Events in S.M.A.R.T., I ran a "Verify" test in Victoria HDD/SSD, and it showed two bad blocks almost at the very end of the drive's surface area. Another 23 blocks were still readable, but slow.
After I cleaned the contacts and fully wiped the drive, I ran the surface test in Victoria again, albeit in "Read" mode this time (instead of "Verify"), and it finished with zero bad or weak blocks.
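For reference, by a "full write test" I mean roughly the following (a destructive Python sketch that overwrites the entire drive with a pattern and then reads it back; the device path is only a placeholder, and of course this must only be run on a drive that holds no data):

import os

DEVICE = "/dev/sdb"                       # placeholder: double-check before running!
BLOCK = 4 * 1024 * 1024
PATTERN = bytes(range(256)) * (BLOCK // 256)

# Write pass: overwrite the whole drive with a known pattern.
fd = os.open(DEVICE, os.O_WRONLY)
offset = 0
try:
    while True:
        try:
            written = os.pwrite(fd, PATTERN, offset)
        except OSError:
            break                         # end of device or a write error worth checking
        if written == 0:
            break
        offset += written
finally:
    os.close(fd)

# Verify pass: read everything back and compare against the pattern.
fd = os.open(DEVICE, os.O_RDONLY)
offset = 0
problem_blocks = 0
try:
    while True:
        try:
            data = os.pread(fd, BLOCK, offset)
        except OSError:
            problem_blocks += 1           # unreadable during verify
            offset += BLOCK
            continue
        if not data:
            break
        if data != PATTERN[:len(data)]:
            problem_blocks += 1           # readable but contents do not match
        offset += len(data)
finally:
    os.close(fd)

print(f"verify finished, problem blocks: {problem_blocks}")

My understanding is that forcing a write to every sector is what gives the firmware a chance to remap weak sectors, which a read-only pass cannot always do.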