A hard disk failure may occur in the course of normal operation, or due to an external factor such as exposure to fire, water, or high magnetic fields, a sharp impact, or environmental contamination, any of which can lead to a head crash.
Even a drive subjected to several years of heavy daily use may show no notable signs of wear unless closely inspected. Failure may be sudden or gradual. A sudden failure typically presents as a drive that can no longer be detected by CMOS setup, or that fails to pass BIOS POST, so that the operating system never sees it.
Gradual hard-drive failure can be harder to diagnose, because its symptoms, such as corrupted data and slowing down of the PC (caused by gradually failing areas of the hard drive requiring repeated read attempts before successful access), can be caused by many other computer issues, such as malware.
A repetitive, cyclical pattern of seek activity, such as rapid or slow seek-to-end noises (the "click of death"), can indicate hard drive problems.
Disks are designed such that either a spring or, more recently, rotational inertia in the platters is used to park the heads in the case of unexpected power loss.
Most manufacturers design the sliders to survive 50,000 contact cycles before the chance of damage on startup rises above 50%.
Load/unload technology relies on the heads being lifted off the platters into a safe location, thus eliminating the risks of wear and stiction altogether.
The first HDD, the IBM RAMAC, and most early disk drives used complex mechanisms to load and unload the heads.
Addressing shock robustness, IBM also created a technology for their ThinkPad line of laptop computers called the Active Protection System.
When a sudden, sharp movement is detected by the built-in accelerometer in the ThinkPad, internal hard disk heads automatically unload themselves to reduce the risk of any potential data loss or scratch defects.
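The general idea can be illustrated with a short sketch (not IBM's actual implementation): poll an accelerometer, and if the measured acceleration magnitude drops toward zero (free fall) or spikes sharply, ask the drive to park its heads before the shock arrives. The read_accelerometer() helper and the thresholds below are hypothetical placeholders, and hdparm standby is used only as a stand-in for a head-unload command.

```python
import math
import random
import subprocess
import time

# Hypothetical sensor read; a real system would query the laptop's
# accelerometer through a driver, not generate random numbers.
def read_accelerometer():
    return (random.gauss(0, 0.05),
            random.gauss(0, 0.05),
            random.gauss(1.0, 0.05))  # in units of g; roughly 1 g at rest

FREE_FALL_G = 0.4   # magnitude well below 1 g suggests the machine is falling
SHOCK_G = 3.0       # a sudden spike suggests an impact is underway

def park_heads(device="/dev/sda"):
    # Put the drive into standby with `hdparm -y` (Linux, requires root);
    # this unloads the heads, standing in for a dedicated unload command.
    subprocess.run(["hdparm", "-y", device], check=False)

def monitor(poll_interval=0.01):
    while True:
        x, y, z = read_accelerometer()
        magnitude = math.sqrt(x * x + y * y + z * z)
        if magnitude < FREE_FALL_G or magnitude > SHOCK_G:
            park_heads()
        time.sleep(poll_interval)
```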
Most major hard disk and motherboard vendors support S.M.A.R.T. (Self-Monitoring, Analysis and Reporting Technology), which measures drive characteristics such as operating temperature, spin-up time, and data error rates.
Certain trends and sudden changes in these parameters are thought to be associated with increased likelihood of drive failure and data loss.
While some S.M.A.R.T. parameters affect failure probability, a large fraction of failed drives do not produce predictive S.M.A.R.T. values.
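As a concrete example, S.M.A.R.T. attributes can be read on Linux with smartctl from the smartmontools package (run as root); the sketch below shells out to it and flags a few attributes commonly watched as failure predictors. The watched attribute set and the device path are illustrative choices, not a definitive list.

```python
import subprocess

# Attributes often cited as failure predictors; the set that matters
# varies by vendor and model.
WATCHED = {"Reallocated_Sector_Ct", "Current_Pending_Sector",
           "Offline_Uncorrectable", "Spin_Retry_Count"}

def read_smart_attributes(device="/dev/sda"):
    """Return {attribute_name: raw_value} parsed from `smartctl -A`."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    attrs = {}
    for line in out.splitlines():
        fields = line.split()
        # Attribute rows look like:
        # ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(fields) >= 10 and fields[0].isdigit():
            attrs[fields[1]] = fields[9]
    return attrs

if __name__ == "__main__":
    for name, raw in read_smart_attributes().items():
        if name in WATCHED and raw.split()[0] != "0":
            print(f"warning: {name} raw value is {raw}")
```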
A 2007 study published by Google suggested very little correlation between failure rates and either high temperature or activity level.[23] Modern helium-filled drives are completely sealed, without a breather port, eliminating the risk of debris ingress and resulting in a typical MTBF of 2.5 million hours.
However, independent research indicates that MTBF is not a reliable estimate of a drive's longevity (service life).[24] MTBF testing is conducted in laboratory test chambers and is an important metric of drive quality, but it is designed to measure only the relatively constant failure rate over the drive's service life (the middle of the "bathtub curve") before the final wear-out phase.
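To make the distinction concrete, a quoted MTBF can be converted into an expected annualized failure rate under the constant-failure-rate assumption that the metric embodies, AFR = 1 − e^(−hours per year / MTBF). The sketch below applies this to the 2.5-million-hour figure mentioned above; it says nothing about service life or the wear-out phase.

```python
import math

HOURS_PER_YEAR = 8766  # average year, including leap years

def annualized_failure_rate(mtbf_hours):
    """AFR implied by an MTBF, assuming a constant (exponential) failure rate."""
    return 1 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

# A 2.5-million-hour MTBF implies roughly a 0.35% chance of failure per year
# while the drive sits in the flat part of the bathtub curve; it does not
# mean a drive is expected to last 2.5 million hours (about 285 years).
print(f"{annualized_failure_rate(2_500_000):.2%}")
```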
The cloud storage company Backblaze produces an annual report into hard drive reliability.
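Backblaze expresses reliability as an annualized failure rate computed from observed drive-days and failures; a minimal version of that calculation, with made-up fleet numbers, looks like this.

```python
def empirical_afr(failures, drive_days):
    """Annualized failure rate from fleet observations: failures per drive-year."""
    drive_years = drive_days / 365
    return failures / drive_years

# Hypothetical fleet: 1,000 drives observed for 90 days each, 4 failures.
print(f"{empirical_afr(4, 1_000 * 90):.2%}")  # about 1.62% per year
```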
It may be possible to recover data by opening the drive in a clean room and using appropriate equipment to replace or revitalize failed components.[36][37][38] Sometimes operation can be restored for long enough to recover data, which may require reconstruction techniques such as file carving.
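File carving recovers files from raw disk data by searching for known header and footer byte signatures rather than relying on possibly destroyed filesystem metadata. The sketch below carves JPEG files out of a disk image; the image path is a placeholder, and real carving tools handle fragmentation and many more formats.

```python
JPEG_HEADER = b"\xff\xd8\xff"
JPEG_FOOTER = b"\xff\xd9"

def carve_jpegs(image_path, out_prefix="carved", max_size=20 * 1024 * 1024):
    """Scan a raw disk image and write out byte ranges that look like JPEGs."""
    with open(image_path, "rb") as f:
        data = f.read()  # fine for small images; use mmap for large ones
    count = 0
    pos = data.find(JPEG_HEADER)
    while pos != -1:
        end = data.find(JPEG_FOOTER, pos)
        if end != -1 and end - pos < max_size:
            with open(f"{out_prefix}_{count:04d}.jpg", "wb") as out:
                out.write(data[pos:end + 2])  # include the footer bytes
            count += 1
        pos = data.find(JPEG_HEADER, pos + 1)
    return count

# Example (hypothetical image file):
# carve_jpegs("failed_drive.img")
```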