While the concept of Physics of Failure is common in many structural fields,[2] the specific branding evolved from an attempt to better predict the reliability of early-generation electronic parts and systems.
Within the electronics industry, the major driver for the implementation of Physics of Failure was the poor performance of military weapon systems during World War II.[5] Unfortunately, the rapid evolution of electronics, with new designs, new materials, and new manufacturing processes, tended to quickly negate the approaches and predictions derived from older technology.
One of the first major successes under predictive physics of failure was a formula[9] developed by James Black of Motorola to describe the behavior of electromigration.
Black used this knowledge, in combination with experimental findings, to describe the mean time to failure (MTTF) due to electromigration as

$$\mathrm{MTTF} = A\,J^{-n}\,\exp\!\left(\frac{E_a}{kT}\right)$$

where A is a constant based on the cross-sectional area of the interconnect, J is the current density, Ea is the activation energy (e.g. 0.7 eV for grain boundary diffusion in aluminum), k is the Boltzmann constant, T is the temperature, and n is a scaling factor (usually set to 2 according to Black).
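To make the current-density and temperature dependence concrete, the following is a minimal Python sketch of the equation above; the prefactor A, the current density, and the two operating temperatures are purely illustrative assumptions, not values taken from the text.

```python
import math

BOLTZMANN_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def black_mttf(A, J, T, Ea=0.7, n=2):
    """Black's equation: MTTF = A * J**(-n) * exp(Ea / (k*T)).

    A  -- prefactor tied to the interconnect cross-section (illustrative units)
    J  -- current density in A/cm^2
    T  -- absolute temperature in K
    Ea -- activation energy in eV (0.7 eV for grain-boundary diffusion in aluminum)
    n  -- current-density exponent (Black set n = 2)
    """
    return A * J ** (-n) * math.exp(Ea / (BOLTZMANN_EV * T))

# Assumed example: the same interconnect at 1 MA/cm^2, two operating temperatures
for temp_c in (100, 125):
    mttf = black_mttf(A=1e11, J=1e6, T=temp_c + 273.15)
    print(f"T = {temp_c} C -> relative MTTF = {mttf:.3g}")
```

Because temperature enters through an Arrhenius exponential, even a modest rise in operating temperature substantially shortens the predicted electromigration lifetime, which is what the two printed cases illustrate.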
Building on this success, additional physics-of-failure-based algorithms have been derived for the three other major degradation mechanisms (time-dependent dielectric breakdown [TDDB], hot carrier injection [HCI], and negative bias temperature instability [NBTI]) in modern integrated circuits (equations shown below).
However, some companies have so many use environments (think personal computers) that performing a PoF assessment for each potential combination of temperature / vibration / humidity / power cycling / etc. would be prohibitively time-consuming.
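As a back-of-the-envelope illustration of this combinatorial burden, the sketch below counts the assessment cases produced by a few assumed stress levels per environmental factor; both the factors chosen and the number of levels are hypothetical.

```python
from itertools import product

# Hypothetical use-environment factors and stress levels (assumed for illustration only)
factors = {
    "temperature_C": [0, 25, 45, 60, 85],
    "vibration_grms": [0.5, 2.0, 6.0],
    "relative_humidity_pct": [20, 55, 85],
    "power_cycles_per_day": [1, 4, 12],
}

cases = list(product(*factors.values()))
print(f"{len(cases)} distinct environment combinations to assess")  # 5*3*3*3 = 135
```

Even this small hypothetical matrix yields 135 cases, and every additional factor or stress level multiplies the count, which is why exhaustive per-combination PoF assessments quickly become impractical.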