A decrease in the b-value observed prior to the failure of samples deformed in the laboratory[10] has led to the suggestion that such a decrease is a precursor to major macro-failure.[11]
Statistical physics provides a theoretical framework that explains both the steadiness of the Gutenberg–Richter law for large catalogs and its evolution as macro-failure is approached, but its application to earthquake forecasting is currently out of reach.[12]
Alternatively, a b-value significantly different from 1.0 may indicate a problem with the data set, e.g. that it is incomplete or contains errors in magnitude calculation.
This may in large part be caused by the incompleteness of any data set, owing to the inability to detect and characterize small events.
That is, many low-magnitude earthquakes are not catalogued because fewer stations detect and record them, a consequence of decreasing instrumental signal-to-noise levels.
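As a rough illustration of how such incompleteness distorts the estimated b-value, the following Python sketch applies the standard continuous-magnitude maximum-likelihood estimator, b ≈ log10(e) / (mean magnitude − cutoff), to a synthetic catalogue from which most small events have been lost. The estimator is not discussed in this section, and the synthetic catalogue, the true b-value of 1.0, the completeness magnitude of 1.0, the 30% detection rate for small events, and the helper name b_value_mle are all illustrative assumptions rather than values from the cited studies.

```python
import numpy as np

rng = np.random.default_rng(seed=0)

# Synthetic catalogue: magnitudes above M = 0 drawn from the exponential
# distribution implied by a Gutenberg-Richter law with a true b-value of 1.0.
b_true = 1.0
beta = b_true * np.log(10.0)
mags = rng.exponential(scale=1.0 / beta, size=100_000)

def b_value_mle(magnitudes, m_c):
    """Continuous-magnitude maximum-likelihood b-value above the cutoff m_c."""
    m = magnitudes[magnitudes >= m_c]
    return np.log10(np.e) / (m.mean() - m_c)

# Simulate incompleteness: events below M = 1 are only partially recorded
# (an assumed 30% detection rate), as happens for small shocks.
keep_small = rng.random(mags.size) < 0.3
catalogue = mags[(mags >= 1.0) | ((mags < 1.0) & keep_small)]

print("b above the completeness magnitude:", round(b_value_mle(catalogue, 1.0), 2))  # close to 1.0
print("b ignoring incompleteness:", round(b_value_mle(catalogue, 0.0), 2))           # biased low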
Among the generalized forms of the Gutenberg–Richter law that have been proposed is the one released by Oscar Sotolongo-Costa and A. Posadas in 2004,[14] of which R. Silva et al. presented the following modified form in 2006:[15]

\log_{10} N(>m) = \log_{10} N + \frac{2-q}{1-q}\,\log_{10}\!\left[1 - \left(\frac{1-q}{2-q}\right)\frac{10^{2m}}{a^{2/3}}\right]

where N is the total number of events, a is a proportionality constant, and q represents the non-extensivity parameter introduced by Constantino Tsallis to characterize systems not explained by Boltzmann–Gibbs statistics for equilibrium physical systems.[17]
In this model, values of the parameter b were found for events recorded in the Central Atlantic, the Canary Islands, the Magellan Mountains and the Sea of Japan.
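For concreteness, the sketch below evaluates the modified, Tsallis-based frequency–magnitude relation as written above. The parameter values (q = 1.65, a = 10^10, N = 10^5) and the helper name log_n_above_m are illustrative assumptions, not fits taken from the studies cited above; under this form, for q > 1 the curve approaches a straight line at large magnitudes, i.e. Gutenberg–Richter scaling with slope 2(2 − q)/(q − 1).

```python
import numpy as np

def log_n_above_m(m, n_total, a, q):
    """log10 of the cumulative number of events with magnitude > m,
    evaluated from the modified frequency-magnitude relation written above.
    n_total is the total number of events, a the proportionality constant,
    and q the Tsallis non-extensivity parameter (taken here with q > 1)."""
    inner = 1.0 - ((1.0 - q) / (2.0 - q)) * 10.0 ** (2.0 * m) / a ** (2.0 / 3.0)
    return np.log10(n_total) + ((2.0 - q) / (1.0 - q)) * np.log10(inner)

# Illustrative (not fitted) parameters.
q, a, n_total = 1.65, 1.0e10, 1.0e5
for m in np.linspace(2.0, 6.0, 5):
    print(f"m = {m:.1f}  log10 N(>m) = {log_n_above_m(m, n_total, a, q):.2f}")

# At large m the relation reduces to Gutenberg-Richter scaling
# with slope b = 2(2 - q)/(q - 1).
print("asymptotic b-value:", 2.0 * (2.0 - q) / (q - 1.0))
```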