It is named after the media and market researcher Tony Twyman and has been described as one of the most important laws of data analysis.[2][3][4]
The law is based on the fact that errors in data measurement and analysis can lead to observed quantities that are wildly different from typical values.
These errors are usually more common than real changes of similar magnitude in the underlying process being measured.
For example, if an analyst at a software company notices that the number of users has doubled overnight, the most likely explanation is a logging bug rather than a genuine surge in users.[3]
The law can also be extended to situations where the underlying data are influenced by unexpected factors that differ from what was intended to be measured.
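The heuristic in the example above can be sketched in code. The following is an illustrative sanity check in the spirit of Twyman's law, not a method from the source: the function name, the sample figures, and the `max_ratio` threshold are all arbitrary choices for demonstration. It flags a metric for investigation when it deviates sharply from its recent baseline, on the premise that such jumps are more often measurement errors than real changes.

```python
def looks_suspicious(history, latest, max_ratio=1.5):
    """Return True if the latest value differs from the recent mean by
    more than max_ratio, suggesting a measurement or logging error should
    be ruled out before the change is believed.

    Hypothetical helper for illustration; the threshold is arbitrary.
    """
    if not history:
        return False  # no baseline to compare against
    baseline = sum(history) / len(history)
    if baseline == 0:
        return latest != 0
    ratio = latest / baseline
    # Flag both surprising increases and surprising decreases.
    return ratio > max_ratio or ratio < 1 / max_ratio

# Invented example data: five days of roughly stable daily user counts.
daily_users = [10_120, 10_340, 9_980, 10_200, 10_150]
print(looks_suspicious(daily_users, 20_400))  # doubled overnight -> True
print(looks_suspicious(daily_users, 10_300))  # ordinary variation -> False
```

A real monitoring system would use a more robust baseline (e.g. a median over a longer window), but the point of the sketch is the order of operations: a surprising figure triggers a check of the measurement pipeline before it is reported as a finding.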