The test relies on the fact that, given a dataset containing N integer values, the arithmetic mean (commonly called simply the average) is restricted to a limited set of possible values: it must always be expressible as a fraction with an integer numerator and a denominator of N. If the reported mean does not fit this description, there must be an error somewhere; the preferred term for such errors is "inconsistencies", to emphasise that their origin is, on first discovery, typically unknown.
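The check can be illustrated with a short Python sketch (the function name, rounding convention, and example values here are illustrative assumptions, not taken from any published implementation):

```python
def grim_consistent(reported_mean: float, n: int, decimals: int = 2) -> bool:
    """Return True if a mean reported to `decimals` places could arise
    from n integer values, i.e. if it is consistent with some k / n."""
    # The sum of n integers must itself be an integer, so reconstruct
    # the nearest integer sum implied by the reported mean...
    nearest_sum = round(reported_mean * n)
    # ...and check whether the corresponding attainable mean rounds back
    # to the reported value at the stated precision.
    return round(nearest_sum / n, decimals) == round(reported_mean, decimals)

# A mean of 2.27 from 10 integer values is impossible: the sum would
# have to be 22.7, which is not an integer (22/10 = 2.20, 23/10 = 2.30).
print(grim_consistent(2.27, 10))  # False
print(grim_consistent(2.30, 10))  # True
```

Testing only the nearest integer sum suffices when the reported mean was rounded conventionally; a sketch that also allowed for truncation or round-half-up reporting would test the neighbouring sums as well.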
The GRIM test was proposed by Nick Brown and James Heathers in 2016, following increased awareness of the replication crisis in some fields of science.[2]
However, an inconsistency can be a sign that some data has been improperly excluded or that the mean has been illegitimately fudged in order to make the results appear more significant.
Multiple errors scattered throughout a table can be a sign of deeper problems, and other statistical tests can be used to analyze the suspect data.[3] GRIM testing also played a significant role in uncovering errors in publications by Cornell University's Food and Brand Lab under Brian Wansink.