In statistics, consistency of procedures, such as computing confidence intervals or conducting hypothesis tests, is a desired property of their behaviour as the number of items in the data set to which they are applied increases indefinitely.
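For a point estimator, this is usually formalised as convergence in probability to the quantity being estimated. With notation introduced here for illustration, if \hat{\theta}_n denotes the estimate computed from a sample of size n and \theta the true value, consistency requires

\lim_{n \to \infty} \Pr\left( \left| \hat{\theta}_n - \theta \right| > \varepsilon \right) = 0 \quad \text{for every } \varepsilon > 0.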
In complicated applications of statistics, there may be several different ways in which the number of data items can grow.
In such cases, the property of consistency may be limited to one or more of the possible ways a sample size can grow.[1]
In statistical classification, a consistent classifier is one for which the probability of correct classification, given a training set, approaches the best probability theoretically achievable if the population distributions were fully known, as the size of the training set increases.
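One common formalisation (again with notation introduced here for illustration) requires the expected misclassification probability of the classifier \hat{C}_n, trained on n examples, to converge to the Bayes error rate L^*, the lowest error rate achievable when the underlying distributions are fully known:

\lim_{n \to \infty} \mathbb{E}\left[ \Pr\left( \hat{C}_n(X) \neq Y \mid D_n \right) \right] = L^*,

where D_n denotes the training set of size n and (X, Y) is a new observation drawn from the same distribution.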
In contrast, an estimator or test that is not consistent may be difficult to justify in practice, since gathering additional data carries no asymptotic guarantee of improving the quality of the outcome.