Invoking Laplace's rule of succession, some authors have argued that α should be 1 (in which case the term add-one smoothing[2][3] is also used), though in practice a smaller value is typically chosen.
In the special case where the number of categories is 2, this is equivalent to using a beta distribution as the conjugate prior for the parameters of the binomial distribution.
Laplace came up with this smoothing technique when he tried to estimate the chance that the sun will rise tomorrow.
His rationale was that even given a large sample of days with the rising sun, we still cannot be completely sure that the sun will rise tomorrow (known as the sunrise problem).[4]
A pseudocount is an amount (not generally an integer, despite its name) added to the number of observed cases in order to change the expected probability in a model of those data, when not known to be zero.
It is so named because, roughly speaking, a pseudocount of value α weighs into the posterior distribution similarly to each category having an additional count of α. If the frequency of each item i is x_i out of N samples, the empirical probability of event i is p_{i, empirical} = x_i / N, but the posterior probability when additively smoothed is p_{i, α-smoothed} = (x_i + α) / (N + αd), as if each count x_i had been increased by α a priori.
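A minimal sketch of this estimator, computing (x_i + α) / (N + αd) for each category; the function name and example counts are illustrative only:

```python
def additive_smoothing(counts, alpha=1.0):
    """Return smoothed probability estimates (x_i + alpha) / (N + alpha * d)."""
    n = sum(counts)      # total number of observations N
    d = len(counts)      # number of categories d
    return [(x + alpha) / (n + alpha * d) for x in counts]

counts = [3, 0, 1]  # the second category was never observed
empirical = [x / sum(counts) for x in counts]     # [0.75, 0.0, 0.25]
smoothed = additive_smoothing(counts, alpha=1.0)  # [4/7, 1/7, 2/7]
```

Note that the unobserved category receives a non-zero probability (1/7 here), and the smoothed estimates still sum to 1.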
A pseudocount may be zero (or the possibility ignored) only if the event is impossible by definition, such as a decimal digit of π being a letter; if it is a physical possibility that would be rejected and so not counted, such as a computer printing a letter when a valid program for π is run; or if it is excluded and not counted for lack of interest, such as when only the zeros and ones matter.
Generally, there is also a possibility that no value may be computable or observable in a finite time (see the halting problem).
But at least one possibility must have a non-zero pseudocount, otherwise no prediction could be computed before the first observation.
The sum of the pseudocounts, which may be very large, represents the estimated weight of the prior knowledge relative to all the actual observations (each of which contributes a count of one) when determining the expected probability.
In any observed data set or sample there is the possibility, especially with low-probability events and with small data sets, of a possible event not occurring. Its observed frequency is therefore zero, apparently implying a probability of zero. This oversimplification is inaccurate and often unhelpful, particularly in probability-based machine learning techniques such as artificial neural networks and hidden Markov models.
The simplest approach is to add one to each observed number of events, including the zero-count possibilities.
This approach is equivalent to assuming a uniform prior distribution over the probabilities for each possible event (spanning the simplex where each probability is between 0 and 1, and they all sum to 1).
Using the Jeffreys prior approach, a pseudocount of one half should be added to each possible outcome.
Pseudocounts should be set to one only when there is no prior knowledge at all – see the principle of indifference.
However, given appropriate prior knowledge, the sum should be adjusted in proportion to the expectation that the prior probabilities should be considered correct, despite evidence to the contrary – see further analysis.
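The two conventional choices above (α = 1 for the uniform prior, α = 1/2 for the Jeffreys prior) can be compared directly; this sketch assumes the standard smoothed estimator (x_i + α) / (N + αd):

```python
def smoothed(counts, alpha):
    """Smoothed estimates (x_i + alpha) / (N + alpha * d)."""
    n, d = sum(counts), len(counts)
    return [(x + alpha) / (n + alpha * d) for x in counts]

counts = [9, 1, 0]                  # one category unobserved
add_one = smoothed(counts, 1.0)     # uniform prior: [10/13, 2/13, 1/13]
jeffreys = smoothed(counts, 0.5)    # Jeffreys prior: [9.5/11.5, 1.5/11.5, 0.5/11.5]
```

The smaller Jeffreys pseudocount shrinks the empirical frequencies toward the uniform distribution less aggressively, which is why a value below 1 is often preferred in practice.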
A more complex approach is to estimate the probability of the events from other factors and adjust accordingly.
One way to motivate pseudocounts, particularly for binomial data, is via a formula for the midpoint of an interval estimate, particularly a binomial proportion confidence interval.
Taking two standard deviations to approximate a 95% confidence interval (z ≈ 2) yields a pseudocount of 2 for each outcome, so 4 in total, colloquially known as the "plus four rule": the adjusted proportion is (n_S + 2) / (n + 4). This is also the midpoint of the Agresti–Coull interval (Agresti & Coull 1998).
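The plus-four rule amounts to adding two pseudo-successes and two pseudo-failures before taking the proportion; a minimal sketch:

```python
def plus_four(successes, n):
    """Plus-four point estimate: (n_S + 2) / (n + 4),
    i.e. two pseudo-successes and two pseudo-failures."""
    return (successes + 2) / (n + 4)

p_raw = 0 / 10            # no successes observed: raw estimate is 0
p_adj = plus_four(0, 10)  # (0 + 2) / (10 + 4) = 1/7
```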
When testing a trial population against a control population with known incidence rates μ = (μ_1, …, μ_d), the uniform probability 1/d should be replaced by the known incidence rate of the control population μ_i, giving the smoothed estimator θ̂_i = (x_i + μ_i αd) / (N + αd).
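A sketch of this generalization, under the assumption that the estimator takes the form (x_i + μ_i αd) / (N + αd) with μ_i the known incidence rate for category i (so that uniform rates μ_i = 1/d recover plain additive smoothing):

```python
def smoothed_with_prior(counts, mu, alpha):
    """Smoothed estimates using known incidence rates mu instead of
    the uniform 1/d: (x_i + mu_i * alpha * d) / (N + alpha * d)."""
    n, d = sum(counts), len(counts)
    return [(x + m * alpha * d) / (n + alpha * d)
            for x, m in zip(counts, mu)]

counts = [8, 2]
mu = [0.5, 0.5]  # uniform control rates: reduces to plain smoothing
smoothed_with_prior(counts, mu, alpha=1.0)  # → [0.75, 0.25]
```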
Additive smoothing is commonly a component of naive Bayes classifiers.
In a bag of words model of natural language processing and information retrieval, the data consists of the number of occurrences of each word in a document.
Additive smoothing allows the assignment of non-zero probabilities to words which do not occur in the sample.
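A minimal sketch of this use in a bag-of-words setting: per-word class-conditional probabilities with add-α smoothing over the vocabulary (the documents, vocabulary, and function name are illustrative only):

```python
from collections import Counter

def word_probs(documents, vocabulary, alpha=1.0):
    """Smoothed P(word | class) for one class in a bag-of-words model:
    (count(w) + alpha) / (N + alpha * |V|)."""
    counts = Counter(w for doc in documents for w in doc.split())
    n = sum(counts.values())   # total words observed for this class
    v = len(vocabulary)        # vocabulary size |V|
    return {w: (counts[w] + alpha) / (n + alpha * v) for w in vocabulary}

docs = ["the cat sat", "the dog sat"]
vocab = {"the", "cat", "dog", "sat", "mat"}  # "mat" never occurs
probs = word_probs(docs, vocab)
# "mat" still gets non-zero probability: (0 + 1) / (6 + 5) = 1/11
```

Without smoothing, any document containing "mat" would be assigned zero probability under this class, regardless of the other words it contains.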
Studies have shown that additive smoothing is more effective than other probability smoothing methods in several retrieval tasks such as language-model-based pseudo-relevance feedback and recommender systems.