Poisson regression

Poisson regression is a generalized linear model form of regression analysis used to model count data and contingency tables.[1] Poisson regression assumes the response variable Y has a Poisson distribution, and assumes the logarithm of its expected value can be modeled by a linear combination of unknown parameters.

Negative binomial regression is a popular generalization of Poisson regression because it loosens the Poisson model's highly restrictive assumption that the variance is equal to the mean.

The traditional negative binomial regression model is based on the Poisson-gamma mixture distribution.

Poisson regression models are generalized linear models with the logarithm as the (canonical) link function, and the Poisson distribution function as the assumed probability distribution of the response.

If x is a vector of n independent variables, then the model takes the form

\log(\operatorname{E}(Y \mid x)) = \alpha + \beta' x,

where α is a scalar intercept and β is a vector of n regression coefficients. Sometimes this is written more compactly as

\log(\operatorname{E}(Y \mid x)) = \theta' x,

where x is now an (n + 1)-dimensional vector consisting of n independent variables concatenated to the number one, and θ is simply α concatenated to β.
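As a concrete illustration, such a model can be fit in R with glm(); the data frame dat and its columns y, x1 and x2 below are hypothetical placeholders, not part of the original text:

    # Poisson regression: Poisson response with a log link
    fit <- glm(y ~ x1 + x2, family = poisson(link = "log"), data = dat)

    # exp(coefficient) is the factor by which the expected count
    # is multiplied when that covariate increases by one unit
    exp(coef(fit))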

The maximum-likelihood estimates lack a closed-form expression and must be found by numerical methods.

The log-likelihood surface for maximum-likelihood Poisson regression is always concave, making Newton–Raphson or other gradient-based methods appropriate estimation techniques.
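Concretely, the following sketch (simulated data, not the internals of any particular package) carries out the Newton–Raphson iteration θ ← θ + (X′WX)⁻¹X′(y − μ) with W = diag(μ), which for the canonical log link coincides with the iteratively reweighted least squares step:

    set.seed(1)

    # simulated design matrix and Poisson response (illustration only)
    n <- 500
    X <- cbind(1, rnorm(n))                 # column of ones plus one covariate
    theta_true <- c(0.5, 0.3)
    y <- rpois(n, exp(X %*% theta_true))

    # Newton-Raphson on the (concave) Poisson log-likelihood
    theta <- c(0, 0)
    for (it in 1:25) {
      mu    <- as.vector(exp(X %*% theta))
      score <- t(X) %*% (y - mu)            # gradient of the log-likelihood
      info  <- t(X) %*% (X * mu)            # X' W X with W = diag(mu)
      theta <- theta + solve(info, score)
      if (max(abs(score)) < 1e-8) break
    }

    theta   # agrees with coef(glm(y ~ X[, 2], family = poisson))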

The average partial effect in the Poisson model for a continuous regressor x_k is the coefficient θ_k multiplied by the average of the conditional mean e^(θ′x); unlike in a linear model, the effect of a covariate on the expected count therefore depends on the values of all covariates.
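Since E(Y | x) = e^(θ′x), this follows from the chain rule, and the quantity can be estimated from a fitted model by averaging over the sample (m is the number of observations, as in the likelihood derivation below):

\frac{\partial \operatorname{E}(Y \mid x)}{\partial x_k} = \theta_k e^{\theta' x}, \qquad \widehat{\text{APE}}_k = \hat{\theta}_k \cdot \frac{1}{m} \sum_{i=1}^{m} e^{\hat{\theta}' x_i}.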

Given a set of parameters θ and an input vector x, the mean of the predicted Poisson distribution, as stated above, is given by

\lambda := \operatorname{E}(Y \mid x) = e^{\theta' x},

and thus, the Poisson distribution's probability mass function is given by

p(y \mid x; \theta) = \frac{\lambda^{y}}{y!} e^{-\lambda} = \frac{e^{y \theta' x} e^{-e^{\theta' x}}}{y!}.

Now suppose we are given a data set consisting of m vectors x_i, i = 1, …, m, along with a set of m observed counts y_1, …, y_m. For a given set of parameters θ, the probability of attaining this particular set of data is given by

p(y_1, \ldots, y_m \mid x_1, \ldots, x_m; \theta) = \prod_{i=1}^{m} \frac{e^{y_i \theta' x_i} e^{-e^{\theta' x_i}}}{y_i!}.

By the method of maximum likelihood, we wish to find the set of parameters θ that makes this probability as large as possible.

To do this, the equation is first rewritten as a likelihood function in terms of θ:

L(\theta \mid X, Y) = \prod_{i=1}^{m} \frac{e^{y_i \theta' x_i} e^{-e^{\theta' x_i}}}{y_i!}.

Note that the expression on the right hand side has not actually changed. A formula in this form is typically difficult to work with; instead, one uses the log-likelihood:

\ell(\theta \mid X, Y) = \log L(\theta \mid X, Y) = \sum_{i=1}^{m} \left( y_i \theta' x_i - e^{\theta' x_i} - \log(y_i!) \right).

Notice that θ appears only in the first two terms of each summand. Therefore, given that we are only interested in finding the best value for θ, we may drop the y_i! and simply write

\ell(\theta \mid X, Y) = \sum_{i=1}^{m} \left( y_i \theta' x_i - e^{\theta' x_i} \right).

To find a maximum, we need to solve the equation

\frac{\partial \ell(\theta \mid X, Y)}{\partial \theta} = 0,

which has no closed-form solution.
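Differentiating each summand gives the explicit form of this condition:

\frac{\partial \ell(\theta \mid X, Y)}{\partial \theta} = \sum_{i=1}^{m} \left( y_i - e^{\theta' x_i} \right) x_i = 0,

i.e. at the maximum the fitted means e^(θ′x_i) reproduce the observed counts in every direction spanned by the covariates.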

However, the negative log-likelihood, −ℓ(θ | X, Y), is a convex function, and so standard convex optimization techniques such as gradient descent can be applied to find the optimal value of θ.

Poisson regression may be appropriate when the dependent variable is a count, for instance of events such as the arrival of a telephone call at a call centre.[3]

The events must be independent in the sense that the arrival of one call will not make another more or less likely, but the probability per unit time of events is understood to be related to covariates such as time of day.[4]

For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the number of species per unit area.

This logged variable, log(exposure), is called the offset variable and enters on the right-hand side of the equation with its parameter estimate (for log(exposure)) constrained to 1:

\log\left( \frac{\operatorname{E}(Y \mid x)}{\text{exposure}} \right) = \theta' x,

which implies

\log(\operatorname{E}(Y \mid x)) = \log(\text{exposure}) + \theta' x.

An offset in the case of a GLM in R can be achieved using the offset() function, as illustrated below.

A characteristic of the Poisson distribution is that its mean is equal to its variance.
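For example, with a hypothetical data frame dat containing a count y, a covariate x and an exposure column, the model above could be fit along these lines:

    # log(exposure) enters as an offset, its coefficient fixed at 1
    fit <- glm(y ~ x + offset(log(exposure)),
               family = poisson(link = "log"),
               data   = dat)
    summary(fit)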

In certain circumstances, the observed variance will be greater than the mean; this is known as overdispersion and indicates that the model is not appropriate.

A common reason is the omission of relevant explanatory variables or the presence of dependent observations.

Under some circumstances, the problem of overdispersion can be solved by using quasi-likelihood estimation or a negative binomial distribution instead.[5][6]

Ver Hoef and Boveng described the difference between quasi-Poisson (also called overdispersion with quasi-likelihood) and negative binomial (equivalent to gamma-Poisson) as follows: if E(Y) = μ, the quasi-Poisson model assumes var(Y) = θμ while the gamma-Poisson assumes var(Y) = μ(1 + κμ), where θ is the quasi-Poisson overdispersion parameter and κ is the shape parameter of the negative binomial distribution.

For both models, parameters are estimated using iteratively reweighted least squares.

Because the iteratively reweighted least squares weights under a log link are μ/θ for the quasi-Poisson model but μ/(1 + κμ) for the negative binomial, with large μ and substantial extra-Poisson variation the negative binomial weights are capped at 1/κ.

Ver Hoef and Boveng discussed an example where they selected between the two by plotting mean squared residuals vs. the mean.
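Both alternatives can be fit with standard R tools, as in the following sketch (the data frame dat with columns y and x is a hypothetical placeholder; note that glm.nb() reports a size parameter whose reciprocal plays the role of κ above):

    library(MASS)   # provides glm.nb()

    # quasi-Poisson: var(Y) = theta * mu, fit by quasi-likelihood
    fit_qp <- glm(y ~ x, family = quasipoisson(link = "log"), data = dat)

    # negative binomial (gamma-Poisson): var(Y) = mu * (1 + kappa * mu)
    fit_nb <- glm.nb(y ~ x, data = dat)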

Another common problem with Poisson regression is excess zeros: if there are two processes at work, one determining whether there are any events at all and a Poisson process determining how many events there are, there will be more zeros than a Poisson regression would predict. An example would be the distribution of cigarettes smoked in an hour by members of a group where some individuals are non-smokers.

In contrast, underdispersion (an observed variance smaller than the mean) may pose an issue for parameter estimation.

When estimating the parameters for Poisson regression, one typically tries to find values for θ that maximize the likelihood of an expression of the form

\sum_{i=1}^{m} \log\left( p(y_i; e^{\theta' x_i}) \right),

where m is the number of examples in the data set, and p(y_i; e^(θ′x_i)) is the probability mass function of the Poisson distribution with the mean set to e^(θ′x_i).

Regularization can be added to this optimization problem by instead maximizing[9]

\sum_{i=1}^{m} \log\left( p(y_i; e^{\theta' x_i}) \right) - \lambda \lVert \theta \rVert_2^2,

for some positive constant λ.

This technique, similar to ridge regression, can reduce overfitting.
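One way to fit such a ridge-penalized Poisson regression in R is the glmnet package, sketched below for a hypothetical numeric predictor matrix X and count vector y; alpha = 0 selects the pure squared penalty (glmnet scales the objective and leaves the intercept unpenalized, but the idea is the same):

    library(glmnet)

    # ridge-penalized Poisson regression over a grid of lambda values
    fit <- glmnet(X, y, family = "poisson", alpha = 0)

    # pick lambda by cross-validation and inspect the coefficients
    cvfit <- cv.glmnet(X, y, family = "poisson", alpha = 0)
    coef(cvfit, s = "lambda.min")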