Linear probability model

Here the dependent variable for each observation takes values which are either 0 or 1.

The probability of observing a 0 or 1 in any one case is treated as depending on one or more explanatory variables.

The model assumes that, for a binary outcome (Bernoulli trial), $Y$, and its associated vector of explanatory variables, $x$,[1]

$$\Pr(Y = 1 \mid X = x) = x'\beta .$$

For this model,

$$E[Y \mid X = x] = \Pr(Y = 1 \mid X = x) = x'\beta ,$$

and hence the vector of parameters $\beta$ can be estimated using least squares.
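As a sketch of the least-squares fit, the following simulation uses invented data and coefficient values (0.2 and 0.5) chosen so that the true probability stays inside the unit interval:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: the true success probability is linear in x
# and stays inside [0, 1] over the sampled range.
n = 10_000
x = rng.uniform(0.0, 1.0, size=n)
p_true = 0.2 + 0.5 * x
y = (rng.uniform(size=n) < p_true).astype(float)  # Bernoulli outcomes

# Design matrix with an intercept column; least squares estimates beta.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

print(beta)  # should be close to the true values (0.2, 0.5)
```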

This method of fitting would be inefficient,[1] and can be improved by adopting an iterative scheme based on weighted least squares,[1] in which the model from the previous iteration is used to supply estimates of the conditional variances, $\operatorname{Var}(Y \mid X = x)$, which would vary between observations.
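The iterative weighted-least-squares scheme can be sketched as follows; the simulated data are again invented, and each iteration's weights are the inverse of the conditional variance p(1 - p) implied by the previous iteration's fit:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical simulated data for a linear probability model.
n = 10_000
x = rng.uniform(0.0, 1.0, size=n)
y = (rng.uniform(size=n) < 0.2 + 0.5 * x).astype(float)
X = np.column_stack([np.ones(n), x])

# Start from the ordinary least-squares fit.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

# Iterate: each pass re-estimates beta by weighted least squares, with
# weights equal to the inverse of the conditional variance p(1 - p)
# taken from the previous iteration's fitted probabilities.
for _ in range(10):
    p = np.clip(X @ beta, 1e-6, 1.0 - 1e-6)  # keep variances positive
    w = 1.0 / (p * (1.0 - p))
    XtW = X.T * w                            # forms X' W row by row
    # Weighted least squares: solve (X' W X) beta = X' W y.
    beta = np.linalg.solve(XtW @ X, XtW @ y)

print(beta)  # should again be close to (0.2, 0.5)
```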

This approach can be related to fitting the model by maximum likelihood.

A drawback of this model is that, unless restrictions are placed on $\beta$, the estimated coefficients can imply probabilities outside the unit interval $[0, 1]$.
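A small simulation illustrates this drawback: when the true response curve is steep (here a hypothetical logistic curve, with all values invented), the fitted line crosses 0 and 1 inside the observed range of x:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical data from a steep logistic response curve.
n = 5_000
x = rng.uniform(-3.0, 3.0, size=n)
p_true = 1.0 / (1.0 + np.exp(-2.0 * x))
y = (rng.uniform(size=n) < p_true).astype(float)

# Fit the linear probability model by least squares.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

fitted = X @ beta
print(fitted.min(), fitted.max())  # some fitted "probabilities" leave [0, 1]
```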

More formally, the LPM can arise from a latent-variable formulation (usually to be found in the econometrics literature[2]), as follows: assume the following regression model with a latent (unobservable) dependent variable:

$$y^* = b_0 + x'b + \varepsilon, \qquad \varepsilon \mid x \sim U(-a, a) .$$

The critical assumption here is that the error term of this regression is a uniform random variable symmetric around zero, and hence of mean zero. Its cumulative distribution function is

$$F_{\varepsilon \mid x}(\varepsilon) = \frac{\varepsilon + a}{2a} .$$

Define the indicator variable $y = 1$ if $y^* > 0$, and $y = 0$ otherwise, and consider the conditional probability (assuming that $b_0 + x'b$ lies in $(-a, a)$)

$$\Pr(y = 1 \mid x) = \Pr(y^* > 0 \mid x) = \Pr(b_0 + x'b + \varepsilon > 0 \mid x)$$
$$= \Pr(\varepsilon > -b_0 - x'b \mid x) = 1 - F_{\varepsilon \mid x}(-b_0 - x'b)$$
$$= 1 - \frac{-b_0 - x'b + a}{2a} = \frac{b_0 + x'b + a}{2a} .$$
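A simulation can verify that this latent-variable setup yields a conditional probability linear in x, namely Pr(y = 1 | x) = (b0 + x'b + a) / (2a); the parameter values b0, b1, a below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(3)

# Latent regression y* = b0 + b1*x + e with e | x ~ Uniform(-a, a);
# the observed outcome is y = 1 when y* > 0. All values are invented.
b0, b1, a = 0.1, 0.3, 1.0
n = 100_000
x = rng.uniform(0.0, 1.0, size=n)
e = rng.uniform(-a, a, size=n)
y = (b0 + b1 * x + e > 0).astype(float)

# The derivation gives Pr(y = 1 | x) = (b0 + b1*x + a) / (2a),
# a probability linear in x. Compare it with the empirical frequency
# among observations whose x lies in a narrow window around x0.
x0 = 0.5
window = np.abs(x - x0) < 0.02
empirical = y[window].mean()
predicted = (b0 + b1 * x0 + a) / (2.0 * a)
print(empirical, predicted)  # the two should agree closely
```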

But this is the Linear Probability Model,

$$\Pr(y = 1 \mid x) = \beta_0 + x'\beta ,$$

with the mapping

$$\beta_0 = \frac{b_0 + a}{2a}, \qquad \beta = \frac{b}{2a} .$$

This method is a general device to obtain a conditional probability model of a binary variable: if we assume that the distribution of the error term is logistic, we obtain the logit model; if we assume that it is normal, we obtain the probit model; and if we assume that it is the logarithm of a Weibull distribution, the complementary log-log model.
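The general device can be illustrated by writing each assumed error distribution's CDF as a response probability; this is a minimal sketch, with the index z = b0 + x'b treated as a given number:

```python
import numpy as np
from math import erf, sqrt

# With index z = b0 + x'b, each assumed error distribution turns the
# latent-variable device into a different binary-response model.

def lpm_prob(z, a):
    # Uniform(-a, a) error -> linear probability model (valid for |z| < a)
    return (z + a) / (2.0 * a)

def logit_prob(z):
    # Logistic error -> logit model
    return 1.0 / (1.0 + np.exp(-z))

def probit_prob(z):
    # Normal error -> probit model (standard normal CDF via erf)
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def cloglog_prob(z):
    # Log-Weibull (Gumbel) error -> complementary log-log model
    return 1.0 - np.exp(-np.exp(z))

# At z = 0 the three symmetric models give probability 1/2, while the
# asymmetric complementary log-log model does not.
print(lpm_prob(0.0, 1.0), logit_prob(0.0), probit_prob(0.0), cloglog_prob(0.0))
```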