Bayesian linear regression is a type of conditional modeling in which the mean of one variable is described by a linear combination of other variables, with the goal of obtaining the posterior probability of the regression coefficients (as well as other parameters describing the distribution of the regressand) and ultimately allowing the out-of-sample prediction of the regressand (often labelled $y$) conditional on observed values of the regressors (usually $\mathbf{X}$).
In this model, and under a particular choice of prior probabilities for the parameters—so-called conjugate priors—the posterior can be found analytically.
With more arbitrarily chosen priors, the posteriors generally have to be approximated.
Consider a standard linear regression problem, in which for $i = 1, \ldots, n$ we specify the mean of the conditional distribution of $y_i$ given a $k \times 1$ predictor vector $\mathbf{x}_i$:
$$y_i = \mathbf{x}_i^\mathsf{T} \boldsymbol\beta + \varepsilon_i,$$
where $\boldsymbol\beta$ is a $k \times 1$ vector, and the $\varepsilon_i$ are independent and identically normally distributed random variables:
$$\varepsilon_i \sim N(0, \sigma^2).$$
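As a concrete illustration (not part of the formal setup), the following sketch simulates data from this model; the coefficient values, noise level, and sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

n, k = 100, 3                              # sample size and number of predictors (arbitrary)
beta_true = np.array([1.5, -2.0, 0.5])     # hypothetical coefficient vector
sigma_true = 0.8                           # hypothetical noise standard deviation

X = rng.normal(size=(n, k))                # design matrix: row i is the predictor vector x_i
eps = rng.normal(0.0, sigma_true, size=n)  # i.i.d. N(0, sigma^2) errors
y = X @ beta_true + eps                    # y_i = x_i^T beta + eps_i
```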
The ordinary least squares solution is used to estimate the coefficient vector using the Moore–Penrose pseudoinverse:
$$\hat{\boldsymbol\beta} = (\mathbf{X}^\mathsf{T}\mathbf{X})^{-1}\mathbf{X}^\mathsf{T}\mathbf{y},$$
where $\mathbf{X}$ is the $n \times k$ design matrix, each row of which is a predictor vector $\mathbf{x}_i^\mathsf{T}$, and $\mathbf{y}$ is the column $n$-vector $[y_1 \; \cdots \; y_n]^\mathsf{T}$.
This is a frequentist approach, and it assumes that there are enough measurements to say something meaningful about $\boldsymbol\beta$.
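A minimal sketch of this estimate, using the simulated `X` and `y` above:

```python
# OLS estimate beta_hat = (X^T X)^{-1} X^T y, here via the Moore–Penrose pseudoinverse
beta_hat = np.linalg.pinv(X) @ y

# Equivalent (when X^T X is invertible): solve the normal equations directly
beta_hat_alt = np.linalg.solve(X.T @ X, X.T @ y)
```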
In the Bayesian approach,[1] the data are supplemented with additional information in the form of a prior probability distribution.
The prior belief about the parameters is combined with the data's likelihood function according to Bayes' theorem to yield the posterior belief about the parameters $\boldsymbol\beta$ and $\sigma$.
The prior can take different functional forms depending on the domain and the information that is available a priori.
In fact, a "full" Bayesian analysis would require a joint likelihood
Only under the assumption of (weak) exogeneity can the joint likelihood be factored into $p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma)\, p(\mathbf{X} \mid \gamma)$.[2] The latter part is usually ignored under the assumption of disjoint parameter sets.
Moreover, under classic assumptions the values of $\mathbf{X}$ are considered to be chosen by the experimenter (for example, in a designed experiment) and therefore have a known probability distribution that involves no unknown parameters.
In this section, we will consider a so-called conjugate prior for which the posterior distribution can be derived analytically.
The conjugate prior factors as $\rho(\boldsymbol\beta, \sigma^{2}) = \rho(\sigma^{2})\, \rho(\boldsymbol\beta \mid \sigma^{2})$, where the prior on $\sigma^{2}$ is an inverse-gamma density. In the notation introduced in the inverse-gamma distribution article, this is the density of an $\text{Inv-Gamma}(a_0, b_0)$ distribution with $a_0 = \tfrac{v_0}{2}$ and $b_0 = \tfrac{1}{2} v_0 s_0^{2}$, where $v_0$ and $s_0^{2}$ are the prior degrees of freedom and prior scale, respectively.
Equivalently, it can also be described as a scaled inverse chi-squared distribution, $\text{Scale-inv-}\chi^{2}(v_0, s_0^{2})$. The conditional prior on the coefficients is a normal distribution, $\boldsymbol\beta \mid \sigma^{2} \sim \mathcal{N}(\boldsymbol\mu_0, \sigma^{2} \boldsymbol\Lambda_0^{-1})$, with prior mean $\boldsymbol\mu_0$ and prior precision matrix $\boldsymbol\Lambda_0$.
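As an illustration, the following sketch encodes this normal-inverse-gamma prior and draws one sample from it, continuing the running example; the particular values chosen for $\boldsymbol\mu_0$, $\boldsymbol\Lambda_0$, $a_0$ and $b_0$ are arbitrary.

```python
from scipy import stats

# Prior hyperparameters (arbitrary illustrative values)
mu0 = np.zeros(k)        # prior mean of beta
Lambda0 = np.eye(k)      # prior precision matrix of beta (scaled by 1/sigma^2)
a0, b0 = 2.0, 1.0        # inverse-gamma shape and scale for sigma^2

# sigma^2 ~ Inv-Gamma(a0, b0), then beta | sigma^2 ~ N(mu0, sigma^2 * Lambda0^{-1})
sigma2_draw = stats.invgamma.rvs(a0, scale=b0, random_state=rng)
beta_draw = stats.multivariate_normal.rvs(mean=mu0,
                                          cov=sigma2_draw * np.linalg.inv(Lambda0),
                                          random_state=rng)
```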
With the prior now specified, the posterior distribution can be expressed as
$$\rho(\boldsymbol\beta, \sigma^{2} \mid \mathbf{y}, \mathbf{X}) \propto \rho(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma^{2})\, \rho(\boldsymbol\beta \mid \sigma^{2})\, \rho(\sigma^{2}).$$
With some re-arrangement, the posterior mean $\boldsymbol\mu_n$ of the parameter vector $\boldsymbol\beta$ can be expressed in terms of the least squares estimator $\hat{\boldsymbol\beta}$ and the prior mean $\boldsymbol\mu_0$, with the strength of the prior indicated by the prior precision matrix $\boldsymbol\Lambda_0$:
$$\boldsymbol\mu_n = \left(\mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0\right)^{-1} \left(\mathbf{X}^\mathsf{T}\mathbf{X} \hat{\boldsymbol\beta} + \boldsymbol\Lambda_0 \boldsymbol\mu_0\right),$$
which illustrates that Bayesian inference is a compromise between the information contained in the prior and the information contained in the sample. The remaining posterior hyperparameters are the posterior precision matrix $\boldsymbol\Lambda_n = \mathbf{X}^\mathsf{T}\mathbf{X} + \boldsymbol\Lambda_0$ and the updated inverse-gamma parameters $a_n = a_0 + \tfrac{n}{2}$ and $b_n = b_0 + \tfrac{1}{2}\left(\mathbf{y}^\mathsf{T}\mathbf{y} + \boldsymbol\mu_0^\mathsf{T}\boldsymbol\Lambda_0\boldsymbol\mu_0 - \boldsymbol\mu_n^\mathsf{T}\boldsymbol\Lambda_n\boldsymbol\mu_n\right)$.
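The posterior hyperparameters can be computed in a few lines; a sketch continuing the running example (these are the standard conjugate update formulas, with variable names matching the notation used here):

```python
# Conjugate normal-inverse-gamma posterior update
Lambda_n = X.T @ X + Lambda0                                             # posterior precision
mu_n = np.linalg.solve(Lambda_n, X.T @ X @ beta_hat + Lambda0 @ mu0)     # posterior mean of beta
a_n = a0 + n / 2                                                         # updated shape
b_n = b0 + 0.5 * (y @ y + mu0 @ Lambda0 @ mu0 - mu_n @ Lambda_n @ mu_n)  # updated scale
```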
The model evidence $p(\mathbf{y} \mid m)$ is the probability of the data given the model $m$. It is also known as the marginal likelihood, and as the prior predictive density.
Here, the model is defined by the likelihood function $p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma)$ and the prior distribution on the parameters, i.e. $p(\boldsymbol\beta, \sigma)$.
This integral, which marginalizes the likelihood over the prior,
$$p(\mathbf{y} \mid m) = \int p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma)\, p(\boldsymbol\beta, \sigma)\, d\boldsymbol\beta\, d\sigma,$$
can be computed analytically, and the solution is given in the following equation:
$$p(\mathbf{y} \mid m) = \frac{1}{(2\pi)^{n/2}} \sqrt{\frac{\det(\boldsymbol\Lambda_0)}{\det(\boldsymbol\Lambda_n)}} \cdot \frac{b_0^{a_0}}{b_n^{a_n}} \cdot \frac{\Gamma(a_n)}{\Gamma(a_0)}.$$
Because we have chosen a conjugate prior, the marginal likelihood can also be easily computed by evaluating the following equality for arbitrary values of $\boldsymbol\beta$ and $\sigma$:
$$p(\mathbf{y} \mid m) = \frac{p(\boldsymbol\beta, \sigma \mid m)\, p(\mathbf{y} \mid \mathbf{X}, \boldsymbol\beta, \sigma, m)}{p(\boldsymbol\beta, \sigma \mid \mathbf{y}, \mathbf{X}, m)}.$$
Note that this equation is nothing but a re-arrangement of Bayes' theorem.
Inserting the formulas for the prior, the likelihood, and the posterior and simplifying the resulting expression leads to the analytic expression given above.
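A numerical sketch of both routes, continuing the running example: the closed-form expression for the evidence and the Bayes'-theorem rearrangement evaluated at one arbitrary parameter value (parameterized here in terms of $\sigma^2$) should agree up to floating-point error.

```python
from scipy.special import gammaln

# Closed-form log model evidence for the conjugate model
_, logdet0 = np.linalg.slogdet(Lambda0)
_, logdet_n = np.linalg.slogdet(Lambda_n)
log_evidence = (-0.5 * n * np.log(2 * np.pi)
                + 0.5 * (logdet0 - logdet_n)
                + a0 * np.log(b0) - a_n * np.log(b_n)
                + gammaln(a_n) - gammaln(a0))

# Check: log p(y) = log prior + log likelihood - log posterior at an arbitrary (beta, sigma^2)
beta_star, sigma2_star = mu_n, b_n / (a_n + 1)      # arbitrary evaluation point
log_prior = (stats.invgamma.logpdf(sigma2_star, a0, scale=b0)
             + stats.multivariate_normal.logpdf(beta_star, mu0,
                                                sigma2_star * np.linalg.inv(Lambda0)))
log_lik = stats.norm.logpdf(y, X @ beta_star, np.sqrt(sigma2_star)).sum()
log_post = (stats.invgamma.logpdf(sigma2_star, a_n, scale=b_n)
            + stats.multivariate_normal.logpdf(beta_star, mu_n,
                                               sigma2_star * np.linalg.inv(Lambda_n)))
print(log_evidence, log_prior + log_lik - log_post)  # the two values should agree
```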
In general, it may be impossible or impractical to derive the posterior distribution analytically.
However, it is possible to approximate the posterior by an approximate Bayesian inference method such as Monte Carlo sampling,[7] INLA, or variational Bayes.
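As a minimal sketch of the Monte Carlo option, continuing the running example, a Gibbs sampler can alternate draws from the full conditionals of $\boldsymbol\beta$ and $\sigma^2$. It is shown here for the conjugate prior above so that its output can be checked against the analytic posterior; with a non-conjugate prior, the exact conditional draws would be replaced by, for example, Metropolis–Hastings steps.

```python
def gibbs_sampler(X, y, mu0, Lambda0, a0, b0, n_iter=5000, rng=None):
    """Gibbs sampler for normal-inverse-gamma Bayesian linear regression."""
    if rng is None:
        rng = np.random.default_rng()
    n, k = X.shape
    Lambda_n = X.T @ X + Lambda0
    mu_n = np.linalg.solve(Lambda_n, X.T @ y + Lambda0 @ mu0)
    cov_unscaled = np.linalg.inv(Lambda_n)      # posterior covariance of beta, up to sigma^2
    sigma2 = 1.0                                # arbitrary starting value
    betas, sigma2s = [], []
    for _ in range(n_iter):
        # beta | sigma^2, y  ~  N(mu_n, sigma^2 * Lambda_n^{-1})
        beta = rng.multivariate_normal(mu_n, sigma2 * cov_unscaled)
        # sigma^2 | beta, y  ~  Inv-Gamma(a0 + (n+k)/2,
        #                                 b0 + 0.5*(||y - X beta||^2 + (beta-mu0)^T Lambda0 (beta-mu0)))
        resid = y - X @ beta
        shape = a0 + 0.5 * (n + k)
        scale = b0 + 0.5 * (resid @ resid + (beta - mu0) @ Lambda0 @ (beta - mu0))
        sigma2 = 1.0 / rng.gamma(shape, 1.0 / scale)  # Inv-Gamma draw via reciprocal of a Gamma
        betas.append(beta)
        sigma2s.append(sigma2)
    return np.array(betas), np.array(sigma2s)

beta_draws, sigma2_draws = gibbs_sampler(X, y, mu0, Lambda0, a0, b0, rng=rng)
print(beta_draws.mean(axis=0))   # should be close to the analytic posterior mean mu_n
```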
A similar analysis can be performed for the general case of multivariate regression, part of which provides for Bayesian estimation of covariance matrices: see Bayesian multivariate linear regression.