Bayes linear analysis attempts to solve this problem by developing theory and practice for using partially specified probability models.
Bayes linear in its current form has been primarily developed by Michael Goldstein.
Mathematically and philosophically it extends Bruno de Finetti's Operational Subjective approach to probability and statistics.
The first step in such an analysis is to determine a person's subjective probabilities, e.g. by asking about their betting behaviour for each possible outcome.
When we learn D, conditional probabilities for B are determined by applying Bayes' rule.
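Written out for a discrete partition of outcomes, this means that for the observed value $d$ of $D$ and each possible value $b$ of $B$,

$$ P(B = b \mid D = d) = \frac{P(D = d,\, B = b)}{P(D = d)} = \frac{P(D = d,\, B = b)}{\sum_{b'} P(D = d,\, B = b')} . $$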
Practitioners of subjective Bayesian statistics routinely analyse datasets for which the outcome space D × B is so large that subjective probabilities cannot be meaningfully determined for every one of its elements.
This is normally accomplished by assuming exchangeability, using parameterized models with prior distributions over the parameters, and appealing to de Finetti's theorem to justify that this produces valid operational subjective probabilities over D × B.
The difficulty with such an approach is that the validity of the statistical analysis requires the subjective probabilities to be a good representation of an individual's beliefs; however, this method results in a very precise specification over D × B, and it is often difficult to articulate what it would mean to adopt such belief specifications.
In contrast to the traditional Bayesian paradigm, Bayes linear statistics, following de Finetti, uses prevision, or subjective expectation, as its primitive; probability is then defined as the expectation of an indicator variable.
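For example, for an event $A$ with indicator variable $1_A$ (equal to 1 if $A$ occurs and 0 otherwise), the probability of $A$ is recovered as

$$ P(A) = E(1_A) . $$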
Instead of conditioning on the observed data, Bayes linear analysis computes an adjusted expectation: a linear estimator of the form

$$ E_D(X) = \sum_{i=0}^{k} h_i D_i , $$

where $D_0 = 1$ (so that $h_0$ plays the role of a constant term) and the coefficients $h_0, \dots, h_k$ are chosen in order to minimise the prior expected loss in estimating $X$, that is, to minimise

$$ E\left( \left[ X - \sum_{i=0}^{k} h_i D_i \right]^2 \right) . $$

From a proof provided in (Goldstein and Wooff 2007) it can be shown that

$$ E_D(X) = E(X) + \operatorname{Cov}(X, D)\,\operatorname{Var}(D)^{-1}\,\bigl(D - E(D)\bigr) . $$

For the case where $\operatorname{Var}(D)$ is not invertible, the Moore–Penrose pseudoinverse should be used instead.
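As a minimal numerical sketch of this closed-form adjustment, the following code evaluates it with NumPy; the helper name adjusted_expectation and all prior specifications below are hypothetical illustrations, not taken from the original text, and numpy.linalg.pinv is used so the same code also covers a singular Var(D).

```python
import numpy as np

def adjusted_expectation(E_X, E_D, cov_XD, var_D, d_obs):
    """Bayes linear adjusted expectation:
    E_D(X) = E(X) + Cov(X, D) Var(D)^+ (d - E(D)).
    The Moore-Penrose pseudoinverse is used so the formula
    also applies when Var(D) is singular."""
    E_X = np.atleast_1d(np.asarray(E_X, dtype=float))
    E_D = np.atleast_1d(np.asarray(E_D, dtype=float))
    cov_XD = np.atleast_2d(np.asarray(cov_XD, dtype=float))
    var_D = np.atleast_2d(np.asarray(var_D, dtype=float))
    d_obs = np.atleast_1d(np.asarray(d_obs, dtype=float))
    return E_X + cov_XD @ np.linalg.pinv(var_D) @ (d_obs - E_D)

# Hypothetical partial prior specification for B = (Y1, Y2) and D = (X1, X2):
E_B = [5.0, 3.0]                 # prior expectations E(Y1), E(Y2)
E_D = [5.0, 3.0]                 # prior expectations E(X1), E(X2)
cov_BD = [[1.0, 0.5],            # Cov(B, D), specified subjectively
          [0.5, 1.0]]
var_D = [[2.0, 0.5],             # Var(D), specified subjectively
         [0.5, 2.0]]
d = [6.0, 2.0]                   # observed values of X1, X2

print(adjusted_expectation(E_B, E_D, cov_BD, var_D, d))  # adjusted expectation of B
```

Only expectations, variances, and covariances enter this computation; no full joint probability specification over D × B is required.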