Bayes linear

Bayes linear is a subjectivist statistical methodology and framework. Traditional subjective Bayesian analysis is based upon fully specified probability distributions, which are very difficult to specify at the necessary level of detail. Bayes linear attempts to solve this problem by developing theory and practice for using partially specified probability models. Bayes linear in its current form has been primarily developed by Michael Goldstein. Mathematically and philosophically it extends Bruno de Finetti's subjective theory of probability.

Because the probability model is only partially specified in Bayes linear, it is not possible to calculate conditional probabilities via Bayes' rule. Instead, Bayes linear suggests the calculation of an adjusted expectation.

To conduct a Bayes linear analysis it is necessary to identify some values that one expects to know shortly by making measurements, D, and some future values which one would like to know, B. Here D refers to a vector containing data and B to a vector containing quantities to be predicted. In the following example B and D are taken to be two-dimensional vectors, i.e.

B = (Y_1,Y_2),~ D = (X_1,X_2)

In order to specify a Bayes linear model it is necessary to supply expectations for the vectors B and D, and to specify the covariance between each pair of their elements.

For example, the expectations are specified as:

E(Y_1)=5,~E(Y_2)=3,~E(X_1)=5,~E(X_2)=3

and the covariance matrix is specified as:

\begin{matrix}
    &    X_1    &    X_2    &    Y_1    &    Y_2     \\
X_1 &      1    &    u      &    \gamma    &    \gamma     \\
X_2 &      u    &    1      &    \gamma    &    \gamma     \\
Y_1 &      \gamma  &    \gamma    &    1      &    v       \\
Y_2 &      \gamma  &    \gamma    &    v      &    1       \\
\end{matrix}.

The repetition in this matrix has some interesting implications, to be discussed shortly.
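
As a concrete illustration, the specification above can be written down numerically. The following sketch, in Python with NumPy, is not part of the original article; it fills in illustrative values u = 0.5, v = 0.4 and \gamma = 0.3, which are assumptions since the article leaves these covariances symbolic:

import numpy as np

# Prior expectations for (X_1, X_2, Y_1, Y_2), as specified above.
mean = np.array([5.0, 3.0, 5.0, 3.0])

# Illustrative values; the article leaves u, v and gamma unspecified,
# so these numbers are assumptions made for the example.
u, v, gamma = 0.5, 0.4, 0.3

# Joint covariance matrix over (X_1, X_2, Y_1, Y_2), matching the
# pattern of repetition shown above.
cov = np.array([
    [1.0,   u,     gamma, gamma],
    [u,     1.0,   gamma, gamma],
    [gamma, gamma, 1.0,   v],
    [gamma, gamma, v,     1.0],
])

# Blocks used by the adjustment formula given below.
var_D  = cov[:2, :2]   # Var(D), covariance of D = (X_1, X_2)
cov_BD = cov[2:, :2]   # Cov(B, D) between B = (Y_1, Y_2) and D
E_D, E_B = mean[:2], mean[2:]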

An adjusted expectation is a linear estimator of the form

c_0 + c_1X_1 + c_2X_2\,

where c_0, c_1 and c_2 are chosen to minimise the prior expected loss in estimating the quantity of interest, Y_1 or Y_2 in this case. For Y_1, this means choosing c_0, c_1 and c_2 to minimise

E([Y_1 - c_0 - c_1X_1 - c_2X_2]^2)\,
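
The minimising coefficients follow from standard least-squares algebra. As a sketch (this derivation is not spelled out in the article), setting the derivatives of the expected loss with respect to the coefficients to zero gives

c_0 = E(Y_1) - c_1E(X_1) - c_2E(X_2), \qquad \begin{pmatrix} c_1 \\ c_2 \end{pmatrix} = Var(D)^{-1}Cov(D,Y_1)

which has the same form as the general solution quoted below.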

In general, the adjusted expectation of a quantity X is calculated as

E_D(X) = \sum^k_{i=0} h_iD_i

where D_0 is taken to be the constant 1 and h_0, \dots, h_k are chosen to minimise

E\left(\left[X-\sum^k_{i=0}h_iD_i\right]^2\right)

As shown in Goldstein and Wooff (2007), the minimising choice yields:

E_D(X) = E(X) + Cov(X,D)Var(D)^{-1}(D - E(D))\,

For the case where Var(D) is not invertible, the Moore-Penrose pseudoinverse should be used instead.
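
Continuing the numerical sketch above, the adjusted expectation of B can then be computed directly from this formula. The observed data values below are hypothetical, and the whole snippet is an illustration rather than part of the original article:

import numpy as np

# Prior specification from the sketch above (u = 0.5, v = 0.4 and
# gamma = 0.3 remain illustrative assumptions).
E_B, E_D = np.array([5.0, 3.0]), np.array([5.0, 3.0])
var_D  = np.array([[1.0, 0.5], [0.5, 1.0]])   # Var(D)
cov_BD = np.array([[0.3, 0.3], [0.3, 0.3]])   # Cov(B, D)

# Hypothetical observed data for D = (X_1, X_2).
D_obs = np.array([6.0, 3.5])

# E_D(B) = E(B) + Cov(B,D) Var(D)^{-1} (D - E(D)); np.linalg.pinv is the
# Moore-Penrose pseudoinverse, which also covers singular Var(D).
adjusted_B = E_B + cov_BD @ np.linalg.pinv(var_D) @ (D_obs - E_D)
print(adjusted_B)   # [5.3 3.3] with these numbers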

External links and references

  • Goldstein, Michael; Wooff, David (2007). Bayes Linear Statistics: Theory and Methods. Wiley.