Bayesian multivariate linear regression
Consider a collection of m linear regression problems for k observations, related through a set of n common predictor variables and jointly normal errors $\{\varepsilon_c\}$:

$$y_c = X\beta_c + \varepsilon_c, \qquad c = 1, \ldots, m,$$

where the subscript c denotes a column vector of k observations for each measurement, so that $y_c$ and $\varepsilon_c$ are $k \times 1$ vectors and $X$ is the common $k \times n$ design matrix whose r-th row is the predictor vector $x_r^T$.
The noise terms are jointly normal over each collection of k observations. That is, each row vector $\varepsilon_r$ is an m-length vector of correlated errors, one for each of the dependent variables, and the model can equivalently be written row-wise as

$$y_r^T = x_r^T B + \varepsilon_r^T, \qquad \varepsilon_r \sim N_m(0, \Sigma_\varepsilon),$$

where the noise $\varepsilon_r$ is i.i.d. and normally distributed for all rows r, and B is an $n \times m$ matrix whose columns are the coefficient vectors $\beta_1, \ldots, \beta_m$.
We can write the entire regression problem in matrix form as

$$Y = XB + E,$$

where Y and E are $k \times m$ matrices whose columns are $y_1, \ldots, y_m$ and $\varepsilon_1, \ldots, \varepsilon_m$, respectively.
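To make the setup concrete, the following NumPy sketch simulates data from this model. The dimensions k, n, m and the particular values of B and $\Sigma_\varepsilon$ are arbitrary illustrative choices, not taken from the article:

```python
import numpy as np

rng = np.random.default_rng(0)

k, n, m = 200, 3, 2           # observations, predictors, dependent variables
X = rng.normal(size=(k, n))   # common k x n design matrix
B = rng.normal(size=(n, m))   # true n x m coefficient matrix

# Rows of E are i.i.d. N(0, Sigma_eps): errors are correlated across the
# m dependent variables but independent across observations.
Sigma_eps = np.array([[1.0, 0.6],
                      [0.6, 2.0]])
E = rng.multivariate_normal(np.zeros(m), Sigma_eps, size=k)

Y = X @ B + E                 # k x m response matrix
```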
The classical, frequentist linear least squares solution is to simply estimate the matrix of regression coefficients $\hat{B}$ using the Moore-Penrose pseudoinverse:

$$\hat{B} = (X^T X)^{-1} X^T Y = X^{+} Y.$$
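Continuing the simulation above, a minimal sketch checking this estimator with NumPy (the pseudoinverse and the normal-equations form agree when X has full column rank):

```python
# Classical least squares estimate via the Moore-Penrose pseudoinverse.
B_hat = np.linalg.pinv(X) @ Y

# Equivalent closed form, assuming X has full column rank.
assert np.allclose(B_hat, np.linalg.solve(X.T @ X, X.T @ Y))
```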
To obtain the Bayesian solution, we need to specify the conditional likelihood and then find the appropriate conjugate prior. As with the univariate case of Bayesian linear regression, we will find that we can specify a natural conditional conjugate prior (which is scale dependent).
Let us write our conditional likelihood as

$$\rho(E \mid \Sigma_\varepsilon) \propto |\Sigma_\varepsilon|^{-k/2} \exp\!\left(-\tfrac{1}{2} \operatorname{tr}\!\left(E^T E \, \Sigma_\varepsilon^{-1}\right)\right);$$

writing the error E in terms of Y, X, and B yields

$$\rho(Y \mid X, B, \Sigma_\varepsilon) \propto |\Sigma_\varepsilon|^{-k/2} \exp\!\left(-\tfrac{1}{2} \operatorname{tr}\!\left((Y - XB)^T (Y - XB) \, \Sigma_\varepsilon^{-1}\right)\right).$$
We seek a natural conjugate prior, that is, a joint density $\rho(B, \Sigma_\varepsilon)$ of the same functional form as the likelihood. Since the likelihood is quadratic in B, we re-write the likelihood so it is normal in $(B - \hat{B})$, the deviation from the classical sample estimate.
Using the same technique as with Bayesian linear regression, we decompose the exponential term using a matrix form of the sum-of-squares technique. Here, however, we will also need some results from matrix calculus (the Kronecker product and the vectorization transformation).
First, let us apply the sum-of-squares decomposition to obtain a new expression for the likelihood:

$$\rho(Y \mid X, B, \Sigma_\varepsilon) \propto |\Sigma_\varepsilon|^{-(k-n)/2} \exp\!\left(-\tfrac{1}{2} \operatorname{tr}\!\left(S^T S \, \Sigma_\varepsilon^{-1}\right)\right) \, |\Sigma_\varepsilon|^{-n/2} \exp\!\left(-\tfrac{1}{2} \operatorname{tr}\!\left((B - \hat{B})^T X^T X (B - \hat{B}) \, \Sigma_\varepsilon^{-1}\right)\right),$$

where $S = Y - X\hat{B}$ is the matrix of residuals.
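This decomposition rests on the identity $(Y - XB)^T (Y - XB) = S^T S + (B - \hat{B})^T X^T X (B - \hat{B})$, which holds because the residual matrix S is orthogonal to the columns of X. Continuing the sketch above, a quick numeric check with an arbitrary trial value of B:

```python
# Check the matrix sum-of-squares identity behind the decomposition:
# (Y - XB)^T (Y - XB) = S^T S + (B - B_hat)^T X^T X (B - B_hat).
B_trial = rng.normal(size=(n, m))   # arbitrary trial value of B
S = Y - X @ B_hat                   # residuals, orthogonal to columns of X

lhs = (Y - X @ B_trial).T @ (Y - X @ B_trial)
rhs = S.T @ S + (B_trial - B_hat).T @ (X.T @ X) @ (B_trial - B_hat)
assert np.allclose(lhs, rhs)
```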
We would like to develop a conditional form for the priors:

$$\rho(B, \Sigma_\varepsilon) = \rho(\Sigma_\varepsilon) \, \rho(B \mid \Sigma_\varepsilon),$$

where $\rho(\Sigma_\varepsilon)$ is an inverse-Wishart distribution and $\rho(B \mid \Sigma_\varepsilon)$ is some form of normal distribution in the matrix B. This is accomplished using the vectorization transformation, which converts the likelihood from a function of the matrices $B, \hat{B}$ to a function of the vectors $\beta = \operatorname{vec}(B), \hat{\beta} = \operatorname{vec}(\hat{B})$.
Write

$$\operatorname{tr}\!\left((B - \hat{B})^T X^T X (B - \hat{B}) \, \Sigma_\varepsilon^{-1}\right) = \operatorname{vec}(B - \hat{B})^T \operatorname{vec}\!\left(X^T X (B - \hat{B}) \, \Sigma_\varepsilon^{-1}\right).$$

Let

$$\operatorname{vec}\!\left(X^T X (B - \hat{B}) \, \Sigma_\varepsilon^{-1}\right) = \left(\Sigma_\varepsilon^{-1} \otimes X^T X\right) \operatorname{vec}(B - \hat{B}),$$

using the identity $\operatorname{vec}(ACD) = (D^T \otimes A)\operatorname{vec}(C)$. Then

$$\operatorname{tr}\!\left((B - \hat{B})^T X^T X (B - \hat{B}) \, \Sigma_\varepsilon^{-1}\right) = (\beta - \hat{\beta})^T \left(\Sigma_\varepsilon^{-1} \otimes X^T X\right) (\beta - \hat{\beta}),$$

which will lead to a likelihood which is normal in $(\beta - \hat{\beta})$.
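Both identities are easy to verify numerically. The sketch below, continuing from above, checks the vec/Kronecker identity and the resulting quadratic form; note that vec here means column-stacking, which in NumPy is a reshape in Fortran order:

```python
def vec(M):
    # Column-stacking vectorization vec(M).
    return M.reshape(-1, order="F")

# vec(A C D) == (D^T kron A) vec(C) on random matrices.
A = rng.normal(size=(4, 3))
C = rng.normal(size=(3, 5))
D = rng.normal(size=(5, 2))
assert np.allclose(vec(A @ C @ D), np.kron(D.T, A) @ vec(C))

# The trace term equals a quadratic form in vec(B - B_hat).
Sigma_inv = np.linalg.inv(Sigma_eps)
dB = B_trial - B_hat
lhs = np.trace(dB.T @ (X.T @ X) @ dB @ Sigma_inv)
rhs = vec(dB) @ np.kron(Sigma_inv, X.T @ X) @ vec(dB)
assert np.allclose(lhs, rhs)
```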
With the likelihood in a more tractable form, we can now find a natural (conditional) conjugate prior.
The natural conjugate prior takes the factored form above: an inverse-Wishart distribution on the error covariance,

$$\Sigma_\varepsilon \sim \mathcal{W}^{-1}(V_0, \nu_0),$$

together with a conditional normal prior on the vectorized coefficients,

$$\beta \mid \Sigma_\varepsilon \sim N\!\left(\beta_0, \, \Sigma_\varepsilon \otimes \Lambda_0^{-1}\right),$$

so that the posterior for $(\beta, \Sigma_\varepsilon)$ belongs to the same family, with updated hyperparameters.
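Under this prior the posterior is available in closed form. The sketch below, continuing from above, computes the standard updated hyperparameters; the prior values (B0, Lambda0, V0, nu0) are arbitrary choices for illustration:

```python
# Conjugate update sketch; prior hyperparameters are arbitrary choices.
B0 = np.zeros((n, m))    # prior mean of B
Lambda0 = np.eye(n)      # prior precision scale on the coefficients
V0 = np.eye(m)           # inverse-Wishart scale matrix
nu0 = m + 2              # inverse-Wishart degrees of freedom

Lambda_n = X.T @ X + Lambda0
B_n = np.linalg.solve(Lambda_n, X.T @ Y + Lambda0 @ B0)
nu_n = nu0 + k
V_n = (V0
       + (Y - X @ B_n).T @ (Y - X @ B_n)
       + (B_n - B0).T @ Lambda0 @ (B_n - B0))

# Posterior: Sigma_eps | Y ~ InvWishart(V_n, nu_n) and
#            beta | Sigma_eps, Y ~ N(vec(B_n), kron(Sigma_eps, inv(Lambda_n))).
```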