Talk:Schur complement
Should it be mentioned that the Schur complement is produced by Gauss-reducing the matrix M for all the pivots in D? That's what it's about, isn't it: what linear regression does to the variance matrix. - Martin Vermeer
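A small numerical sketch of that claim (the block labels A, B, C, D and the NumPy setup are my own, not from the article): one block step of Gaussian elimination that pivots out the D block leaves A - B D^(-1) C, the Schur complement of D, in the top-left corner.

```python
import numpy as np

rng = np.random.default_rng(0)
p, q = 2, 3
A = rng.standard_normal((p, p))
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 5 * np.eye(q)  # keep D well-conditioned

M = np.block([[A, B], [C, D]])

# One block step of Gaussian elimination: subtract B D^{-1} times the
# bottom block rows from the top block rows, zeroing the (1,2) block.
E = np.eye(p + q)
E[:p, p:] = -B @ np.linalg.solve(D, np.eye(q))
reduced = E @ M

schur = A - B @ np.linalg.solve(D, C)  # Schur complement of D in M
assert np.allclose(reduced[:p, :p], schur)
assert np.allclose(reduced[:p, p:], 0)
```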
"conditional variance" in the applications section
There, it is claimed that:

the conditional variance of X given Y is the Schur complement of C in V:

    var(X | Y) = A - B C^(-1) B^T,

where V = [ A  B ; B^T  C ] is the joint covariance matrix of (X, Y).
To me, var(X | Y) is a function of the random variable Y, and hence a random variable itself. I don't see any Y-dependence above, so to me the implication is that this function is constant for all values of Y; is this a consequence of the normality assumption? Btyner 22:18, 10 February 2006 (UTC)
- It's a consequence of joint normality. See multivariate normal distribution. Of course one can easily construct other, non-normal examples in which the conditional variance does not depend on Y but differs from the unconditional variance. But in a more general setting, the conditional variance given Y would depend on Y. Michael Hardy 01:19, 11 February 2006 (UTC)
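A quick Monte Carlo illustration of that point (the block labels V = [[A, B], [B^T, C]], with X the first block and Y the second, are assumed here): under joint normality the residual X - B C^(-1) Y of regressing X on Y is independent of Y, and its covariance equals the Schur complement of C in V, which is why the conditional variance does not depend on the observed value of Y.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q = 2, 2
G = rng.standard_normal((p + q, p + q))
V = G @ G.T + np.eye(p + q)          # a random SPD joint covariance matrix
A, B, Cmat = V[:p, :p], V[:p, p:], V[p:, p:]

# Draw jointly normal samples and form the regression residual X - B C^{-1} Y.
Z = rng.multivariate_normal(np.zeros(p + q), V, size=200_000)
X, Y = Z[:, :p], Z[:, p:]
resid = X - Y @ np.linalg.solve(Cmat, B.T)   # row-wise x - B C^{-1} y

schur = A - B @ np.linalg.solve(Cmat, B.T)   # Schur complement of C in V
emp = np.cov(resid, rowvar=False)            # empirical residual covariance
assert np.allclose(emp, schur, atol=0.1)
```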
- The definition used in the article is pretty standard. I can't see mention of any alternative in "The Schur Complement and Its Applications", although that goes into detail about the history and notation in Section 0.1. (I have taken the liberty of adding a heading to your comment. I hope you don't mind.) LachlanA (talk) 22:02, 19 January 2008 (UTC)
This article's definition implies both of the following: Let

    M = [ A  B ]
        [ C  D ]

where A is a p×p matrix and D is a q×q matrix, so that M is a (p+q)×(p+q) matrix.
Then the Schur complement of the block D of the matrix M is the p×p matrix

    A - B D^(-1) C,

and the Schur complement of the block A of the matrix M is the q×q matrix

    D - C A^(-1) B.
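A sketch checking both definitions numerically (random test matrices and sizes are my own choice), together with the classical determinant identity det(M) = det(D)·det(M/D) = det(A)·det(M/A):

```python
import numpy as np

rng = np.random.default_rng(2)
p, q = 2, 3
A = rng.standard_normal((p, p)) + 4 * np.eye(p)  # keep A invertible
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 4 * np.eye(q)  # keep D invertible
M = np.block([[A, B], [C, D]])

M_over_D = A - B @ np.linalg.solve(D, C)   # Schur complement of D: p x p
M_over_A = D - C @ np.linalg.solve(A, B)   # Schur complement of A: q x q
assert M_over_D.shape == (p, p) and M_over_A.shape == (q, q)

# det(M) = det(D) det(M/D) = det(A) det(M/A)
assert np.isclose(np.linalg.det(M), np.linalg.det(D) * np.linalg.det(M_over_D))
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(M_over_A))
```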
I'll dig out Strang's book on Tuesday and see what it says. Michael Hardy (talk) 23:42, 19 January 2008 (UTC)
another feature
It'd be nice to set up Ax = b for a 2×2 matrix, and then use the Schur complement to show what the values of x_1 and x_2 are. -- 131.215.105.118 (talk) 18:48, 16 November 2007 (UTC)
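One way the requested example might look (the block 2×2 partition and the NumPy setup are my assumptions): solve [A B; C D][x1; x2] = [b1; b2] by first solving the Schur-complement system (A - B D^(-1) C) x1 = b1 - B D^(-1) b2, then back-substituting for x2.

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 2, 3
A = rng.standard_normal((p, p)) + 4 * np.eye(p)
B = rng.standard_normal((p, q))
C = rng.standard_normal((q, p))
D = rng.standard_normal((q, q)) + 4 * np.eye(q)  # keep D invertible
b1, b2 = rng.standard_normal(p), rng.standard_normal(q)

# (A - B D^{-1} C) x1 = b1 - B D^{-1} b2, then x2 = D^{-1} (b2 - C x1).
S = A - B @ np.linalg.solve(D, C)                  # Schur complement of D
x1 = np.linalg.solve(S, b1 - B @ np.linalg.solve(D, b2))
x2 = np.linalg.solve(D, b2 - C @ x1)

# Compare against solving the full block system directly.
x = np.linalg.solve(np.block([[A, B], [C, D]]), np.concatenate([b1, b2]))
assert np.allclose(np.concatenate([x1, x2]), x)
```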