Entropy power inequality
In mathematics, the entropy power inequality is a result in probability theory that relates to the so-called "entropy power" of random variables. It shows that the entropy power of suitably well-behaved random variables is superadditive: the entropy power of a sum of independent random variables is at least the sum of their individual entropy powers. The entropy power inequality was proved in 1948 by Claude Shannon in his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary.
Statement of the inequality
For a random variable X : Ω → R^n with probability density function f : R^n → R, the differential entropy of X, denoted h(X), is defined to be

    h(X) = -\int_{\mathbb{R}^n} f(x) \log f(x) \, dx

(with the natural logarithm), and the entropy power of X, denoted N(X), is defined to be
    N(X) = \frac{1}{2\pi e} \, e^{\frac{2}{n} h(X)}.
In particular, N(X) = |K|^{1/n} when X ~ Φ_K, where Φ_K denotes the multivariate normal density with covariance matrix K.
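As a concrete illustration of these definitions, the following minimal Python sketch (assuming NumPy and SciPy are available; the value of σ and the integration grid are arbitrary choices) recovers h(X) and N(X) numerically for a one-dimensional Gaussian and compares them with the closed forms h(X) = ½ log(2πeσ²) and N(X) = σ²:

    import numpy as np
    from scipy.stats import norm

    # Numerically approximate h(X) = -∫ f log f dx for a 1-D Gaussian
    # and form the entropy power N(X) = exp(2 h(X)) / (2 pi e) with n = 1.
    # sigma and the grid are illustrative choices, not part of the article.
    sigma = 1.7
    dx = 1e-4
    x = np.arange(-12 * sigma, 12 * sigma, dx)
    f = norm.pdf(x, scale=sigma)

    h_num = -np.sum(f * np.log(f)) * dx                 # differential entropy (nats)
    N_num = np.exp(2.0 * h_num) / (2.0 * np.pi * np.e)  # entropy power

    print(h_num, 0.5 * np.log(2 * np.pi * np.e * sigma**2))  # should agree closely
    print(N_num, sigma**2)                                    # should agree closely

Both printed pairs should match to several decimal places, illustrating that for a Gaussian the entropy power equals the variance (here |K|^{1/n} = σ²).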
Let X and Y be independent random variables with probability density functions in the L^p space L^p(R^n) for some p > 1. Then

    N(X + Y) \geq N(X) + N(Y).
Moreover, equality holds if and only if X and Y are multivariate normal random variables with proportional covariance matrices.
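The inequality can also be checked numerically for a non-Gaussian example. The sketch below (assuming NumPy; the choice of Uniform(0, 1) summands and the grid step are arbitrary) convolves two uniform densities on a grid and compares N(X + Y) with N(X) + N(Y); the inequality is strict here because the summands are not Gaussian:

    import numpy as np

    # Check N(X + Y) >= N(X) + N(Y) for X, Y independent Uniform(0, 1)
    # by convolving their densities on a grid of step dx.
    dx = 1e-3
    f = np.ones(int(1.0 / dx))   # density of Uniform(0, 1) sampled on the grid

    def entropy(p):
        """Differential entropy -∫ p log p of a density sampled on the grid."""
        p = p[p > 0]
        return -np.sum(p * np.log(p)) * dx

    def entropy_power(p):
        """Entropy power N = exp(2 h) / (2 pi e) for a one-dimensional density."""
        return np.exp(2.0 * entropy(p)) / (2.0 * np.pi * np.e)

    g = np.convolve(f, f) * dx   # density of X + Y: triangular on [0, 2]

    print(entropy_power(g), entropy_power(f) + entropy_power(f))
    # ≈ 0.159 vs ≈ 0.117, so the inequality holds strictly for these summands.

By contrast, for independent Gaussians with proportional covariances the two sides coincide, matching the equality condition stated above.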
References
- Dembo, Amir; Cover, Thomas M.; Thomas, Joy A. (1991). "Information-theoretic inequalities". IEEE Trans. Inform. Theory 37 (6): 1501–1518. ISSN 0018-9448. MR 1134291.
- Gardner, Richard J. (2002). "The Brunn-Minkowski inequality". Bull. Amer. Math. Soc. (N.S.) 39 (3): 355–405.
- Shannon, Claude E. (1948). "A mathematical theory of communication". Bell System Tech. J. 27: 379–423, 623–656.
- Stam, A. J. (1959). "Some inequalities satisfied by the quantities of information of Fisher and Shannon". Information and Control 2: 101–112.