Entropy power inequality

In mathematics, the entropy power inequality is a result in information theory concerning the so-called "entropy power" of random variables. It states that the entropy power of suitably well-behaved independent random variables is a superadditive function. The entropy power inequality was proved in 1948 by Claude Shannon in his seminal paper "A Mathematical Theory of Communication". Shannon also provided a sufficient condition for equality to hold; Stam (1959) showed that the condition is in fact necessary.

Statement of the inequality

For a random variable X : Ω → R^n with probability density function f : R^n → R, the differential entropy of X, denoted h(X), is defined to be

h(X) = -\int_{\mathbb{R}^n} f(x) \log f(x) \, dx

and the entropy power of X, denoted N(X), is defined to be

N(X) = \frac{1}{2\pi e} e^{\frac{2}{n} h(X)}.

In particular, N(X) = |K|^{1/n} when X is normally distributed with covariance matrix K.
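As a quick check on these definitions, the following sketch (not part of the original article; it assumes NumPy and uses natural logarithms, consistent with the definition of h(X) above) computes the entropy power of a multivariate normal from its closed-form differential entropy and compares it with |K|^{1/n}.

    # Minimal numerical sketch: for X ~ N(0, K) in R^n,
    # h(X) = (1/2) log((2*pi*e)^n |K|), so the entropy power
    # defined above reduces to |K|^(1/n).
    import numpy as np

    def gaussian_entropy_power(K):
        """Entropy power N(X) of X ~ N(0, K), computed from h(X)."""
        n = K.shape[0]
        h = 0.5 * np.log((2 * np.pi * np.e) ** n * np.linalg.det(K))
        return np.exp(2.0 / n * h) / (2 * np.pi * np.e)

    K = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
    print(gaussian_entropy_power(K))              # ~1.3229
    print(np.linalg.det(K) ** (1 / K.shape[0]))   # |K|^(1/n), same value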

Let X and Y be independent random variables with probability density functions in the L^p space L^p(R^n) for some p > 1. Then

N(X+Y) \geq N(X) + N(Y).

Moreover, equality holds if and only if X and Y are multivariate normal random variables with proportional covariance matrices.
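Because both sides of the inequality are available in closed form for Gaussian inputs (for independent X ~ N(0, K_X) and Y ~ N(0, K_Y), the sum X + Y is distributed as N(0, K_X + K_Y)), the statement and its equality condition can be illustrated numerically. The following sketch, under those Gaussian assumptions and not taken from the article, shows equality for proportional covariance matrices and strict inequality otherwise.

    # For independent Gaussians X ~ N(0, Kx) and Y ~ N(0, Ky), X + Y is
    # N(0, Kx + Ky), so N(X) = |Kx|^(1/n), N(Y) = |Ky|^(1/n), and
    # N(X + Y) = |Kx + Ky|^(1/n); both sides of the inequality are exact.
    import numpy as np

    def entropy_power(K):
        # Entropy power of a zero-mean Gaussian with covariance K: |K|^(1/n).
        return np.linalg.det(K) ** (1.0 / K.shape[0])

    Kx = np.array([[1.0, 0.2],
                   [0.2, 2.0]])

    # Proportional covariances: equality holds.
    Ky = 3.0 * Kx
    print(entropy_power(Kx + Ky), entropy_power(Kx) + entropy_power(Ky))   # both 5.6

    # Non-proportional covariances: the inequality is strict.
    Kz = np.array([[2.0, -0.5],
                   [-0.5, 1.0]])
    print(entropy_power(Kx + Kz), entropy_power(Kx) + entropy_power(Kz))   # left side larger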
