Information theory and measure theory

Entropy as a "measure"

There is an analogy between Shannon's basic "measures" of the information content of random variables and a measure over sets. Namely, the joint entropy, conditional entropy, and mutual information can be regarded as the measures of a set union, set difference, and set intersection, respectively.

If we associate abstract sets \tilde X and \tilde Y with arbitrary discrete random variables X and Y, somehow representing the information borne by X and Y, respectively, such that:

  • \mu(\tilde X \cap \tilde Y) = 0 whenever X and Y are unconditionally independent, and
  • \tilde X = \tilde Y whenever X and Y are such that either one is completely determined by the other (i.e. by a bijection);

where μ is a signed measure over these sets, and we set:

H(X) = \mu(\tilde X),
H(Y) = \mu(\tilde Y),
H(X,Y) = \mu(\tilde X \cup \tilde Y),
H(X|Y) = \mu(\tilde X \,\backslash\, \tilde Y),
I(X;Y) = \mu(\tilde X \cap \tilde Y);

we find that Shannon's "measure" of information content satisfies all the postulates and basic properties of a formal measure over sets. This can be a handy mnemonic device in some situations, e.g.

H(X,Y) = H(X) + H(Y|X) \quad\longleftrightarrow\quad \mu(\tilde X \cup \tilde Y) = \mu(\tilde X) + \mu(\tilde Y \,\backslash\, \tilde X),
I(X;Y) = H(X) - H(X|Y) \quad\longleftrightarrow\quad \mu(\tilde X \cap \tilde Y) = \mu(\tilde X) - \mu(\tilde X \,\backslash\, \tilde Y).

Because the entropy, joint entropy, conditional entropy, and bivariate mutual information of discrete random variables are all nonnegative, many basic inequalities of information theory (among no more than two random variables) can be derived from this formulation by considering the measure μ to be nonnegative.
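
As a concrete check of this correspondence, the short sketch below (a minimal example assuming NumPy; the joint distribution and helper names are chosen purely for illustration) computes the Shannon quantities of a small joint pmf directly from their definitions and verifies the two identities above numerically:

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a pmf given as an array of probabilities."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A small joint pmf p(x, y): rows index X, columns index Y (illustrative values).
p_xy = np.array([[0.4, 0.1],
                 [0.1, 0.4]])
p_x = p_xy.sum(axis=1)   # marginal of X
p_y = p_xy.sum(axis=0)   # marginal of Y

H_X = entropy(p_x)             # H(X)   <->  mu(X~)
H_Y = entropy(p_y)             # H(Y)   <->  mu(Y~)
H_XY = entropy(p_xy.ravel())   # H(X,Y) <->  mu(X~ ∪ Y~)

# Conditional entropies from their definitions: H(Y|X) = sum_x p(x) H(Y | X = x), etc.
H_Y_given_X = sum(p_x[i] * entropy(p_xy[i, :] / p_x[i]) for i in range(2))
H_X_given_Y = sum(p_y[j] * entropy(p_xy[:, j] / p_y[j]) for j in range(2))

# Mutual information from the double sum over the joint pmf.
I_XY = sum(p_xy[i, j] * np.log2(p_xy[i, j] / (p_x[i] * p_y[j]))
           for i in range(2) for j in range(2) if p_xy[i, j] > 0)

assert np.isclose(H_XY, H_X + H_Y_given_X)   # mu(X~ ∪ Y~) = mu(X~) + mu(Y~ \ X~)
assert np.isclose(I_XY, H_X - H_X_given_Y)   # mu(X~ ∩ Y~) = mu(X~) - mu(X~ \ Y~)
```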

Multivariate mutual information

Certain extensions to the definitions of Shannon's basic measures of information are necessary to deal with the σ-algebra generated by the sets that would be associated with three or more arbitrary random variables. (See Reza, pp. 106–108, for an informal but rather complete discussion.) Namely, H(X,Y,Z,\cdots) needs to be defined in the obvious way as the entropy of a joint distribution, and a multivariate mutual information I(X;Y;Z;\cdots) defined in a suitable manner so that we can set:

H(X,Y,Z,\cdots) = \mu(\tilde X \cup \tilde Y \cup \tilde Z \cup \cdots),
I(X;Y;Z;\cdots) = \mu(\tilde X \cap \tilde Y \cap \tilde Z \cap \cdots);

in order to define the (signed) measure over the whole σ-algebra. There is no single universally accepted definition for the multivariate mutual information, but the one that corresponds here to the measure of a set intersection is due to Fano. The definition is recursive. As a base case, the mutual information of a single random variable is defined to be its entropy: I(X) = H(X). Then for n \geq 2 we set

I(X_1; \cdots; X_n) = I(X_1; \cdots; X_{n-1}) - I(X_1; \cdots; X_{n-1}|X_n),

where the conditional mutual information is defined as

I(X_1; \cdots; X_{n-1}|X_n) = \mathbb E_{X_n}I((X_1|X_n); \cdots; (X_{n-1}|X_n)).

The first step in the recursion yields Shannon's definition I(X_1;X_2) = H(X_1) - H(X_1|X_2), since I(X_1) = H(X_1) and I(X_1|X_2) = H(X_1|X_2). It is interesting to note that the mutual information of three or more random variables can be negative as well as positive: let X and Y be two independent fair coin flips, and let Z be their exclusive or. Then I(X;Y) = 0 but I(X;Y|Z) = 1 bit, so I(X;Y;Z) = I(X;Y) - I(X;Y|Z) = -1 bit.
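
This calculation can be reproduced in a few lines of code (a minimal sketch assuming NumPy; the helper names are illustrative). It evaluates I(X;Y;Z) for the exclusive-or example as the inclusion–exclusion combination of joint entropies, i.e. as the measure of the triple intersection \mu(\tilde X \cap \tilde Y \cap \tilde Z):

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a pmf given as an array of probabilities."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint pmf of (X, Y, Z): X, Y independent fair coins and Z = X XOR Y,
# so each consistent triple (x, y, x ^ y) has probability 1/4.
p = np.zeros((2, 2, 2))
for x in (0, 1):
    for y in (0, 1):
        p[x, y, x ^ y] = 0.25

def marginal_entropy(sum_axes):
    """Entropy of the marginal obtained by summing out the given axes of p."""
    return entropy(p.sum(axis=sum_axes).ravel())

H_X, H_Y, H_Z = marginal_entropy((1, 2)), marginal_entropy((0, 2)), marginal_entropy((0, 1))
H_XY, H_XZ, H_YZ = marginal_entropy((2,)), marginal_entropy((1,)), marginal_entropy((0,))
H_XYZ = entropy(p.ravel())

# Inclusion-exclusion for the triple intersection of the information "sets":
I_XYZ = (H_X + H_Y + H_Z) - (H_XY + H_XZ + H_YZ) + H_XYZ
print(I_XYZ)   # -1.0, matching I(X;Y) - I(X;Y|Z) = 0 - 1 = -1 bit
```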

Many other variations are possible for three or more random variables: for example, I(X,Y;Z) is the mutual information between the joint variable (X,Y) and Z, and can be interpreted as \mu((\tilde X \cup \tilde Y) \cap \tilde Z). Many more complicated expressions can be built this way and still have meaning, e.g. I(X,Y;Z|W) or H(X,Z|W,Y).
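
For instance, continuing the exclusive-or example above, the identity I(X,Y;Z) = H(X,Y) + H(Z) - H(X,Y,Z) gives

I(X,Y;Z) = 2 + 1 - 2 = 1 bit,

which is exactly \mu((\tilde X \cup \tilde Y) \cap \tilde Z) = \mu(\tilde Z) = H(Z): since Z is completely determined by (X,Y), we have H(Z|X,Y) = 0, so the part of \tilde Z lying outside \tilde X \cup \tilde Y has measure zero.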

Other measures in information theory

Many of the formulas in information theory have separate versions for continuous and discrete cases, i.e. integrals for the continuous case and sums for the discrete case. These formulas can often be generalized and unified using the apparatus of measure theory. For discrete random variables, probability mass functions can be considered density functions with respect to the counting measure. When the integration is carried out accordingly, it amounts to ordinary discrete summation. In this way the same integral can be used for both discrete and continuous cases. Consider the formula for the differential entropy of a continuous random variable with probability density function f(x):

h(X) = -\int_X f(x) \log f(x) \,dx.

This can usually be taken to be

h(X) = -\int_X f(x) \log f(x) \,d\mu(x),

where μ is the Lebesgue measure. If instead X is discrete, f is a probability mass function, and ν is the counting measure, we can write:

H(X) = -\int_X f(x) \log f(x) \,d\nu(x) = -\sum_{x\in X} f(x) \log f(x).

The integral is identical to the continuous case except that a different measure is used. In both cases the probability density function f is the Radon–Nikodym derivative of the probability measure with respect to the measure against which the integral is taken.
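
As a small illustration (a sketch assuming NumPy and SciPy; the particular distributions are chosen only for concreteness), the same expression -\int f \log f is evaluated once against the counting measure, where it reduces to a sum, and once against the Lebesgue measure, where it is an ordinary integral:

```python
import numpy as np
from scipy.integrate import quad

# Discrete case: "integrating" -f log f against the counting measure
# is just the ordinary sum over the support of the pmf.
f_discrete = np.array([0.5, 0.25, 0.25])            # an illustrative pmf
H = -np.sum(f_discrete * np.log2(f_discrete))       # H(X) = 1.5 bits

# Continuous case: the same expression against the Lebesgue measure.
# Differential entropy of Uniform(0, 4), with density f(x) = 1/4 on [0, 4]:
f = lambda x: 0.25
h, _ = quad(lambda x: -f(x) * np.log2(f(x)), 0.0, 4.0)

print(H, h)   # 1.5 bits and log2(4) = 2.0 bits, respectively
```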

If \mathbb P is the probability measure on X, then the integral can also be taken directly with respect to \mathbb P:

h(X) = -\int_X \log \frac{\mathrm d \mathbb P}{\mathrm d\mu} \,d\mathbb P.

If instead of the underlying measure μ we take another probability measure \mathbb Q, we are led to the Kullback–Leibler divergence: let \mathbb P and \mathbb Q be probability measures over the same space. Then if \mathbb P is absolutely continuous with respect to \mathbb Q, written \mathbb P \ll \mathbb Q, the Radon–Nikodym derivative \frac{\mathrm d\mathbb P}{\mathrm d\mathbb Q} exists and the Kullback–Leibler divergence can be expressed in its full generality:

D_\mathrm{KL}(\mathbb P \| \mathbb Q) = \int_{\mathrm{supp}\,\mathbb P} \frac{\mathrm d\mathbb P}{\mathrm d\mathbb Q} \log \frac{\mathrm d\mathbb P}{\mathrm d\mathbb Q} \,d\mathbb Q = \int_{\mathrm{supp}\,\mathbb P} \log \frac{\mathrm d\mathbb P}{\mathrm d\mathbb Q} \,d\mathbb P,

where the integral runs over the support of \mathbb P. Note that we have dropped the negative sign: the Kullback–Leibler divergence is always non-negative due to Gibbs' inequality.
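
In the discrete case the Radon–Nikodym derivative \frac{\mathrm d\mathbb P}{\mathrm d\mathbb Q} is simply the ratio of the two probability mass functions, so the general formula reduces to the familiar sum. A minimal sketch (assuming NumPy; the two pmfs are illustrative, with the support of P contained in that of Q so that \mathbb P \ll \mathbb Q):

```python
import numpy as np

p = np.array([0.5, 0.25, 0.25])   # pmf of P
q = np.array([0.25, 0.25, 0.5])   # pmf of Q; supp P ⊆ supp Q, so P << Q

# D_KL(P || Q) = sum over supp P of p(x) * log( p(x) / q(x) ), here in bits.
mask = p > 0
d_kl = np.sum(p[mask] * np.log2(p[mask] / q[mask]))
print(d_kl)   # 0.25 bits; non-negative, as guaranteed by Gibbs' inequality
```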

References

  • Fazlollah M. Reza. An Introduction to Information Theory. New York: McGraw-Hill, 1961. Reprinted New York: Dover, 1994. ISBN 0-486-68210-2.
  • Sunil Srinivasa. A Review on Multivariate Mutual Information. Notre Dame EE-80653 Information Theory Tutorials, Fall 2005.