Rényi entropy

In information theory, the Rényi entropy generalizes the Shannon entropy, the Hartley entropy, the min-entropy, and the collision entropy. Entropies quantify the diversity, uncertainty, or randomness of a system. The Rényi entropy is named after Alfréd Rényi.[1]

The Rényi entropy is important in ecology and statistics as an index of diversity. The Rényi entropy is also important in quantum information, where it can be used as a measure of entanglement. In the Heisenberg XY spin chain model, the Rényi entropy as a function of α can be calculated explicitly by virtue of the fact that it is an automorphic function with respect to a particular subgroup of the modular group.[2][3] In theoretical computer science, the min-entropy is used in the context of randomness extractors.

Definition

The Rényi entropy of order \alpha , where \alpha \geq 0 and \alpha \neq 1, is defined as

H_\alpha(X) = \frac{1}{1-\alpha} \log\Bigg( \sum_{i=1}^{n} p_i^\alpha \Bigg).[1]

Here, X is a discrete random variable with possible outcomes 1, 2, ..., n and corresponding probabilities p_i \doteq \Pr(X=i) for i = 1, \dots, n, and the logarithm is base 2. If the probabilities are p_i = 1/n for all i = 1, \dots, n, then all the Rényi entropies of the distribution are equal: H_\alpha(X) = \log n. In general, for all discrete random variables X, H_\alpha(X) is a non-increasing function in \alpha.
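
For illustration, the definition can be evaluated directly in Python; the following is a minimal sketch (the function name renyi_entropy is chosen here for illustration and is not taken from any particular library):

    import math

    def renyi_entropy(p, alpha):
        """Renyi entropy of order alpha (base-2 logarithm) of a discrete distribution p, for alpha >= 0, alpha != 1."""
        if alpha == 1:
            raise ValueError("alpha = 1 is the Shannon limit; treat it separately")
        # Zero-probability outcomes are skipped so that alpha = 0 counts only the support.
        return math.log2(sum(pi ** alpha for pi in p if pi > 0)) / (1 - alpha)

    # Uniform distribution on 4 outcomes: every order gives log2(4) = 2 bits.
    print(renyi_entropy([0.25, 0.25, 0.25, 0.25], 2))    # 2.0
    # Skewed distribution: H_alpha decreases as alpha grows.
    print(renyi_entropy([0.5, 0.25, 0.125, 0.125], 0.5)) # ~1.87
    print(renyi_entropy([0.5, 0.25, 0.125, 0.125], 2))   # ~1.54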

Applications often exploit the following relation between the Rényi entropy and the p-norm:

H_\alpha(X) = \frac{\alpha}{1-\alpha} \log\left( \|X\|_\alpha \right).

Here, the discrete probability distribution X is interpreted as a vector in \mathbb{R}^n with X_i = p_i \geq 0 and \sum_{i=1}^{n} X_i = 1.
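
This relation can be spot-checked numerically; the short sketch below (illustrative only) compares both sides for one example distribution:

    import math

    p = [0.5, 0.25, 0.125, 0.125]
    alpha = 3.0

    # Left-hand side: Renyi entropy of order alpha (base-2 logarithm).
    lhs = math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

    # Right-hand side: alpha/(1 - alpha) times the log of the alpha-norm of p.
    alpha_norm = sum(pi ** alpha for pi in p) ** (1 / alpha)
    rhs = (alpha / (1 - alpha)) * math.log2(alpha_norm)

    print(abs(lhs - rhs) < 1e-12)  # True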

The Rényi entropy for any \alpha \geq 0 is Schur concave.

Special cases of the Rényi entropy

As \alpha approaches zero, the Rényi entropy increasingly weighs all possible events more equally, regardless of their probabilities. In the limit for \alpha \to 0, the Rényi entropy is just the logarithm of the size of the support of X. The limit for \alpha \to 1 equals the Shannon entropy, which has special properties. As \alpha approaches infinity, the Rényi entropy is increasingly determined by the events of highest probability.

Hartley entropy

Provided the probabilities are nonzero,[4] H_{0} is the logarithm of the cardinality of X, sometimes called the Hartley entropy of X:

H_0(X) = \log n = \log|X|.

Shannon entropy

In the limit \alpha \to 1, H_\alpha converges to the Shannon entropy, which can be computed by taking the limit with L'Hôpital's rule:[5]

H_1(X) = -\sum_{i=1}^{n} p_i \log p_i.
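
The convergence can be observed numerically by evaluating H_\alpha at orders close to 1; the sketch below (illustrative only) uses the distribution (1/2, 1/4, 1/8, 1/8), whose Shannon entropy is 1.75 bits:

    import math

    p = [0.5, 0.25, 0.125, 0.125]
    shannon = -sum(pi * math.log2(pi) for pi in p)   # 1.75 bits

    def renyi(p, alpha):
        return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

    for alpha in (0.9, 0.99, 0.999, 1.001, 1.01, 1.1):
        print(alpha, renyi(p, alpha))   # approaches 1.75 from above (alpha < 1) and below (alpha > 1)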

Collision entropy

Collision entropy, sometimes just called "Rényi entropy," refers to the case \alpha =2,

H_2(X) = -\log \sum_{i=1}^{n} p_i^2 = -\log P(X=Y)

where X and Y are independent and identically distributed.
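
The identity with the collision probability P(X = Y) can be checked by direct enumeration over pairs of outcomes (illustrative sketch):

    import math

    p = [0.5, 0.25, 0.125, 0.125]

    # Collision entropy from the definition (base-2 logarithm).
    h2 = -math.log2(sum(pi ** 2 for pi in p))

    # Probability that two independent draws X, Y from p land on the same outcome.
    p_collision = sum(pi * pj for i, pi in enumerate(p)
                      for j, pj in enumerate(p) if i == j)

    print(h2, -math.log2(p_collision))  # both ~1.54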

Min-entropy

In the limit as \alpha \rightarrow \infty , the Rényi entropy H_{\alpha } converges to the min-entropy H_{\infty }:

H_\infty(X) \doteq \min_i (-\log p_i) = -(\max_i \log p_i) = -\log \max_i p_i.

Equivalently, the min-entropy H_\infty(X) is the largest real number b such that all events occur with probability at most 2^{-b}.
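
In code, the min-entropy is determined entirely by the most probable outcome; a minimal sketch:

    import math

    p = [0.5, 0.25, 0.125, 0.125]

    h_min = -math.log2(max(p))   # 1.0 bit, since the largest probability is 2**(-1)

    # Equivalently: the largest b such that every outcome has probability at most 2**(-b).
    print(h_min, all(pi <= 2 ** (-h_min) for pi in p))   # 1.0 True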

The name min-entropy stems from the fact that it is the smallest entropy measure in the family of Rényi entropies. In this sense, it is the strongest way to measure the information content of a discrete random variable. In particular, the min-entropy is never larger than the Shannon entropy.

The min-entropy has important applications for randomness extractors in theoretical computer science: Extractors are able to extract randomness from random sources that have a large min-entropy; merely having a large Shannon entropy does not suffice for this task.

Inequalities between different values of α

That H_\alpha is non-increasing in \alpha can be proven by differentiation,[6] as

-\frac{dH_\alpha}{d\alpha} = \frac{1}{(1-\alpha)^2} \sum_{i=1}^{n} z_i \log(z_i / p_i),

which is proportional to the Kullback–Leibler divergence (which is always non-negative), where z_i = p_i^\alpha / \sum_{j=1}^{n} p_j^\alpha.

In particular cases, inequalities can also be proven by Jensen's inequality:

\log n = H_0 \geq H_1 \geq H_2 \geq H_\infty.[7][8]

For values of \alpha >1, inequalities in the other direction also hold. In particular, we have

H_2 \leq 2 H_\infty.[9][citation needed]
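
Both orderings are easy to spot-check numerically; the sketch below (illustrative only) uses one example distribution, and any other can be substituted:

    import math

    p = [0.4, 0.3, 0.2, 0.1]

    h0 = math.log2(len(p))                       # Hartley entropy
    h1 = -sum(pi * math.log2(pi) for pi in p)    # Shannon entropy
    h2 = -math.log2(sum(pi ** 2 for pi in p))    # collision entropy
    h_inf = -math.log2(max(p))                   # min-entropy

    print(h0 >= h1 >= h2 >= h_inf)   # True
    print(h2 <= 2 * h_inf)           # True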

On the other hand, the Shannon entropy H_{1} can be arbitrarily high for a random variable X that has a constant min-entropy.[citation needed]
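
A standard construction (sketched here for illustration; it is not spelled out in the text above) puts probability 1/2 on one outcome and spreads the remaining 1/2 uniformly over 2^k further outcomes: the min-entropy stays at 1 bit while the Shannon entropy is 1 + k/2 bits.

    import math

    def split_distribution(k):
        """One outcome with probability 1/2, plus 2**k outcomes of probability 2**-(k+1) each."""
        p = [0.5] + [2 ** -(k + 1)] * (2 ** k)
        h1 = -sum(pi * math.log2(pi) for pi in p)   # Shannon entropy, equals 1 + k/2
        h_inf = -math.log2(max(p))                  # min-entropy, always 1 bit
        return h1, h_inf

    for k in (1, 4, 8, 16):
        print(k, split_distribution(k))   # Shannon entropy grows without bound; min-entropy stays 1.0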

Rényi divergence

As well as the absolute Rényi entropies, Rényi also defined a spectrum of divergence measures generalising the Kullback–Leibler divergence.

The Rényi divergence of order α, where α > 0, of a distribution P from a distribution Q is defined to be:

D_\alpha(P \| Q) = \frac{1}{\alpha - 1} \log\Bigg( \sum_{i=1}^{n} \frac{p_i^\alpha}{q_i^{\alpha - 1}} \Bigg) = \frac{1}{\alpha - 1} \log \sum_{i=1}^{n} p_i^\alpha q_i^{1-\alpha}.

Like the Kullback–Leibler divergence, the Rényi divergences are non-negative for α > 0. This divergence is also known as the alpha-divergence (\alpha-divergence).

Some special cases:

D_0(P \| Q) = -\log Q(\{i : p_i > 0\}) : minus the log of the probability under Q that p_i > 0;
D_{1/2}(P \| Q) = -2 \log \sum_{i=1}^{n} \sqrt{p_i q_i} : minus twice the logarithm of the Bhattacharyya coefficient;
D_1(P \| Q) = \sum_{i=1}^{n} p_i \log \frac{p_i}{q_i} : the Kullback–Leibler divergence;
D_2(P \| Q) = \log \Big\langle \frac{p_i}{q_i} \Big\rangle : the log of the expected ratio of the probabilities;
D_\infty(P \| Q) = \log \sup_i \frac{p_i}{q_i} : the log of the maximum ratio of the probabilities.
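
For illustration, the divergence and a few of its special cases can be evaluated with a short Python sketch (the function name renyi_divergence is chosen here and is not taken from any library):

    import math

    def renyi_divergence(p, q, alpha):
        """Renyi divergence of order alpha (base-2 logarithm) of p from q, for alpha > 0, alpha != 1."""
        return math.log2(sum(pi ** alpha * qi ** (1 - alpha)
                             for pi, qi in zip(p, q) if pi > 0)) / (alpha - 1)

    p = [0.5, 0.3, 0.2]
    q = [0.2, 0.3, 0.5]

    # alpha = 1/2: minus twice the log of the Bhattacharyya coefficient.
    bc = sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))
    print(renyi_divergence(p, q, 0.5), -2 * math.log2(bc))                                  # equal

    # alpha = 2: log of the expected ratio p_i/q_i under P.
    print(renyi_divergence(p, q, 2), math.log2(sum(pi * pi / qi for pi, qi in zip(p, q))))  # equal

    # alpha close to 1 approaches the Kullback-Leibler divergence.
    kl = sum(pi * math.log2(pi / qi) for pi, qi in zip(p, q))
    print(renyi_divergence(p, q, 0.999), kl)                                                # nearly equal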

Why α=1 is special

The value α = 1, which gives the Shannon entropy and the Kullback–Leibler divergence, is special because it is only at α=1 that the chain rule of conditional probability holds exactly:

H(A,X) = H(A) + \mathbb{E}_{a \sim A}\big[ H(X|A=a) \big]

for the absolute entropies, and

D_{\mathrm{KL}}(p(x|a)p(a) \| m(x,a)) = D_{\mathrm{KL}}(p(a) \| m(a)) + \mathbb{E}_{p(a)}\{ D_{\mathrm{KL}}(p(x|a) \| m(x|a)) \},

for the relative entropies.

The latter in particular means that if we seek a distribution p(x,a) which minimizes the divergence from some underlying prior measure m(x,a), and we acquire new information which only affects the distribution of a, then the conditional distribution p(x|a) remains m(x|a), unchanged.
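
The sketch below (illustrative only, with an arbitrary joint distribution over two binary variables) verifies the chain rule for the Shannon case and shows that the same decomposition fails for another order, here \alpha = 2:

    import math

    # Joint distribution p(a, x) over A in {0, 1} and X in {0, 1}.
    joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.3}

    def shannon(dist):
        return -sum(v * math.log2(v) for v in dist.values() if v > 0)

    def renyi2(dist):
        return -math.log2(sum(v ** 2 for v in dist.values()))

    p_a = {a: sum(v for (aa, _), v in joint.items() if aa == a) for a in (0, 1)}
    cond = {a: {x: joint[(a, x)] / p_a[a] for x in (0, 1)} for a in (0, 1)}

    # Shannon: H(A, X) = H(A) + E_a[H(X | A = a)] holds exactly.
    lhs = shannon(joint)
    rhs = shannon(p_a) + sum(p_a[a] * shannon(cond[a]) for a in (0, 1))
    print(abs(lhs - rhs) < 1e-12)    # True

    # Order 2: the analogous decomposition does not hold in general.
    lhs2 = renyi2(joint)
    rhs2 = renyi2(p_a) + sum(p_a[a] * renyi2(cond[a]) for a in (0, 1))
    print(abs(lhs2 - rhs2) < 1e-12)  # False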

The other Rényi divergences satisfy the criteria of being positive and continuous; being invariant under 1-to-1 co-ordinate transformations; and of combining additively when A and X are independent, so that if p(A,X) = p(A)p(X), then

H_\alpha(A,X) = H_\alpha(A) + H_\alpha(X)

and

D_\alpha(P(A)P(X) \| Q(A)Q(X)) = D_\alpha(P(A) \| Q(A)) + D_\alpha(P(X) \| Q(X)).
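
Additivity under independence is straightforward to verify numerically for any order (illustrative sketch):

    import math

    def renyi(p, alpha):
        return math.log2(sum(pi ** alpha for pi in p)) / (1 - alpha)

    p_a = [0.7, 0.3]
    p_x = [0.5, 0.25, 0.25]
    joint = [pa * px for pa in p_a for px in p_x]   # product (independent) distribution

    for alpha in (0.5, 2, 5):
        print(abs(renyi(joint, alpha) - (renyi(p_a, alpha) + renyi(p_x, alpha))) < 1e-12)  # True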

The stronger properties of the α = 1 quantities, which allow the definition of conditional information and mutual information from communication theory, may be very important in other applications, or entirely unimportant, depending on those applications' requirements.

Exponential families

The Rényi entropies and divergences for an exponential family admit simple expressions (Nielsen & Nock, 2011)

H_\alpha(p_F(x;\theta)) = \frac{1}{1-\alpha} \left( F(\alpha\theta) - \alpha F(\theta) + \log E_p\left[ e^{(\alpha-1)k(x)} \right] \right)

and

D_\alpha(p : q) = \frac{J_{F,\alpha}(\theta : \theta')}{1-\alpha}

where

J_{F,\alpha}(\theta : \theta') = \alpha F(\theta) + (1-\alpha) F(\theta') - F(\alpha\theta + (1-\alpha)\theta')

is a Jensen difference divergence.
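
As an illustration, the closed form can be checked against the definition for the Bernoulli family in its natural parameterization (an assumption made here for the example: log-normalizer F(\theta) = \log(1 + e^\theta) and carrier term k(x) = 0):

    import math

    def F(theta):
        """Log-normalizer of the Bernoulli family: p(x; theta) = exp(theta * x - F(theta)), x in {0, 1}."""
        return math.log(1 + math.exp(theta))

    def renyi_div_direct(p, q, alpha):
        """Renyi divergence (natural logarithm) between Bernoulli(p) and Bernoulli(q), from the definition."""
        s = p ** alpha * q ** (1 - alpha) + (1 - p) ** alpha * (1 - q) ** (1 - alpha)
        return math.log(s) / (alpha - 1)

    def renyi_div_expfam(theta, theta_prime, alpha):
        """Closed form via the Jensen difference J_{F,alpha}(theta : theta') / (1 - alpha)."""
        J = (alpha * F(theta) + (1 - alpha) * F(theta_prime)
             - F(alpha * theta + (1 - alpha) * theta_prime))
        return J / (1 - alpha)

    p, q, alpha = 0.8, 0.3, 0.6
    theta = math.log(p / (1 - p))         # natural parameter of Bernoulli(p)
    theta_prime = math.log(q / (1 - q))   # natural parameter of Bernoulli(q)

    print(renyi_div_direct(p, q, alpha))                # ~0.3466
    print(renyi_div_expfam(theta, theta_prime, alpha))  # same value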

Notes

  1. Rényi (1961)
  2. Franchini (2008)
  3. Its (2010)
  4. RFC 4086, page 6
  5. Bromiley, Thacker & Bouhova-Thacker (2004)
  6. Beck (1993)
  7. H_1 \geq H_2 holds because \sum_{i=1}^{n} p_i \log p_i \leq \log \sum_{i=1}^{n} p_i^2.
  8. H_\infty \leq H_2 holds because \log \sum_{i=1}^{n} p_i^2 \leq \log \sup_i p_i \left( \sum_{i=1}^{n} p_i \right) = \log \sup_i p_i.
  9. H_2 \leq 2 H_\infty holds because \log \sum_{i=1}^{n} p_i^2 \geq \log \sup_i p_i^2 = 2 \log \sup_i p_i.

References

  • Beck, Christian; Schlögl, Friedrich (1993). Thermodynamics of chaotic systems: an introduction. Cambridge University Press. ISBN 0521433673.
  • Nielsen, F.; Boltz, S. (2010). "The Burbea-Rao and Bhattacharyya centroids". arXiv:1004.5049.
  • Rosso, O. A. (2006). "EEG analysis using wavelet-based information tools". Journal of Neuroscience Methods. 153: 163–182.