Jensen's inequality

From Wikipedia, the free encyclopedia

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906[1]. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below.

Statements

The classical form of Jensen's inequality involves several numbers and weights. The inequality can be stated quite generally using measure theory, and can be further generalized to its full strength in a probabilistic setting.

Finite form

For a real convex function φ, numbers x_i in its domain, and positive weights a_i, Jensen's inequality can be stated as:

\varphi\left(\frac{\sum a_{i} x_{i}}{\sum a_{i}}\right) \le \frac{\sum a_{i} \varphi (x_{i})}{\sum a_{i}};

and the inequality is clearly reversed if φ is concave.

As a particular case, if the weights a_i are all equal to unity, then

\varphi\left(\frac{\sum x_{i}}{n}\right) \le \frac{\sum \varphi (x_{i})}{n}.

For instance, the function log(x) is concave, so substituting \varphi(x)=\log(x) in the previous formula (with the inequality reversed accordingly) establishes (the logarithm of) the familiar arithmetic mean-geometric mean inequality:

\frac{x_1 + x_2 + \cdots + x_n}{n} \ge \sqrt[n]{x_1 x_2 \cdots x_n}.
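
As a quick numerical illustration, the finite form and the AM-GM inequality can be checked directly; this is a minimal sketch, and the sample values and weights below are arbitrary choices.

    # Numerical check of the finite form of Jensen's inequality with
    # phi(x) = -log(x) (convex), which is equivalent to the AM-GM inequality.
    # The sample values and weights below are arbitrary illustrations.
    import math

    x = [1.0, 4.0, 9.0, 16.0]
    a = [2.0, 1.0, 3.0, 2.0]           # positive weights, not necessarily summing to 1

    weighted_mean = sum(ai * xi for ai, xi in zip(a, x)) / sum(a)
    phi = lambda t: -math.log(t)        # convex on (0, infinity)

    lhs = phi(weighted_mean)
    rhs = sum(ai * phi(xi) for ai, xi in zip(a, x)) / sum(a)
    assert lhs <= rhs                   # Jensen's inequality, finite form

    # Unweighted case: arithmetic mean >= geometric mean
    arithmetic = sum(x) / len(x)
    geometric = math.prod(x) ** (1.0 / len(x))
    assert arithmetic >= geometric
    print(arithmetic, geometric)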

The variable x may, if required, be a function of another variable (or set of variables) t, so that x_i = g(t_i). All of this carries over directly to the general continuous case: the weights a_i are replaced by a non-negative integrable function f(x), such as a probability density, and the summations are replaced by integrals.

In measure-theoretic notation

Let (Ω, A, μ) be a measure space such that μ(Ω) = 1. If g is a real-valued function that is μ-integrable, and if φ is a measurable convex function on the real line, then:

\varphi\left(\int_{\Omega} g\, d\mu\right) \le \int_\Omega \varphi \circ g\, d\mu.

In probability-theory notation (real space)

The same result can be stated in a probability theory setting. Let (\Omega, \mathfrak{F},\mathbb{P}) be a probability space, X an integrable real-valued random variable and φ a measurable convex function. Then:

\varphi\left(\mathbb{E}\{X\}\right) \leq \mathbb{E}\{\varphi(X)\}.

In this probability setting, the measure μ plays the role of the probability \mathbb{P}, the integral with respect to μ that of the expected value \mathbb{E}, and the function g that of the random variable X.
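
A minimal simulation sketch of this statement; the normal distribution and the convex function \varphi(x)=x^2 below are arbitrary illustrative choices.

    # Monte Carlo illustration of phi(E[X]) <= E[phi(X)] for a convex phi.
    # The normal distribution and phi(x) = x**2 are arbitrary illustrative choices.
    import random

    random.seed(0)
    samples = [random.gauss(1.0, 2.0) for _ in range(100_000)]

    phi = lambda x: x * x               # convex
    mean_x = sum(samples) / len(samples)
    mean_phi = sum(phi(x) for x in samples) / len(samples)

    print(phi(mean_x), mean_phi)        # approx. 1.0 vs. approx. 5.0 (= 1 + 2**2)
    assert phi(mean_x) <= mean_phi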

In probability-theory notation (general)

More generally, let T be a real topological vector space, and X a T-valued integrable random variable. In this general setting, integrable means that for any element z in the dual space of T: \mathbb{E}|\langle z, X \rangle|<\infty, and there exists an element \mathbb{E}\{X\} in T, such that \langle z, \mathbb{E}\{X\}\rangle=\mathbb{E}\{\langle z, X \rangle\}. Then, for any measurable convex function φ and any sub-σ-algebra \mathfrak{G} of \mathfrak{F}:

\varphi\left(\mathbb{E}\{X|\mathfrak{G}\}\right) \leq  \mathbb{E}\{\varphi(X)|\mathfrak{G}\}.

Here \mathbb{E}\{\cdot|\mathfrak{G}\} denotes the conditional expectation with respect to the σ-algebra \mathfrak{G}. This general statement reduces to the previous ones when the topological vector space T is the real line and \mathfrak{G} is the trivial σ-algebra \{\emptyset, \Omega\}.
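
As a concrete finite illustration of the conditional form, the following sketch uses a hypothetical four-point probability space, the sub-σ-algebra generated by a two-block partition, X(ω) = ω and \varphi(x) = x^2; all of these are illustrative choices.

    # Conditional Jensen on a four-point probability space.
    # Omega = {1, 2, 3, 4} with equal probabilities; the sub-sigma-algebra G is
    # generated by the partition {1, 2} | {3, 4}; X(omega) = omega; phi(x) = x**2.
    omega = [1, 2, 3, 4]
    prob = {w: 0.25 for w in omega}
    partition = [{1, 2}, {3, 4}]
    phi = lambda x: x * x

    for block in partition:
        p_block = sum(prob[w] for w in block)
        cond_E_X = sum(w * prob[w] for w in block) / p_block          # E[X | G] on this block
        cond_E_phi = sum(phi(w) * prob[w] for w in block) / p_block   # E[phi(X) | G] on this block
        assert phi(cond_E_X) <= cond_E_phi
        print(block, phi(cond_E_X), cond_E_phi)   # {1,2}: 2.25 <= 2.5 ; {3,4}: 12.25 <= 12.5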

Proofs

A graphical "proof" of Jensen's inequality for the probabilistic case. The dashed curve along the X axis is the hypothetical distribution of X, while the dashed curve along the Y axis is the corresponding distribution of Y values. Note that the convex mapping Y(X) increasingly "stretches" the distribution for increasing values of X.

A proof of Jensen's inequality can be provided in several ways, and three different proofs corresponding to the different statements above will be offered. Before embarking on these mathematical derivations, however, it is worth analyzing an intuitive graphical argument based on the probabilistic case where X is a real number (see figure). Assuming a hypothetical distribution of X values, one can immediately identify the position of \mathbb{E}\{X\} and its image \varphi(\mathbb{E}\{X\}) in the graph. Noticing that for convex mappings Y=\varphi(X) the corresponding distribution of Y values is increasingly "stretched out" for increasing values of X, it is easy to see that the distribution of Y is broader than that of X in the interval corresponding to X > X_0 and narrower in X < X_0 for any X_0; in particular, this is also true for X_0 = \mathbb{E}\{ X \}. Consequently, in this picture the expectation of Y will always shift upwards with respect to the position of \varphi(\mathbb{E}\{ X \}), and this "proves" the inequality, i.e.

\mathbb{E}\{ Y(X) \} \geq Y(\mathbb{E}\{ X \} ),

the equality holding when \varphi(X) is not strictly convex, e.g. when it is a straight line.

The proofs below formalize this intuitive notion.

Proof 1 (using the finite form)

If λ1 and λ2 are two arbitrary positive real numbers such that λ1 + λ2 = 1, then convexity of \varphi implies

\varphi(\lambda_1 x_1+\lambda_2 x_2)\leq \lambda_1\,\varphi(x_1)+\lambda_2\,\varphi(x_2) for any x_1,\,x_2.

This can be easily generalized: if \lambda_1,\,\lambda_2,\ldots,\lambda_n are n positive real numbers such that \lambda_1+\lambda_2+\cdots+\lambda_n=1, then

\varphi(\lambda_1 x_1+\lambda_2 x_2+\cdots+\lambda_n x_n)\leq \lambda_1\,\varphi(x_1)+\lambda_2\,\varphi(x_2)+\cdots+\lambda_n\,\varphi(x_n),

for any x_1,\,x_2,\ldots,\,x_n. This finite form of Jensen's inequality can be proved by induction: by the convexity hypothesis, the statement is true for n = 2. Suppose it is also true for some n; one needs to prove it for n + 1. At least one of the λ_i is strictly positive, say λ_1; therefore, by the convexity inequality:

\varphi\left(\sum_{i=1}^{n+1}\lambda_i x_i\right)= \varphi\left(\lambda_1 x_1+(1-\lambda_1)\sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1} x_i\right)\leq \lambda_1\,\varphi(x_1)+(1-\lambda_1) \varphi\left(\sum_{i=2}^{n+1}\left( \frac{\lambda_i}{1-\lambda_1} x_i\right)\right).

Since \sum_{i=2}^{n+1} \frac{\lambda_i}{1-\lambda_1} =1, one can apply the induction hypothesis to the last term in the previous formula to obtain the result, namely the finite form of Jensen's inequality.
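
For concreteness, in the case n + 1 = 3 (assuming \lambda_1 < 1) the induction step reads:

\varphi(\lambda_1 x_1+\lambda_2 x_2+\lambda_3 x_3)\leq \lambda_1\,\varphi(x_1)+(1-\lambda_1)\,\varphi\left(\tfrac{\lambda_2}{1-\lambda_1}x_2+\tfrac{\lambda_3}{1-\lambda_1}x_3\right)\leq \lambda_1\,\varphi(x_1)+\lambda_2\,\varphi(x_2)+\lambda_3\,\varphi(x_3),

where the first inequality is the two-point convexity inequality and the second is the case n = 2 applied to the inner convex combination.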

In order to obtain the general inequality from this finite form, one needs to use a density argument. The finite form can be rewritten as:

\varphi\left(\int x\,d\mu_n(x) \right)\leq \int \varphi(x)\,d\mu_n(x),

where μn is a measure given by an arbitrary convex combination of Dirac deltas:

\mu_n=\sum_{i=1}^n \lambda_i \delta_{x_i}.

Since convex functions are continuous, and since convex combinations of Dirac deltas are weakly dense in the set of probability measures (as can be easily verified), the general statement is obtained simply by a limiting procedure.

Proof 2 (measure-theoretic notation)

Let g be a real-valued μ-integrable function on a measure space Ω, and let φ be a convex function on the real numbers. Define the right-hand derivative of φ at x as

\varphi^\prime(x):=\lim_{t\to0^+}\frac{\varphi(x+t)-\varphi(x)}{t}.

Since φ is convex, the quotient on the right-hand side is decreasing as t approaches 0 from the right, and bounded below by any term of the form

\frac{\varphi(x+t)-\varphi(x)}{t}

where t < 0; therefore, the limit always exists.

Now, let us define the following:

x_0:=\int_\Omega g\, d\mu,
a:=\varphi^\prime(x_0),
b:=\varphi(x_0)-x_0\varphi^\prime(x_0).

Then for all x, ax+b\leq\varphi(x). To see this, take x > x_0, and define t = x − x_0 > 0. Then,

\varphi^\prime(x_0)\leq\frac{\varphi(x_0+t)-\varphi(x_0)}{t}.

Therefore,

\varphi^\prime(x_0)(x-x_0)+\varphi(x_0)\leq\varphi(x)

as desired. The case for x < x_0 is proven similarly, and clearly ax_0+b=\varphi(x_0).

φ(x_0) can then be rewritten as

ax_0+b=a\left(\int_\Omega g\,d\mu\right)+b.

But since μ(Ω) = 1, then for every real number k we have

\int_\Omega k\,d\mu=k.

In particular,

a\left(\int_\Omega g\,d\mu\right)+b=\int_\Omega(ag+b)\,d\mu\leq\int_\Omega\varphi\circ g\,d\mu.
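
The following sketch illustrates the supporting-line argument numerically; the convex function \varphi(x)=e^x, the uniform measure on [0, 1] and g(x) = x are hypothetical choices made only for illustration.

    # Numerical illustration of Proof 2: the affine function a*x + b supports the
    # convex function phi at x0 = integral of g dmu, so integrating the pointwise
    # inequality a*g + b <= phi(g) against mu reproduces Jensen's inequality.
    # phi(x) = exp(x), mu uniform on [0, 1] and g(x) = x are illustrative choices.
    import math

    phi = math.exp
    dphi = math.exp                      # right-hand derivative of exp is exp itself

    n = 100_000
    xs = [(i + 0.5) / n for i in range(n)]   # midpoint rule on [0, 1], mu uniform
    g = lambda x: x

    x0 = sum(g(x) for x in xs) / n       # integral of g dmu = 1/2
    a = dphi(x0)
    b = phi(x0) - x0 * dphi(x0)

    assert all(a * g(x) + b <= phi(g(x)) + 1e-12 for x in xs)   # supporting line
    lhs = phi(x0)                                    # phi(integral of g dmu)
    rhs = sum(phi(g(x)) for x in xs) / n             # integral of phi(g) dmu
    assert lhs <= rhs
    print(lhs, rhs)                                  # exp(0.5) ~ 1.6487 vs e - 1 ~ 1.7183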

Proof 3 (general inequality in probabilistic notation)

Let X be an integrable random variable taking values in a real topological vector space T. Since \varphi:T \to \mathbb{R} is convex, for any x,y \in T, the quantity

\frac{\varphi(x+\theta\,y)-\varphi(x)}{\theta},

is decreasing as θ approaches 0+. In particular, the subdifferential of \varphi evaluated at x in the direction y is well defined, and is given by:

(D\varphi)(x)\cdot y:=\lim_{\theta \downarrow 0} \frac{\varphi(x+\theta\,y)-\varphi(x)}{\theta}=\inf_{\theta > 0} \frac{\varphi(x+\theta\,y)-\varphi(x)}{\theta}.

It is easily seen that the subdifferential is linear in y and, since the infimum taken on the right-hand side of the previous formula is no larger than the value of the same term at θ = 1, one gets:

\varphi(x)\leq \varphi(x+y)-(D\varphi)(x)\cdot y.

In particular, for an arbitrary sub-σ-algebra \mathfrak{G} we can evaluate the last inequality when x=\mathbb{E}\{X|\mathfrak{G}\},\,y=X-\mathbb{E}\{X|\mathfrak{G}\} to obtain:

\varphi(\mathbb{E}\{X|\mathfrak{G}\})\leq \varphi(X)-(D\varphi)(\mathbb{E}\{X|\mathfrak{G}\})\cdot (X-\mathbb{E}\{X|\mathfrak{G}\}).

Now, taking the expectation conditional on \mathfrak{G} on both sides of the previous expression, we get the result, since:

\mathbb{E}\{\left[(D\varphi)(\mathbb{E}\{X|\mathfrak{G}\})\cdot (X-\mathbb{E}\{X|\mathfrak{G}\})\right]|\mathfrak{G}\}=(D\varphi)(\mathbb{E}\{X|\mathfrak{G}\})\cdot \mathbb{E}\{ \left( X-\mathbb{E}\{X|\mathfrak{G}\} \right) |\mathfrak{G}\}=0,

by the linearity of the subdifferential in the y variable, and well-known properties of the conditional expectation.

Applications and special cases

Form involving a probability density function

Suppose Ω is a measurable subset of the real line and f(x) is a non-negative function such that

\int_{-\infty}^\infty f(x)\,dx = 1.

In probabilistic language, f is a probability density function.

Then Jensen's inequality becomes the following statement about convex integrals:

If g is any real-valued measurable function and φ is convex over the range of g, then

\varphi\left(\int_{-\infty}^\infty g(x)f(x)\, dx\right) \le \int_{-\infty}^\infty \varphi(g(x)) f(x)\, dx.

If g(x) = x, then this form of the inequality reduces to a commonly used special case:

\varphi\left(\int_{-\infty}^\infty x\, f(x)\, dx\right) \le \int_{-\infty}^\infty \varphi(x)\,f(x)\, dx.
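
A hedged numerical sketch of this special case, taking the exponential density f(x) = e^{-x} on [0, ∞) and \varphi(x) = x^2 as illustrative choices, so that both sides are known in closed form (1 and 2 respectively):

    # Jensen's inequality in density form, checked by simple numerical quadrature.
    # f(x) = exp(-x) on [0, inf), phi(x) = x**2: phi(E[X]) = 1 <= E[X**2] = 2.
    import math

    f = lambda x: math.exp(-x)
    phi = lambda x: x * x

    # Midpoint-rule quadrature, truncating the integral at x = 50.
    n, upper = 200_000, 50.0
    h = upper / n
    xs = [(i + 0.5) * h for i in range(n)]

    mean = sum(x * f(x) for x in xs) * h             # integral of x f(x) dx ~ 1
    mean_phi = sum(phi(x) * f(x) for x in xs) * h    # integral of x**2 f(x) dx ~ 2

    print(phi(mean), mean_phi)                       # ~1.0 <= ~2.0
    assert phi(mean) <= mean_phi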

Alternative finite form

If Ω is some finite set \{x_1,x_2,\ldots,x_n\}, and if μ is a probability measure on Ω assigning mass λ_i to each point x_i, then the general form reduces to a statement about sums:

\varphi\left(\sum_{i=1}^{n} g(x_i)\lambda_i \right) \le \sum_{i=1}^{n} \varphi(g(x_i))\lambda_i,

provided that \lambda_1 + \lambda_2 + \cdots + \lambda_n = 1, \lambda_i \ge 0.

There is also an infinite discrete form.

Statistical physics

Jensen's inequality is of particular importance in statistical physics when the convex function is an exponential, giving:

e^{\langle X \rangle} \leq \left\langle e^X \right\rangle,

where angle brackets denote expected values with respect to some probability distribution of the random variable X.

The proof in this case is very simple (cf. Chandler, Sec. 5.5). The desired inequality follows directly, by writing

\left\langle e^X \right\rangle = e^{\langle X \rangle} \left\langle e^{X - \langle X \rangle} \right\rangle

and then applying the inequality

e^X \geq 1+X \,

to the final exponential.
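
Explicitly, applying e^X \geq 1+X to the centred variable X - \langle X \rangle and averaging gives

\left\langle e^{X - \langle X \rangle} \right\rangle \geq \left\langle 1 + X - \langle X \rangle \right\rangle = 1,

and multiplying both sides by e^{\langle X \rangle} > 0 yields the stated inequality.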

Information theory

If p(x) is the true probability distribution for x, and q(x) is another distribution, then applying Jensen's inequality for the random variable Y(x) = q(x)/p(x) and the function φ(y) = −log(y) gives

\Bbb{E}\{\varphi(Y)\} \ge \varphi(\Bbb{E}\{Y\})
\Rightarrow  \int p(x) \log \frac{p(x)}{q(x)}dx  \ge  - \log \int p(x) \frac{q(x)}{p(x)}dx
\Rightarrow \int p(x) \log \frac{p(x)}{q(x)}dx \ge 0
\Rightarrow  - \int p(x) \log q(x) \ge - \int p(x) \log p(x),

a result called Gibbs' inequality.

It shows that the average message length is minimised when codes are assigned on the basis of the true probabilities p rather than any other distribution q. The non-negative quantity \int p(x) \log \frac{p(x)}{q(x)}\,dx is called the Kullback-Leibler divergence of q from p.
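
A minimal discrete sketch of Gibbs' inequality; the two distributions below are arbitrary illustrative choices, and natural logarithms are used throughout.

    # Gibbs' inequality for discrete distributions: KL(p || q) >= 0, with equality
    # iff p == q. The distributions below are arbitrary illustrative choices.
    import math

    p = [0.5, 0.25, 0.25]
    q = [0.25, 0.25, 0.5]

    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

    # Equivalently, the average code length under q is at least that under p:
    cross_entropy = -sum(pi * math.log(qi) for pi, qi in zip(p, q))
    entropy = -sum(pi * math.log(pi) for pi in p)

    assert kl >= 0
    assert abs(kl - (cross_entropy - entropy)) < 1e-12
    print(kl)    # ~0.1733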

Rao-Blackwell theorem

Main article: Rao-Blackwell theorem

If L is a convex function, then from Jensen's inequality we get

L(\Bbb{E}\{\delta(X)\}) \le \Bbb{E}\{L(\delta(X))\} \quad \Rightarrow \quad \Bbb{E}\{L(\Bbb{E}\{\delta(X)\})\} \le \Bbb{E}\{L(\delta(X))\}.

So if δ(X) is some estimator of an unobserved parameter θ given a vector of observables X, and if T(X) is a sufficient statistic for θ, then an improved estimator, in the sense of having a smaller expected loss L, can be obtained by calculating

\delta_{1}(X) = \Bbb{E}_{\theta}\{\delta(X') \,|\, T(X')= T(X)\},

the expected value of δ with respect to θ, taken over all possible vectors of observations X compatible with the same value of T(X) as that observed.

This result is known as the Rao-Blackwell theorem.
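
As an illustrative simulation (a sketch with hypothetical choices: Bernoulli(p) observations, the crude estimator δ(X) = X_1, the sufficient statistic T(X) = ΣX_i, and squared-error loss), Rao-Blackwellization here yields the sample mean, whose expected loss is visibly smaller:

    # Rao-Blackwell sketch: estimating p from n Bernoulli(p) observations.
    # Crude estimator delta(X) = X_1 (first observation). Conditioning on the
    # sufficient statistic T(X) = sum(X) gives E[X_1 | T] = T / n, the sample mean.
    # Squared-error loss; all choices here are illustrative.
    import random

    random.seed(0)
    p_true, n, trials = 0.3, 10, 50_000

    loss_crude = loss_rb = 0.0
    for _ in range(trials):
        x = [1 if random.random() < p_true else 0 for _ in range(n)]
        delta = x[0]                     # crude estimator
        delta_rb = sum(x) / n            # Rao-Blackwellized estimator E[X_1 | T]
        loss_crude += (delta - p_true) ** 2
        loss_rb += (delta_rb - p_true) ** 2

    print(loss_crude / trials, loss_rb / trials)   # ~0.21 vs ~0.021
    assert loss_rb <= loss_crude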

See also

  • Flaw of averages

Footnotes

  1. Jensen, J. L. W. V. (1906). "Sur les fonctions convexes et les inégalités entre les valeurs moyennes". Acta Mathematica 30 (1): 175–193.