Squeeze theorem

From Wikipedia, the free encyclopedia

In calculus, the squeeze theorem (also known as the pinching theorem or the sandwich theorem, sometimes the squeeze lemma) is a theorem regarding the limit of a function. The theorem asserts that if two functions approach the same limit at a point, and if a third function is "squeezed" ("pinched", "sandwiched") between those functions, then the third function also approaches that limit at that point.

The squeeze theorem is a technical result which is very important in proofs in calculus and mathematical analysis. It is typically used to confirm the limit of a function via comparison with two other functions whose limits are known or easily computed. It was first used geometrically by the mathematicians Archimedes and Eudoxus in an effort to compute π, and was formulated in modern terms by Gauss.

In Italian and Russian, the squeeze theorem is also known as the two carabinieri theorem or the two militsioners theorem, respectively. The story is that if two police officers are escorting a prisoner between them, and both officers go to the cell, then the prisoner must go there as well.

The sandwich/squeeze theorem has no relation to the ham sandwich theorem.

Statement

[Figure: An example of a squeezed function]

The squeeze theorem is formally stated as follows.

Let I be an interval containing the point a. Let f, g, and h be functions defined on I, except possibly at a itself. Suppose that for every x in I not equal to a, we have:

g(x) \leq f(x) \leq h(x)

and also suppose that:

\lim_{x \to a} g(x) = \lim_{x \to a} h(x) = L.

Then \lim_{x \to a} f(x) = L.

  • The functions g(x) and h(x) are said to be lower and upper bounds (respectively) of f(x).
  • Here a is not required to lie in the interior of I. Indeed, if a is an endpoint of I, then the above limits are left- or right-hand limits.
  • A similar statement holds for infinite intervals: for example, if I = (0, ∞), then the conclusion holds, taking the limits as x → ∞.
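
As an illustration (a standard example, not part of the formal statement above), the theorem can be used to show that

\lim_{x \to 0} x^2 \sin\left(\frac{1}{x}\right) = 0.

Although sin(1/x) oscillates and has no limit as x → 0, for every x ≠ 0 we have

-x^2 \leq x^2 \sin\left(\frac{1}{x}\right) \leq x^2,

and both bounding functions g(x) = −x² and h(x) = x² tend to 0 as x → 0, so the squeezed function tends to 0 as well.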

Proof

The main idea behind this proof is to consider the relative differences between the functions f, g, and h. This has the effect of making the lower bound identically 0, and all the functions non-negative. This greatly simplifies the details of the proof. The general case then follows algebraically.

To begin the proof, assume all the hypotheses and notation as given in the statement of the theorem above. We first prove the special case where g(x) = 0 for all x and L = 0. In this case:

\lim_{x \to a} h(x) = 0.

Let ε > 0 be any fixed positive number. By the definition of the limit of a function, there is a δ > 0 such that:

\mbox{if }0 < |x - a| < \delta, \mbox{ then }|h(x)| < \varepsilon.

For any x in I not equal to a:

0 = g(x) \leq f(x) \leq h(x)

so that:

|f(x)| \leq |h(x)|.

We conclude that:

\mbox{if }0 < |x - a| < \delta, \mbox{ then }|f(x)| \leq |h(x)| < \varepsilon.

This proves that:

\lim_{x \to a} f(x) = 0 = L.
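
(For a concrete instance of this special case, offered here only as an illustration and not part of the original argument, take a = 0, g(x) = 0, h(x) = |x|, and f(x) = |x sin(1/x)| for x ≠ 0. Then 0 ≤ f(x) ≤ h(x) and h(x) → 0 as x → 0; in the argument above the choice δ = ε works, since 0 < |x − 0| < δ forces |f(x)| ≤ |x| < ε, and hence f(x) → 0.)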

This completes the proof for the special case. Now, we prove the general theorem by letting g and L be arbitrary. For any x in I not equal to a, we have:

g(x) \leq f(x) \leq h(x).

Subtracting g(x) from each expression:

0 \leq f(x) - g(x) \leq h(x) - g(x).

As x \rightarrow a, \, h(x) \rightarrow L and g(x) \rightarrow L, so that:

h(x) - g(x) \rightarrow L - L = 0.

The special case now shows that f(x) - g(x) \rightarrow 0. We conclude that:

f(x) = (f(x) - g(x)) + g(x) \rightarrow 0 + L = L.

This completes the proof. Q.E.D.
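
A classical application of the theorem, closely related to the geometric arguments of Archimedes mentioned above, is the limit

\lim_{x \to 0} \frac{\sin x}{x} = 1.

For 0 < |x| < π/2, comparing areas in the unit circle gives the standard bounds

\cos x \leq \frac{\sin x}{x} \leq 1,

and since cos x → 1 as x → 0, the squeeze theorem (with g(x) = cos x and h(x) = 1) yields the limit. This sketch takes the geometric area comparison for granted and is included only to show how the theorem is typically applied.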
