Integration by substitution

From Wikipedia, the free encyclopedia

In calculus, the substitution rule is a tool for finding antiderivatives and integrals. Using the fundamental theorem of calculus often requires finding an antiderivative, and for this and other reasons the substitution rule is an important tool for mathematicians. It is the counterpart to the chain rule of differentiation.

Let I \subseteq \mathbb{R} be a real interval and g : [a,b] \to I a continuously differentiable function. Suppose that f : I \to \mathbb{R} is a continuous function. Then


\int_a^b f(g(t))g'(t)\, dt = \int_{g(a)}^{g(b)} f(x)\,dx.

The formula is best remembered using Leibniz notation: the substitution x = g(t) yields dx / dt = g'(t) and thus formally dx = g'(t)\,dt, which is precisely the required substitution for dx. (In fact, one may view the substitution rule as a major justification of Leibniz's notation for integrals and derivatives.)

The formula is used to transform one integral into another integral that is easier to compute. Thus, the formula can be used from left to right or from right to left in order to simplify a given integral. When used in the latter manner, it is sometimes known as u-substitution.
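
As a quick sanity check of the formula (a minimal sketch assuming SymPy is available; the choices f(x) = cos(x), g(t) = t² and [a, b] = [0, 2] are purely illustrative), both sides can be computed and compared:

import sympy as sp

t, x = sp.symbols('t x')
g = t**2                         # substitution x = g(t) = t^2
a, b = 0, 2

# Left-hand side: integral of f(g(t)) g'(t) over [a, b]
lhs = sp.integrate(sp.cos(g) * sp.diff(g, t), (t, a, b))
# Right-hand side: integral of f(x) over [g(a), g(b)]
rhs = sp.integrate(sp.cos(x), (x, g.subs(t, a), g.subs(t, b)))

print(sp.simplify(lhs - rhs))    # 0, so the two integrals agree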

Proof of the substitution rule

We will now prove the substitution rule for the Riemann integral. For this, let f and g be functions satisfying the above hypotheses. Then, since f, g and g' are all continuous, so is the function t \mapsto f(g(t))g'(t) : [a,b] \to \mathbb{R}. Hence, the Riemann integrals


\int_{g(a)}^{g(b)} f(x)\,dx

and


\int_a^b f(g(t))g'(t)\,dt

in fact exist, and it remains to show that they are equal.

Since f is continuous, it possesses an antiderivative F : I \to \mathbb{R}. The composite function F \circ g : [a,b] \to \mathbb{R} is then defined. Since F and g are differentiable, we moreover have


(F \circ g)'(t) = F'(g(t))g'(t) = f(g(t))g'(t)

for all t \in [a,b] by the chain rule. Applying the fundamental theorem of calculus twice, we obtain


\begin{align}
\int_a^b f(g(t))g'(t)\,dt & {} = (F \circ g)(b) - (F \circ g)(a) \\
& {} = F(g(b)) - F(g(a)) \\
& {} = \int_{g(a)}^{g(b)} f(x)\,dx,
\end{align}

as desired.

Examples

Consider the integral


\int_{0}^2 x \cos(x^2+1) \,dx

By using the substitution u = x² + 1, we obtain du = 2x dx and


\begin{align}
\int_{0}^2 x \cos(x^2+1) \,dx & {} = \frac{1}{2} \int_{0}^2 \cos(x^2+1) 2x \,dx \\
& {} = \frac{1}{2} \int_{1}^5\cos(u)\,du \\
& {} = \frac{1}{2}(\sin(5)-\sin(1)).
\end{align}

Here we used the substitution rule from right to left. Note how the lower limit x = 0 was transformed into u = 0² + 1 = 1 and the upper limit x = 2 into u = 2² + 1 = 5.
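
This value can also be checked symbolically (a small sketch assuming SymPy):

import sympy as sp

x = sp.symbols('x')
integral = sp.integrate(x * sp.cos(x**2 + 1), (x, 0, 2))
expected = (sp.sin(5) - sp.sin(1)) / 2

print(sp.simplify(integral - expected))   # 0, matching the result above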

For the integral


\int_0^1 \sqrt{1-x^2}\; dx

the formula needs to be used from left to right: the substitution x = sin(u), dx = cos(u) du is useful, because √(1-sin²(u)) = cos(u):


\int_0^1 \sqrt{1-x^2}\; dx = \int_0^\frac{\pi}{2} \sqrt{1-\sin^2(u)} \cos(u)\;du = \int_0^\frac{\pi}{2} \cos^2(u)\;du

The resulting integral can be computed using integration by parts or a double angle formula followed by one more substitution.
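
For completeness, the double angle formula cos²(u) = (1 + cos(2u))/2 finishes the computation:


\int_0^\frac{\pi}{2} \cos^2(u)\;du = \int_0^\frac{\pi}{2} \frac{1+\cos(2u)}{2}\;du = \left[\frac{u}{2} + \frac{\sin(2u)}{4}\right]_0^\frac{\pi}{2} = \frac{\pi}{4},

which is the area of a quarter of the unit disk, as expected for the original integral.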

Antiderivatives

The substitution rule can be used to determine antiderivatives. One chooses a relation between x and u, determines the corresponding relation between dx and du by differentiating, and performs the substitutions. An antiderivative for the substituted function can hopefully be determined; the original substitution between x and u is then undone.

Similar to our first example above, we can determine the following antiderivative with this method:


\begin{align}
& {} \quad \int u \cos(u^2+1) \,du = \frac{1}{2} \int \cos(u^2+1) 2u \,du \\
& {} = \frac{1}{2} \int\cos(x)\,dx = \frac{1}{2}\sin(x) + C = \frac{1}{2}\sin(u^2+1) + C
\end{align}

where C is an arbitrary constant of integration.

Note that there were no integral boundaries to transform, but in the last step we had to revert the original substitution x = u² + 1.
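
The result is easy to verify by differentiating it and comparing with the original integrand (a minimal sketch assuming SymPy):

import sympy as sp

u = sp.symbols('u')
antiderivative = sp.sin(u**2 + 1) / 2

# Differentiation should recover the integrand u*cos(u^2 + 1), up to the constant C.
print(sp.simplify(sp.diff(antiderivative, u) - u * sp.cos(u**2 + 1)))   # 0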

Substitution rule for multiple variables

One may also use substitution when integrating functions of several variables. Here the substitution function (v₁, ..., vₙ) = φ(u₁, ..., uₙ) needs to be one-to-one and continuously differentiable, and the differentials transform as

dv_1\cdots dv_n = |\det(\operatorname{D}\phi)(u_1, \ldots, u_n)| \, du_1\cdots du_n

where det(Dφ)(u₁, ..., uₙ) denotes the determinant of the Jacobian matrix of partial derivatives of φ. This formula expresses the fact that the absolute value of the determinant of the matrix formed by given vectors equals the volume of the parallelepiped they span.
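
For example, for the polar coordinate map φ(r, θ) = (r cos θ, r sin θ) the Jacobian determinant is r, which gives the familiar transformation dv₁ dv₂ = r dr dθ. A short symbolic sketch (assuming SymPy; the variable names are illustrative):

import sympy as sp

r, theta = sp.symbols('r theta', positive=True)
phi = sp.Matrix([r * sp.cos(theta), r * sp.sin(theta)])   # polar coordinate map
jacobian = phi.jacobian([r, theta])                       # matrix of partial derivatives

print(sp.simplify(jacobian.det()))   # r, hence dv1 dv2 = r dr dtheta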

More precisely, the change of variables formula is stated in the following theorem:

Theorem. Let U, V be open sets in Rⁿ and φ : U → V an injective differentiable function with continuous partial derivatives, the Jacobian of which is nonzero for every x in U. Then for any real-valued, compactly supported, continuous function f with support contained in φ(U),

 \int_{\varphi(U)} f(\mathbf{v})\, d \mathbf{v} = \int_U f(\varphi(\mathbf{u})) \left|\det(\operatorname{D}\varphi)(\mathbf{u})\right| \,d \mathbf{u}.
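
The theorem can be illustrated numerically (a sketch assuming SciPy; here f(v₁, v₂) = exp(-v₁² - v₂²), φ is the polar coordinate map on U = (0, 1) × (0, 2π), and φ(U) is the unit disk up to a set of measure zero):

import numpy as np
from scipy.integrate import dblquad

f = lambda v1, v2: np.exp(-v1**2 - v2**2)

# Left-hand side: integral of f over the unit disk in Cartesian coordinates.
lhs, _ = dblquad(lambda y, x: f(x, y),
                 -1, 1, lambda x: -np.sqrt(1 - x**2), lambda x: np.sqrt(1 - x**2))

# Right-hand side: the same integral after the polar substitution, with |det(D phi)| = r.
rhs, _ = dblquad(lambda theta, r: f(r * np.cos(theta), r * np.sin(theta)) * r,
                 0, 1, lambda r: 0, lambda r: 2 * np.pi)

print(lhs, rhs)   # both approximately pi*(1 - exp(-1)), about 1.986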

Application in probability

The substitution rule can be used to answer the following important question in probability: given a random variable X with probability density p_x and another random variable Y related to X by the equation y = Φ(x), what is the probability density for Y?

It is easiest to answer this question by first answering a slightly different question: what is the probability that Y takes a value in some particular subset S? Denote this probability P(Y \in S). Of course, if Y has probability density p_y then the answer is

P(Y \in S) = \int_S p_y(y)\,dy,

but this isn't really useful because we don't know p_y; it's what we're trying to find in the first place. We can make progress by considering the problem in the variable X. Y takes a value in S whenever X takes a value in Φ⁻¹(S), so

 P(Y \in S) = \int_{\Phi^{-1}(S)} p_x(x)\,dx.

Changing from variable x to y gives


P(Y \in S) = \int_{\Phi^{-1}(S)} p_x(x)~dx = \int_S p_x(\Phi^{-1}(y)) ~ \left|\frac{d\Phi^{-1}}{dy}\right|~dy.

Combining this with our first equation gives


\int_S p_y(y)~dy = \int_S p_x(\Phi^{-1}(y)) ~ \left|\frac{d\Phi^{-1}}{dy}\right|~dy

so


p_y(y) = p_x(\Phi^{-1}(y)) ~ \left|\frac{d\Phi^{-1}}{dy}\right|.
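
A concrete check (a sketch assuming SciPy, with X standard normal and the illustrative choice Φ(x) = exp(x), so that Φ⁻¹(y) = ln(y) and Y is lognormal):

import numpy as np
from scipy.stats import norm, lognorm

y = np.linspace(0.1, 5.0, 50)

# p_y(y) = p_x(Phi^{-1}(y)) * |d Phi^{-1}/dy| with Phi^{-1}(y) = log(y), derivative 1/y.
p_y = norm.pdf(np.log(y)) / y

# SciPy's lognorm with s=1 (and default scale=1) is the distribution of exp(X), X ~ N(0, 1).
print(np.allclose(p_y, lognorm.pdf(y, s=1)))   # True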

In the case where X and Y depend on several uncorrelated variables, i.e. p_x = p_x(x_1, \ldots, x_n) and y = Φ(x), p_y can be found by use of the substitution rule in several variables discussed above. The result is


p_y(y) = p_x(\Phi^{-1}(y)) ~ \left|\det \left[ D\Phi ^{-1}(y) \right] \right|.
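
For instance (a sketch assuming SciPy/NumPy, with X a two-dimensional standard normal and the illustrative linear map y = Φ(x) = Ax for an invertible matrix A, so Φ⁻¹(y) = A⁻¹y and DΦ⁻¹ = A⁻¹):

import numpy as np
from scipy.stats import multivariate_normal

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])            # invertible linear map y = A x
A_inv = np.linalg.inv(A)
y = np.array([1.0, -0.5])             # an arbitrary test point

# p_y(y) = p_x(Phi^{-1}(y)) * |det D Phi^{-1}(y)|
p_x = multivariate_normal(mean=np.zeros(2), cov=np.eye(2))
p_y_formula = p_x.pdf(A_inv @ y) * abs(np.linalg.det(A_inv))

# Direct density of Y = A X, which is normal with covariance A A^T.
p_y_direct = multivariate_normal(mean=np.zeros(2), cov=A @ A.T).pdf(y)

print(np.isclose(p_y_formula, p_y_direct))   # True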

See also

Substitution of variables
