Linear differential equation

In mathematics, a linear differential equation is a differential equation of the form

Ly = f,

where the differential operator L is a linear operator, y is the unknown function, and the right-hand side f is a given function. The linearity condition on L rules out operations such as taking the square of the derivative of y, but permits, for example, taking the second derivative of y. A fairly general form of such an equation is therefore

D^n y(x) + a_{n-1}(x)D^{n-1} y(x) + \cdots + a_1(x) D y(x) + a_0(x) y(x) =f(x)

where D is the differential operator d/dx (i.e. Dy = y′, D²y = y″, …), and the a_i are given functions. Such an equation is said to have order n, the order of the highest derivative of y that is involved. (If this derivative carries a coefficient a_n, it is assumed to be non-zero and is removed by dividing through by it; where such a coefficient can vanish, the resulting cases must be analysed separately.)
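
For instance, the equation

D^2 y(x) + x\,D y(x) + y(x) = \sin x

is a linear differential equation of order n = 2, with coefficients a_1(x) = x and a_0(x) = 1 and right-hand side f(x) = \sin x.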

If y is a function of a single variable, one speaks of an ordinary differential equation; otherwise the derivatives and their coefficients must be understood as (contracted) vectors, matrices or tensors of higher rank, and one has a (linear) partial differential equation.

The case where f = 0 is called a homogeneous equation, and is particularly important to the solution of the general case (by a method traditionally called particular integral and complementary function). When the a_i are constants, the equation is said to have constant coefficients.
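
For example, the equation

D^2 y + 3\,D y + 2 y = 0

is homogeneous with constant coefficients, whereas D^2 y + x\,D y + y = \sin x is neither homogeneous nor of constant coefficients.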

Homogeneous linear differential equation with constant coefficients

To solve such an equation one makes a substitution

y = e^{\lambda x}

to form the characteristic equation

\lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_1\lambda + a_0 = 0

to obtain the solutions

\lambda=s_0, s_1, \dots, s_{n-1}.

When this polynomial has distinct roots, we have immediately n solutions to the differential equation in the form

y_i(x) = e^{s_i x}.

It can be shown, by means of the Vandermonde determinant, that these solutions are linearly independent. Since homogeneous linear differential equations obey the superposition principle, any linear combination of these solutions, with n arbitrary coefficients, is again a solution; indeed, the general solution of the homogeneous equation is precisely such a linear combination of the y_i, i.e.,

y_H(x) = A_0 y_0(x) + A_1 y_1(x) + \cdots + A_{n-1} y_{n-1}(x).

When the roots are not distinct, it may be necessary to multiply some of these solutions by powers of x to obtain linear independence; the general solution therefore involves products of exponentials with polynomials, whose degrees are bounded by the multiplicities of the corresponding roots.
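
As a brief illustration of this method, consider the equation

D^2 y - 3\,D y + 2 y = 0.

Substituting y = e^{\lambda x} gives the characteristic equation

\lambda^2 - 3\lambda + 2 = (\lambda - 1)(\lambda - 2) = 0,

whose distinct roots \lambda = 1 and \lambda = 2 yield the general solution

y_H(x) = A_0 e^{x} + A_1 e^{2 x}.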

Non-homogeneous linear differential equation with constant coefficients

To obtain the solution to the non-homogeneous equation (sometimes called the inhomogeneous equation), find a particular solution y_P(x) by either the method of undetermined coefficients or the method of variation of parameters; the general solution of the linear differential equation is then the sum of the general solution of the related homogeneous equation and this particular solution.
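
For example, to solve

D^2 y + y = x

by the method of undetermined coefficients, try a particular solution of the same form as the right-hand side, y_P(x) = a x + b; substituting gives a = 1 and b = 0, so y_P(x) = x. The related homogeneous equation D^2 y + y = 0 has general solution A_0 \cos x + A_1 \sin x, and hence the general solution of the non-homogeneous equation is

y(x) = A_0 \cos x + A_1 \sin x + x.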

Linear differential equations with variable coefficients

An example of a linear differential equation with variable coefficients is

Dy(x) + f(x)y(x) = g(x) (this is a first-order linear ordinary differential equation).

Equations of this form can be solved by forming the integrating factor

e^{\int f(x)\,dx},

multiplying throughout to obtain

Dy(x)e^{\int f(x)\,dx}+f(x)y(x)e^{\int f(x)\,dx}=g(x)e^{\int f(x)\,dx}

which simplifies due to the product rule to

D (y(x)e^{\int f(x)\,dx})=g(x)e^{\int f(x)\,dx}

which, on integrating both sides, yields

y(x)e^{\int f(x)\,dx}=\int g(x)e^{\int f(x)\,dx} \,dx+c ~,
y(x) = {\int g(x)e^{\int f(x)\,dx} \,dx+c \over e^{\int f(x)\,dx}} ~.
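
As a concrete illustration, consider for x > 0 the equation

Dy(x) + {1 \over x}\,y(x) = 3x.

The integrating factor is e^{\int dx/x} = x, and multiplying through and applying the product rule gives

D\left(x\,y(x)\right) = 3x^2,

so that x\,y(x) = x^3 + c and hence

y(x) = x^2 + {c \over x}.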

Example (I)

Given the first order differential equation

(D - k)\,y = 0

which is equivalent to

Dy = ky,

divide both sides by y (the trivial solution y = 0 is recovered below by taking A = 0),

{D y \over y} = k,
D \ln y = k.

Integrate both sides

\ln y = \int k \, dx = k x + B,

then exponentiate both sides to obtain

y = e^B e^{kx} = A e^{kx},

which is the general solution.
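
As a check, differentiating this solution gives

D\left(A e^{kx}\right) = k A e^{kx} = k y,

so y = A e^{kx} indeed satisfies (D - k)\,y = 0 for any constant A = e^B.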

Example (II)

The second order differential equation

D^2 y = -k^2 y,

which represents a simple harmonic oscillator, can be restated as

(D^2 + k^2)\,y = 0.

The expression in parentheses can be factored, yielding

(D + ik)(D - ik)\,y = 0,

which has a pair of linearly independent solutions, one for

(D - ik)\,y = 0

and another for

(D + ik)y = 0.

The solutions are, respectively,

y_0 = A_0 e^{ikx}

and

y_1 = A_1 e^{-ikx}.

These solutions provide a basis for the two-dimensional "solution space" of the second order differential equation, meaning that linear combinations of these solutions will also be solutions. In particular, the following solutions can be constructed:

y_{0'} = {A_0 e^{i k x} + A_0 e^{-i k x} \over 2} = A_0 \cos (k x)

and

y_{1'} = {A_1 e^{i k x} - A_1 e^{-i k x} \over 2 i} = A_1 \sin (k x).

These last two trigonometric solutions are linearly independent, so they can serve as another basis for the solution space, yielding the following general solution:

y_H = A_0 \cos (k x) + A_1 \sin (k x).
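
As a check, differentiating this general solution twice gives

D^2\left(A_0 \cos (k x) + A_1 \sin (k x)\right) = -k^2\left(A_0 \cos (k x) + A_1 \sin (k x)\right) = -k^2 y_H,

so y_H satisfies D^2 y = -k^2 y for any constants A_0 and A_1.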

Example (III)

Given the equation for the damped harmonic oscillator:

\left(D^2 + {b \over m} D + \omega_0^2\right)  y =  0,

the expression in parentheses can be factored: first obtain the characteristic equation by replacing D with λ. For a solution of the form y = e^{λx}, this equation must be satisfied, thus:

\lambda^2 + {b \over m} \lambda + \omega_0^2 = 0.

Solve using the quadratic formula:

\lambda = {-b/m \pm \sqrt{b^2 / m^2 - 4 \omega_0^2} \over 2}.

Use these roots to factor the original differential equation:

\left(D + {b \over 2 m} - \sqrt{{b^2 \over 4 m^2} - \omega_0^2} \right) \left(D + {b \over 2m} + \sqrt{{b^2 \over 4 m^2} - \omega_0^2}\right) y = 0.

This implies a pair of solutions, one corresponding to

\left(D + {b \over 2 m} - \sqrt{{b^2 \over 4 m^2} - \omega_0^2} \right) y = 0

and another to

\left(D + {b \over 2m} + \sqrt{{b^2 \over 4 m^2} - \omega_0^2}\right) y = 0

The solutions are, respectively,

y_0 = A_0 e^{-\omega x + \sqrt{\omega^2 - \omega_0^2} x} = A_0 e^{-\omega x} e^{\sqrt{\omega^2 - \omega_0^2} x}

and

y_1 = A_1 e^{-\omega x - \sqrt{\omega^2 - \omega_0^2} x} = A_1 e^{-\omega x} e^{-\sqrt{\omega^2 - \omega_0^2} x}

where ω = b / 2m. From this linearly independent pair of solutions another linearly independent pair can be constructed, which likewise serves as a basis for the two-dimensional solution space:

y_H (A_0, A_1) (x) = \left(A_0 \sinh \sqrt{\omega^2 - \omega_0^2} x + A_1 \cosh \sqrt{\omega^2 - \omega_0^2} x\right) e^{-\omega x}.

However, if |ω| < |ω0|, the square roots become imaginary, and it is then preferable to express the general solution in terms of real functions:

y_H (A_0, A_1) (x) = \left(A_0 \sin \sqrt{\omega_0^2 - \omega^2} x + A_1 \cos \sqrt{\omega_0^2 - \omega^2} x\right) e^{-\omega x}.

This latter solution corresponds to the underdamped case, whereas the former one corresponds to the overdamped case: the solutions for the underdamped case oscillate whereas the solutions for the overdamped case do not.

Generalization

From these examples it is not hard to induce the general case: a homogeneous linear differential equation with constant coefficients can be represented as

P(D) \, y(x) = 0

where P(D) is a polynomial in the operator D, and y is the function of x to be solved for. The polynomial P(D) can then always be factored (over the complex numbers) into a product of linear factors:

P(D) = \prod_i (D - \lambda_i).

This can in the general case be effected by finding the roots λ_i of the equation

P(λ) = 0.

It is evident that each factor (D − λ_i) of P(D) will contribute a solution

y_i = A_i e^{\lambda_i x}.

Moreover, since the degree of the characteristic equation equals the order of its associated linear differential equation, and since the fundamental theorem of algebra guarantees that the characteristic equation has exactly as many roots (not necessarily real or distinct) as its degree, a linear differential equation of order n will generally (except for degeneracies) have n linearly independent solutions, which then serve as a basis for the n-dimensional "solution space", a vector space under pointwise addition and scaling of solutions.
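
In the degenerate case of a repeated root, the missing solutions are obtained, as noted above, by multiplying by powers of x. For example, the equation

(D - 1)^2 y = D^2 y - 2\,D y + y = 0

has characteristic equation (\lambda - 1)^2 = 0 with the double root \lambda = 1, and its general solution is

y_H(x) = (A_0 + A_1 x)\, e^{x}.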

See also