Finite difference

From Wikipedia, the free encyclopedia

A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets an expression similar to a differential quotient, except that it uses finite quantities instead of infinitesimal ones. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of partial differential equations.

In mathematical analysis, the finite difference is viewed as an operator, and one studies its properties.

Forward, backward and central differences

Only three forms are commonly considered: forward, backward and central differences.

A forward difference is an expression of the form

\Delta[f](x) =  f(x + h) - f(x). \,


Depending on the application, the spacing h is held constant, or the limit h → 0 is taken.

A backward difference uses the function values at x and x − h, instead of the values at x + h and x:

\nabla[f](x) =  f(x) - f(x-h).


Finally, the central difference is given by

\delta[f](x) = f(x+h/2) - f(x-h/2).
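
These definitions translate directly into code. The following is a minimal Python sketch (the function names and the choice of sin as a test function are illustrative, not part of any standard library):

    import math

    def forward_difference(f, x, h):
        """Forward difference: Delta[f](x) = f(x + h) - f(x)."""
        return f(x + h) - f(x)

    def backward_difference(f, x, h):
        """Backward difference: nabla[f](x) = f(x) - f(x - h)."""
        return f(x) - f(x - h)

    def central_difference(f, x, h):
        """Central difference: delta[f](x) = f(x + h/2) - f(x - h/2)."""
        return f(x + h / 2) - f(x - h / 2)

    # Example (illustrative): differences of sin at x = 1 with spacing h = 0.1.
    x, h = 1.0, 0.1
    print(forward_difference(math.sin, x, h))   # ~ 0.0497
    print(backward_difference(math.sin, x, h))  # ~ 0.0581
    print(central_difference(math.sin, x, h))   # ~ 0.0540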

Relation with derivatives

The derivative of a function f at a point x is defined by the limit

f'(x) = \lim_{h\to0} \frac{f(x+h) - f(x)}{h}.

If h has a fixed (non-zero) value, instead of approaching zero, then the right-hand side is

\frac{f(x + h) - f(x)}{h} = \frac{\Delta[f](x)}{h}.

Hence, the forward difference divided by h approximates the derivative when h is small. The error in this approximation can be derived from Taylor's theorem. Assuming that f is continuously differentiable, the error is

\frac{\Delta[f](x)}{h} - f'(x) = O(h) \quad (h \to 0).

The same formula holds for the backward difference:

\frac{\nabla[f](x)}{h} - f'(x) = O(h).

However, the central difference yields a more accurate approximation. Its error is proportional to the square of the spacing (if f is twice continuously differentiable):

\frac{\delta[f](x)}{h} - f'(x) = O(h^{2}).
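
The different error orders can be observed numerically. Below is a small Python sketch (the test function sin and the starting spacing h = 0.1 are arbitrary choices) that halves h repeatedly and prints the errors of the forward and central difference quotients:

    import math

    f, df = math.sin, math.cos   # test function and its exact derivative
    x, h = 1.0, 0.1

    for _ in range(5):
        fwd_err = abs((f(x + h) - f(x)) / h - df(x))
        cen_err = abs((f(x + h / 2) - f(x - h / 2)) / h - df(x))
        print(f"h={h:.5f}  forward error={fwd_err:.2e}  central error={cen_err:.2e}")
        h /= 2

    # The forward error shrinks by roughly a factor of 2 per halving (O(h)),
    # the central error by roughly a factor of 4 per halving (O(h^2)).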

Calculus of finite differences

The forward difference can be considered as a difference operator, which maps the function f to Δf. Taylor's theorem can be expressed by the formula

\Delta = hD + \frac12 h^2D^2 + \frac1{3!} h^3D^3 + \cdots = \mathrm{e}^{hD} - 1,

where D denotes the derivative operator, mapping f to its derivative f'. Formally inverting the exponential suggests that

hD = \log(1+\Delta) = \Delta - \frac12 \Delta^2 + \frac13 \Delta^3 + \cdots. \,

This formula holds in the sense that both operators give the same result when applied to a polynomial. Even for analytic functions, the series on the right is not guaranteed to converge; it may be an asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields

f'(x) \approx \frac{\Delta[f](x) - \frac12 \Delta^2[f](x)}{h} = - \frac{f(x+2h)-4f(x+h)+3f(x)}{2h}.

The error in this approximation is of order h².
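
A quick numeric sanity check of the two-term formula above, as a Python sketch (the test function exp and the spacings are arbitrary choices):

    import math

    f, df = math.exp, math.exp   # test function; its derivative is itself
    x = 0.5

    for h in (0.1, 0.05, 0.025):
        approx = -(f(x + 2 * h) - 4 * f(x + h) + 3 * f(x)) / (2 * h)
        print(f"h={h:.3f}  error={abs(approx - df(x)):.2e}")

    # The error shrinks by roughly a factor of 4 per halving of h,
    # consistent with an O(h^2) error term.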

The analogous formulas for the backward and central difference operators are

hD = -\log(1-\nabla) \quad\mbox{and}\quad hD = 2 \operatorname{arcsinh}\left( \tfrac{\delta}{2} \right).

Higher order derivatives

In an analogous way one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula for f'(x + h/2) and f'(x − h/2) and then applying a central difference formula for the derivative of f' at x, we obtain the central difference approximation of the second derivative of f:

f''(x) \approx \frac{\delta^2[f](x)}{h^2} =  \frac{f(x+h) - 2 f(x) + f(x-h)}{h^{2}} .
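
A Python sketch of this second-derivative approximation (the function name is illustrative):

    import math

    def second_derivative_central(f, x, h):
        """Approximate f''(x) by (f(x+h) - 2 f(x) + f(x-h)) / h^2."""
        return (f(x + h) - 2 * f(x) + f(x - h)) / h**2

    # Example: f = sin, so f''(x) = -sin(x).
    x, h = 1.0, 1e-3
    print(second_derivative_central(math.sin, x, h))  # ~ -0.8415 = -sin(1)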

Finite difference methods

Another important aspect is that finite differences approach differential quotients as h goes to zero. Thus, we can use finite differences to approximate derivatives. This is often used in numerical analysis, especially in numerical ordinary differential equations and numerical partial differential equations, which aim at the numerical solution of ordinary and partial differential equations respectively. The resulting methods are called finite difference methods.
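
As a toy illustration of a finite difference method (a sketch only, with an arbitrarily chosen test problem), the derivative in the initial value problem y' = −y, y(0) = 1 can be replaced by a forward difference, which gives the explicit recursion y_{k+1} = y_k − h·y_k:

    import math

    # Toy finite difference method: solve y' = -y, y(0) = 1 on [0, 1]
    # by replacing y'(t_k) with the forward difference (y_{k+1} - y_k) / h.
    n = 100
    h = 1.0 / n
    y = 1.0
    for _ in range(n):
        y += h * (-y)        # y_{k+1} = y_k + h * (-y_k)

    print(y)                 # ~ 0.3660, numerical value at t = 1
    print(math.exp(-1.0))    # ~ 0.3679, exact solution exp(-1)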

Common applications of the finite difference method are in computational science and engineering disciplines, such as thermal engineering and fluid mechanics.

Finite difference as an operator

In mathematical analysis, the finite difference of first order is defined as

\Delta_h(f,x) = \Delta^{1}_h(f,x) := f(x+h)-f(x).

Note that \Delta_h(f,x) can still be regarded as a function of x, so \Delta_h is an operator acting on the function f(x). Sometimes it is also defined as

\Delta_h = T_h - I.

Here T_h is the shift operator with step h, defined by T_h : f(x) \rightarrow f(x+h), and I is the identity operator.
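
This operator point of view translates naturally into code, with operators represented as functions that map a function to a function. A minimal Python sketch (the representation via closures is one possible choice, not the only one):

    import math

    def shift(h):
        """Shift operator T_h: maps f to the function x -> f(x + h)."""
        def T(f):
            return lambda x: f(x + h)
        return T

    def identity(f):
        """Identity operator I: maps f to itself."""
        return f

    def forward_diff(h):
        """Forward difference operator Delta_h = T_h - I."""
        T = shift(h)
        def Delta(f):
            return lambda x: T(f)(x) - identity(f)(x)
        return Delta

    Delta = forward_diff(0.1)
    g = Delta(math.sin)   # g is again a function of x
    print(g(1.0))         # sin(1.1) - sin(1.0) ~ 0.0497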

Finite differences of higher order can be defined recursively as

\Delta^n_h(f,x):=\Delta_h(\Delta^{n-1}_h(f,x), x)

or, in operator notation,

\Delta^n_h:=\Delta_h(\Delta^{n-1}_h).

For example, by this definition,

\Delta^2_h(f,x) = \Delta_h(f,x+h) - \Delta_h(f,x) =  (f(x+2h)-f(x+h)) - (f(x+h)-f(x)) = f(x+2h) - 2f(x+h) + f(x).

Here the sequence of coefficients (+1, −2, +1) is the third row of Pascal's triangle with alternating signs. It can be shown that this pattern is valid for higher orders as well. In general,

\Delta^n_h(f,x):=\sum\limits_{k=0}^n (-1)^{n-k} \binom{n}{k} f(x+kh).
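
A Python sketch of the n-th order forward difference, computed both from the recursive definition and from the binomial-coefficient formula above; the two should agree up to rounding (math.comb requires Python 3.8+; the test function and parameters are arbitrary choices):

    import math

    def delta_recursive(f, x, h, n):
        """Delta^n_h(f, x) from the recursive definition."""
        if n == 0:
            return f(x)
        return delta_recursive(f, x + h, h, n - 1) - delta_recursive(f, x, h, n - 1)

    def delta_binomial(f, x, h, n):
        """Delta^n_h(f, x) = sum_{k=0}^n (-1)^(n-k) C(n, k) f(x + k h)."""
        return sum((-1) ** (n - k) * math.comb(n, k) * f(x + k * h)
                   for k in range(n + 1))

    print(delta_recursive(math.sin, 1.0, 0.1, 3))  # both print the same value,
    print(delta_binomial(math.sin, 1.0, 0.1, 3))   # ~ -4.1e-4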

Another possible (and equivalent) definition is

\Delta^n_h = [T_h - I]^n.

Relation with derivatives

For a sufficiently smooth function f(x),

\lim\limits_{h \rightarrow 0 } \frac{\Delta^n_h(f,x)}{h^n} = f^{(n)}(x) = D^n(f).

Note that Dⁿ is an operator as well: it is the n-th power of the differential operator D.

Also, for a sufficiently smooth function f(x), there exists a point \theta \in [x, x+nh] such that

f^{(n)}(\theta) = \frac{\Delta^n_h(f,x)}{h^n}.
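
A numeric sketch of this limit, reusing a binomial-sum helper like the one sketched earlier (the test function and spacings are arbitrary choices):

    import math

    def delta_n(f, x, h, n):
        """n-th order forward difference via the binomial-coefficient formula."""
        return sum((-1) ** (n - k) * math.comb(n, k) * f(x + k * h)
                   for k in range(n + 1))

    f = math.sin              # third derivative of sin is -cos
    x, n = 1.0, 3
    for h in (0.1, 0.01, 0.001):
        print(f"h={h:.3f}  Delta^3/h^3 = {delta_n(f, x, h, n) / h**n:.6f}")
    print(f"exact f'''(x)      = {-math.cos(x):.6f}")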

For further relations see also divided differences.

Properties

  • The operator is linear: for constants α and β,
\Delta^{n}_h (\alpha f+\beta g, x) = \alpha \Delta^{n}_h (f, x) + \beta \Delta^{n}_h (g, x).
  • For all j and k:
\Delta^{j+k}_h (f, x) = \Delta_h^j[\Delta_h^k(f,x)].
  • A Leibniz-type rule holds for the product of two functions:
\Delta^n_h (fg, x) = \sum\limits_{k=0}^n \binom{n}{k} \Delta^k_h (f, x) \Delta^{n-k}_h(g, x+kh).
  • For all positive k and n:
\Delta^n_{kh} (f, x) = \sum\limits_{i_1=0}^{k-1} \sum\limits_{i_2=0}^{k-1} \cdots \sum\limits_{i_n=0}^{k-1} \Delta^n_h (f, x+i_1h+i_2h+\cdots+i_nh).
  • If f^{(n)} is continuous:
\Delta^n_{h} (f, x) = \int\limits_{0}^{h} du_1 \int\limits_{0}^{h} du_2 \cdots \int\limits_{0}^{h} f^{(n)}(x+u_1+u_2+\cdots+u_n)\, du_n.
  • A finite difference of first order reduces the degree of any algebraic polynomial by one. Consequently, for any polynomial p(x) of degree less than n, \Delta^n_h (p, x) \equiv 0 (see the sketch after this list).
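
A minimal Python sketch checking the last property for a concrete cubic polynomial (the polynomial, the evaluation point and the spacing are arbitrary choices for illustration):

    import math

    def delta_n(f, x, h, n):
        """n-th order forward difference, Delta^n_h(f, x)."""
        return sum((-1) ** (n - k) * math.comb(n, k) * f(x + k * h)
                   for k in range(n + 1))

    p = lambda t: 2 * t**3 - t + 5   # a polynomial of degree 3

    # Delta^3 reduces the cubic to a constant (2 * 3! * h^3), Delta^4 annihilates it.
    print(delta_n(p, 0.7, 0.1, 3))   # 0.012, independent of the point 0.7
    print(delta_n(p, 0.7, 0.1, 4))   # 0.0 (up to rounding)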

Generalizations

A generalized finite difference operator is usually defined as

\Delta_h^\mu(f,x):=\sum\limits_{k=0}^N c_k f(x+kh),

where μ = {c_k} is its coefficient vector. Sometimes the word generalized is omitted, and these operators are simply called finite difference operators; in that case the finite difference is a special case of a finite difference operator.
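
A Python sketch of such a generalized operator, with the coefficient vector passed in explicitly (the function name and the example coefficient vectors are illustrative):

    import math

    def generalized_difference(coeffs, h):
        """Return the operator f -> (x -> sum_k c_k f(x + k h)) for a coefficient vector coeffs."""
        def op(f):
            return lambda x: sum(c * f(x + k * h) for k, c in enumerate(coeffs))
        return op

    # The ordinary forward difference is the special case mu = (-1, 1).
    Delta = generalized_difference((-1, 1), 0.1)
    print(Delta(math.sin)(1.0))    # sin(1.1) - sin(1.0) ~ 0.0497

    # mu = (1, -2, 1) gives the second-order difference Delta^2_h.
    Delta2 = generalized_difference((1, -2, 1), 0.1)
    print(Delta2(math.sin)(1.0))   # ~ -0.0089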

To generalize further, the finite sum above can be replaced with an infinite series, giving an infinite difference operator.

There is also the notion of a difference operator, which is sometimes used as shorthand for some of the aforementioned classes of operators.

Usage

This notion of finite difference is used in the definition of the modulus of continuity.

Another usage is in the theory of functional difference equations.
