Finite difference

A finite difference is a mathematical expression of the form f(x + b) − f(x + a). If a finite difference is divided by b − a, one gets a difference quotient. The approximation of derivatives by finite differences plays a central role in finite difference methods for the numerical solution of differential equations, especially boundary value problems.

In mathematical analysis, operators involving finite differences are studied. A difference operator is an operator which maps a function f to a function whose values are the corresponding finite differences.

Forward, backward, and central differences

Only three forms are commonly considered: forward, backward, and central differences.

A forward difference is an expression of the form

 \Delta_h[f](x) = f(x + h) - f(x).

Depending on the application, the spacing h may be variable or held constant.

A backward difference uses the function values at x and xh, instead of the values at x + h and x:

 \nabla_h[f](x) = f(x) - f(x-h).

Finally, the central difference is given by

 \delta_h[f](x) = f(x+\tfrac12h) - f(x-\tfrac12h).
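
For illustration, the following minimal sketch in Python evaluates the three differences; the function names, the test function and the step size are arbitrary choices made for this example:

    import math

    def forward(f, x, h):
        # forward difference: f(x + h) - f(x)
        return f(x + h) - f(x)

    def backward(f, x, h):
        # backward difference: f(x) - f(x - h)
        return f(x) - f(x - h)

    def central(f, x, h):
        # central difference: f(x + h/2) - f(x - h/2)
        return f(x + h / 2) - f(x - h / 2)

    h = 0.1
    print(forward(math.sin, 1.0, h), backward(math.sin, 1.0, h), central(math.sin, 1.0, h))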

Relation with derivatives

The derivative of a function f at a point x is defined by the limit

 f'(x) = \lim_{h\to0} \frac{f(x+h) - f(x)}{h}.

If h has a fixed (non-zero) value, instead of approaching zero, then the right-hand side is

 \frac{f(x + h) - f(x)}{h} = \frac{\Delta_h[f](x)}{h}.

Hence, the forward difference divided by h approximates the derivative when h is small. The error in this approximation can be derived from Taylor's theorem. Assuming that f is twice differentiable, the error is

 \frac{\Delta_h[f](x)}{h} - f'(x) = O(h) \quad (h \to 0).

The same formula holds for the backward difference:

 \frac{\nabla_h[f](x)}{h} - f'(x) = O(h).

However, the central difference yields a more accurate approximation. Its error is proportional to the square of the spacing (if f is three times differentiable):

 \frac{\delta_h[f](x)}{h} - f'(x) = O(h^{2}).
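
A rough numerical check of these error orders (a Python sketch; the sine test function, the point x = 1 and the step sizes are arbitrary choices):

    import math

    f, exact = math.sin, math.cos(1.0)
    x = 1.0
    for h in (0.1, 0.01, 0.001):
        fwd = (f(x + h) - f(x)) / h            # forward quotient, error O(h)
        ctr = (f(x + h/2) - f(x - h/2)) / h    # central quotient, error O(h^2)
        print(h, abs(fwd - exact), abs(ctr - exact))

Dividing h by ten reduces the forward error by roughly a factor of ten and the central error by roughly a factor of one hundred.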

Higher-order differences

In an analogous way one can obtain finite difference approximations to higher order derivatives and differential operators. For example, by using the above central difference formula for f'(x+h/2) and f'(x-h/2) and applying a central difference formula for the derivative of  f' at x, we obtain the central difference approximation of the second derivative of f:

 f''(x) \approx \frac{\delta_h^2[f](x)}{h^2} =  \frac{f(x+h) - 2 f(x) + f(x-h)}{h^{2}} .
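
A quick numerical check of this second-derivative approximation (a Python sketch; the sine test function is arbitrary, and sin''(x) = -sin(x) serves as the exact value):

    import math

    x = 1.0
    for h in (0.1, 0.01):
        approx = (math.sin(x + h) - 2 * math.sin(x) + math.sin(x - h)) / h**2
        print(h, abs(approx - (-math.sin(x))))   # error shrinks roughly like h^2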

More generally, the nth-order forward, backward, and central differences are respectively given by:

\Delta^n_h[f](x) = \sum_{i = 0}^{n} (-1)^i \binom{n}{i} f(x + (n - i) h),
\nabla^n_h[f](x) = \sum_{i = 0}^{n} (-1)^i \binom{n}{i} f(x - ih),
\delta^n_h[f](x) = \sum_{i = 0}^{n} (-1)^i \binom{n}{i} f\left(x + \left(\frac{n}{2} - i\right) h\right).

Note that the central difference will, for odd n, have h multiplied by non-integers. If this is a problem (usually it is), it may be remedied by taking the average of \delta^n_h[f](x - h/2) and \delta^n_h[f](x + h/2).

The relationship of these higher-order differences with the respective derivatives is very straightforward:

\frac{d^n f}{d x^n}(x) = \frac{\Delta_h^n[f](x)}{h^n}+O(h) = \frac{\nabla_h^n[f](x)}{h^n}+O(h) = \frac{\delta_h^n[f](x)}{h^n} + O(h^2).
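
The nth-order forward difference formula and this relation can be checked numerically as follows (a Python sketch; the exponential is chosen because its nth derivative is again the exponential):

    import math
    from math import comb, exp

    def forward_diff(f, x, h, n):
        # nth-order forward difference: sum of (-1)^i C(n, i) f(x + (n - i) h)
        return sum((-1)**i * comb(n, i) * f(x + (n - i) * h) for i in range(n + 1))

    x, h = 0.5, 1e-3
    for n in (1, 2, 3):
        print(n, forward_diff(exp, x, h, n) / h**n, exp(x))   # agree up to O(h)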

Higher-order differences can also be used to construct better approximations. As mentioned above, the first-order difference approximates the first-order derivative up to a term of order h. However, the combination

 \frac{\Delta_h[f](x) - \frac12 \Delta_h^2[f](x)}{h} = - \frac{f(x+2h)-4f(x+h)+3f(x)}{2h}

approximates f'(x) up to a term of order h2. This can be proven by expanding the above expression in Taylor series, or by using the calculus of finite differences, explained below.
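
A numerical check of this second-order combination (a Python sketch; the test function and the evaluation point are arbitrary):

    import math

    f, exact = math.sin, math.cos(1.0)
    x = 1.0
    for h in (0.1, 0.01):
        d1 = (f(x + h) - f(x)) / h                           # error O(h)
        d2 = -(f(x + 2*h) - 4*f(x + h) + 3*f(x)) / (2*h)     # error O(h^2)
        print(h, abs(d1 - exact), abs(d2 - exact))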

If necessary, the finite difference can be centered about any point by mixing forward, backward, and central differences.

Properties

For all positive k and n,

\Delta^n_{kh} (f, x) = \sum\limits_{i_1=0}^{k-1} \sum\limits_{i_2=0}^{k-1} \cdots \sum\limits_{i_n=0}^{k-1} \Delta^n_h \left(f, x+i_1h+i_2h+\cdots+i_nh\right).

The difference of a product obeys a Leibniz-type rule:

\Delta^n_h (fg, x) = \sum\limits_{k=0}^n \binom{n}{k} \Delta^k_h (f, x) \Delta^{n-k}_h(g, x+kh).
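
The Leibniz-type rule can be verified numerically (a Python sketch; the choice n = 2 and the polynomial test functions are arbitrary, and the two sides agree up to floating-point rounding):

    from math import comb

    def fwd(f, x, h, n):
        # nth-order forward difference via the binomial-sum formula (n = 0 returns f(x))
        return sum((-1)**i * comb(n, i) * f(x + (n - i) * h) for i in range(n + 1))

    f = lambda t: t**3
    g = lambda t: 2*t + 1
    x, h, n = 0.7, 0.1, 2

    lhs = fwd(lambda t: f(t) * g(t), x, h, n)
    rhs = sum(comb(n, k) * fwd(f, x, h, k) * fwd(g, x + k*h, h, n - k) for k in range(n + 1))
    print(lhs, rhs)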

Finite difference methods

Main article: finite difference method

An important application of finite differences is in numerical analysis, especially in numerical methods for ordinary and partial differential equations. The idea is to replace the derivatives appearing in the differential equation by finite differences that approximate them. The resulting methods are called finite difference methods.

Common applications of the finite difference method are in computational science and engineering disciplines, such as thermal engineering and fluid mechanics.

Calculus of finite differences

Main article: difference operator

The forward difference can be considered as a difference operator, which maps the function f to Δh[f]. This operator satisfies

\Delta_h = T_h-I, \,

where T_h is the shift operator with step h, defined by T_h[f](x) = f(x+h), and I is the identity operator.

Finite differences of higher orders can be defined recursively as \Delta^n_h(f,x):=\Delta_h(\Delta^{n-1}_h(f,x), x) or, in operator notation, \Delta^n_h:=\Delta_h(\Delta^{n-1}_h). Another possible (and equivalent) definition is \Delta^n_h = [T_h-I]^n.

The difference operator Δh is linear and satisfies the Leibniz-type rule given in the Properties section above. Similar statements hold for the backward and central differences.

Taylor's theorem can now be expressed by the formula

 \Delta_h = hD + \frac12 h^2D^2 + \frac1{3!} h^3D^3 + \cdots = \mathrm{e}^{hD} - 1,

where D denotes the derivative operator, mapping f to its derivative f'. Formally inverting the exponential suggests that

 hD = \log(1+\Delta_h) = \Delta_h - \frac12 \Delta_h^2 + \frac13 \Delta_h^3 + \cdots. \,

This formula holds in the sense that both operators give the same result when applied to a polynomial. Even for analytic functions, the series on the right is not guaranteed to converge; it may be an asymptotic series. However, it can be used to obtain more accurate approximations for the derivative. For instance, retaining the first two terms of the series yields the second-order approximation to f'(x) mentioned at the end of the section Higher-order differences.
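
Truncating the series for hD after m terms yields derivative approximations of increasing order, which can be observed numerically (a Python sketch; the test function and step size are arbitrary):

    import math
    from math import comb

    def fwd(f, x, h, n):
        # nth-order forward difference
        return sum((-1)**i * comb(n, i) * f(x + (n - i) * h) for i in range(n + 1))

    def deriv_series(f, x, h, m):
        # hD ~ Delta - Delta^2/2 + Delta^3/3 - ..., truncated after m terms, then divided by h
        return sum((-1)**(j + 1) * fwd(f, x, h, j) / j for j in range(1, m + 1)) / h

    x, h = 1.0, 0.1
    for m in (1, 2, 3):
        print(m, abs(deriv_series(math.sin, x, h, m) - math.cos(x)))   # error roughly O(h^m)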

The analogous formulas for the backward and central difference operators are

 hD = -\log(1-\nabla_h) \quad\mbox{and}\quad hD = 2 \, \operatorname{arcsinh}(\tfrac12\delta_h).

The calculus of finite differences is related to the umbral calculus in combinatorics.

Generalizations

A generalized finite difference is usually defined as

\Delta_h^\mu[f](x) = \sum_{k=0}^N \mu_k f(x+kh),

where \mu = (\mu_0,\ldots,\mu_N) is its coefficient vector. An infinite difference is a further generalization, in which the finite sum above is replaced by an infinite series. Another generalization is to let the coefficients \mu_k depend on the point x, \mu_k=\mu_k(x), giving a weighted finite difference. One may also let the step h depend on the point x, h=h(x). Such generalizations are useful for constructing different moduli of continuity.
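
A small Python sketch of such a generalized difference; the coefficient vector below is the standard choice that reproduces the second-order one-sided approximation of the first derivative mentioned earlier:

    import math

    def generalized_diff(f, x, h, mu):
        # generalized finite difference: sum over k of mu_k * f(x + k h)
        return sum(m_k * f(x + k * h) for k, m_k in enumerate(mu))

    mu = (-1.5, 2.0, -0.5)
    x, h = 1.0, 0.01
    print(generalized_diff(math.sin, x, h, mu) / h, math.cos(x))   # agree up to O(h^2)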

Finite difference in several variables

Finite differences can be considered in more than one variable. They are analogous to partial derivatives in several variables.
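
For example, a standard central-difference approximation of a mixed second partial derivative is

 \frac{\partial^2 f}{\partial x \, \partial y}(x,y) \approx \frac{f(x+h,y+k) - f(x+h,y-k) - f(x-h,y+k) + f(x-h,y-k)}{4 h k}.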
