Newton polynomial

From Wikipedia, the free encyclopedia

In the mathematical field of numerical analysis, a Newton polynomial, named after its inventor Isaac Newton, is the interpolation polynomial for a given set of data points in the Newton form. The Newton polynomial is sometimes called Newton's divided differences interpolation polynomial because the coefficients of the polynomial are calculated using divided differences.

As there is only one interpolation polynomial for a given set of data points, it is somewhat misleading to call this the Newton interpolation polynomial. The more precise name is interpolation polynomial in the Newton form.

Definition

Given a set of k + 1 data points

(x_0, y_0),\ldots,(x_k, y_k)

where no two x_j are the same, the interpolation polynomial in the Newton form is a linear combination of Newton basis polynomials

N(x) := \sum_{j=0}^{k} a_{j} n_{j}(x)

with the Newton basis polynomials defined as

n_j(x) := \prod_{i=0}^{j-1} (x - x_i)

and the coefficients defined as

a_j := [y_0,\ldots,y_j]

where

[y_0,\ldots,y_j]

is the notation for divided differences.

Thus the Newton polynomial can be written as

N(x) := [y_0] + [y_0,y_1](x-x_0) + \cdots + [y_0,\ldots,y_k](x-x_0)\cdots(x-x_{k-1}).
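
The formulas above translate directly into code. The following Python sketch (illustrative function names, not part of the article) computes the coefficients a_j = [y_0, …, y_j] with the standard divided-difference recurrence and evaluates N(x) by nested multiplication:

```python
def divided_differences(xs, ys):
    """Return [a_0, ..., a_k] where a_j = [y_0, ..., y_j]."""
    coef = list(ys)
    for j in range(1, len(xs)):
        # Work bottom-up so coef[i - 1] still holds the previous column's value.
        for i in range(len(xs) - 1, j - 1, -1):
            coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])
    return coef

def newton_eval(coef, xs, x):
    """Evaluate N(x) by nested multiplication, analogous to Horner's scheme."""
    result = coef[-1]
    for j in range(len(coef) - 2, -1, -1):
        result = result * (x - xs[j]) + coef[j]
    return result
```

Evaluating from the innermost factor outward needs only k multiplications and k additions per point, just as Horner's scheme does for the monomial basis.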

General case

For the special case of x_i = i, there is a closely related family of polynomials, also called the Newton polynomials, that are simply the binomial coefficients regarded as functions of a general argument. That is, one also has the Newton polynomials p_n(z) given by

p_n(z)={z \choose n}= \frac{z(z-1)\cdots(z-n+1)}{n!}
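
As a quick illustration (a hypothetical helper, not from the article), p_n(z) can be computed for arbitrary real z directly from the product formula:

```python
from math import factorial

def newton_binomial(z, n):
    """p_n(z) = z(z-1)...(z-n+1) / n!, the binomial coefficient
    with a general (possibly non-integer) argument z."""
    prod = 1.0
    for i in range(n):
        prod *= z - i
    return prod / factorial(n)
```

For integer z ≥ n this reduces to the ordinary binomial coefficient; for other arguments it is the polynomial in z of degree n used to build Newton series.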

In this form, the Newton polynomials generate the Newton series. These are in turn a special case of the general difference polynomials which allow the representation of analytic functions through generalized difference equations.

Main idea

Solving an interpolation problem leads to a problem in linear algebra: we have to solve a system of linear equations. Using a standard monomial basis for our interpolation polynomial, we get the Vandermonde matrix, which is dense and often badly conditioned. By choosing the Newton basis instead, we get a much simpler lower triangular matrix, which can be solved much faster.

For k + 1 data points we construct the Newton basis as

n_j(x) := \prod_{i=0}^{j-1} (x - x_i) \qquad j=0,\ldots,k.

Using these polynomials as a basis for \Pi_k we have to solve

\begin{bmatrix}
      1 &         &                    &        & 0 \\
      1 & x_1-x_0 &                    &        &   \\
      1 & x_2-x_0 & (x_2-x_0)(x_2-x_1) &        &   \\
 \vdots & \vdots  &                    & \ddots &   \\
      1 & x_k-x_0 & \ldots             & \ldots & \prod_{j=0}^{k-1}(x_k - x_j)
\end{bmatrix}
\begin{bmatrix} a_0 \\ \vdots \\ a_k \end{bmatrix} =
\begin{bmatrix} y_0 \\ \vdots \\ y_k \end{bmatrix}

for the coefficients a_0, \ldots, a_k.

Because the matrix is lower triangular, the system can be solved successively by forward substitution:

\sum_{i=0}^{j} a_{i} n_{i}(x_j) = y_j \qquad j = 0,\dots,k.
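
The forward substitution can be sketched as follows (illustrative function name, not from the article): row j of the system gives a_j = (y_j − Σ_{i<j} a_i n_i(x_j)) / n_j(x_j), and each row uses only the coefficients already computed:

```python
def newton_coeffs_by_substitution(xs, ys):
    """Solve the lower triangular system row by row (forward substitution)."""
    a = []
    for j in range(len(xs)):
        def basis(i):
            # n_i(x_j) = (x_j - x_0)(x_j - x_1) ... (x_j - x_{i-1})
            prod = 1.0
            for m in range(i):
                prod *= xs[j] - xs[m]
            return prod
        residual = ys[j] - sum(a[i] * basis(i) for i in range(j))
        a.append(residual / basis(j))  # basis(0) is the empty product, 1
    return a
```

This direct substitution costs O(k²) products per row in the naive form shown; the divided-difference table computes the same coefficients with O(k²) operations in total.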

Application

As can be seen from the definition of the divided differences, new data points can be added to the data set to create a new interpolation polynomial without recalculating the old coefficients, and when a data point changes we usually do not have to recalculate all coefficients. Furthermore, if the x_i are distributed equidistantly, the calculation of the divided differences becomes significantly easier. Therefore the Newton form of the interpolation polynomial is usually preferred over the Lagrange form for practical purposes.
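
The first property can be illustrated with a small sketch (hypothetical helper, not from the article): appending a node (x_{k+1}, y_{k+1}) leaves the existing coefficients untouched and contributes a single new one, the divided difference [y_0, …, y_{k+1}]:

```python
def add_point(coef, xs, x_new, y_new):
    """Append one node; the old coefficients a_0, ..., a_k stay valid."""
    # Evaluate the current N(x_new) by nested multiplication.
    value = coef[-1]
    for j in range(len(coef) - 2, -1, -1):
        value = value * (x_new - xs[j]) + coef[j]
    # n_{k+1}(x_new) = (x_new - x_0) ... (x_new - x_k)
    weight = 1.0
    for xi in xs:
        weight *= x_new - xi
    # The new coefficient equals [y_0, ..., y_{k+1}].
    return coef + [(y_new - value) / weight], xs + [x_new]
```

For example, extending the quadratic interpolant of x³ at the nodes 0, 1, 2 (coefficients 0, 1, 3) with the node (3, 27) yields the extra coefficient 1, reproducing x³ exactly.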


Example

The divided differences can be written in the form of a table. For example, suppose a function f is to be interpolated at the points x_0, \ldots, x_n. Write

\begin{matrix}
   x_0 & f(x_0) &                                & \\
       &        & {f(x_1)-f(x_0)\over x_1 - x_0} & \\
   x_1 & f(x_1) &                                & {{f(x_2)-f(x_1)\over x_2 - x_1}-{f(x_1)-f(x_0)\over x_1 - x_0} \over x_2 - x_0} \\
       &        & {f(x_2)-f(x_1)\over x_2 - x_1} & \\
   x_2 & f(x_2) &                                & \vdots \\
       &        & \vdots                         & \\
\vdots &        &                                & \vdots \\
       &        & \vdots                         & \\
   x_n & f(x_n) &                                & \\
\end{matrix}

the interpolating polynomial is formed as above using the topmost entries in each column as coefficients.

For example, to construct the interpolating polynomial to f(x) = \tan x using divided differences at the points

x_0 = -1.5, \quad x_1 = -0.75, \quad x_2 = 0, \quad x_3 = 0.75, \quad x_4 = 1.5
f(x_0) = -14.1014, \quad f(x_1) = -0.931596, \quad f(x_2) = 0, \quad f(x_3) = 0.931596, \quad f(x_4) = 14.1014

we construct the table

\begin{matrix}
-1.5  & -14.1014  &         &          &         & \\
      &           & 17.5597 &          &         & \\
-0.75 & -0.931596 &         & -10.8784 &         & \\
      &           & 1.24213 &          & 4.83484 & \\
0     & 0         &         & 0        &         & 0 \\
      &           & 1.24213 &          & 4.83484 & \\
0.75  & 0.931596  &         & 10.8784  &         & \\
      &           & 17.5597 &          &         & \\
1.5   & 14.1014   &         &          &         & \\
\end{matrix}

Thus, the interpolating polynomial is

-14.1014 + 17.5597(x+1.5) - 10.8784(x+1.5)(x+0.75)
+ 4.83484(x+1.5)(x+0.75)(x) + 0(x+1.5)(x+0.75)(x)(x-0.75)
= -0.00005 - 1.4775x - 0.00001x^2 + 4.83484x^3

Neglecting minor numerical stability problems in evaluating the entries of the table, this polynomial is essentially the same as that obtained with the same function and data points in the Lagrange polynomial article.
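
The table can be checked numerically. The short Python sketch below (not part of the article) recomputes the divided-difference coefficients for f(x) = tan x at the five nodes and recovers the top diagonal of the table, to the four decimals shown:

```python
import math

xs = [-1.5, -0.75, 0.0, 0.75, 1.5]
ys = [math.tan(x) for x in xs]

# In-place divided-difference table: after all passes,
# coef[j] holds [y_0, ..., y_j], the topmost entry of column j.
coef = list(ys)
for j in range(1, len(xs)):
    for i in range(len(xs) - 1, j - 1, -1):
        coef[i] = (coef[i] - coef[i - 1]) / (xs[i] - xs[i - j])

# coef is approximately [-14.1014, 17.5597, -10.8784, 4.83484, 0]
```

Note that the last coefficient vanishes (up to rounding): tan is odd and the nodes are symmetric about 0, so the third-order divided differences on the two overlapping subsets coincide.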
