Polyharmonic spline

Polyharmonic splines are used for function approximation and data interpolation. They are very useful for interpolating and fitting scattered data in many dimensions. A special case is the thin plate spline.[1][2]

Definition

A polyharmonic spline is a linear combination of polyharmonic radial basis functions (RBFs) (denoted by \phi) plus an optional linear term:


f(\mathbf{x}) = \sum_{i=1}^N w_i \, \phi(|\mathbf{x} - \mathbf{c}_i|) + \mathbf{v}^{\textrm{T}} \begin{bmatrix} 1 \\ \mathbf{x} \end{bmatrix} \qquad (1)

where \mathbf{x} = [x_1, x_2, \dots, x_d] is a real-valued vector of d independent variables, \mathbf{c}_i = [c_{i,1}, c_{i,2}, \dots, c_{i,d}] are the N vectors of centers at which the data are given, \mathbf{w} = [w_1, w_2, \dots, w_N] are the N real-valued weights of the basis functions, and \mathbf{v} = [v_1, v_2, \dots, v_{d+1}] are the d+1 real-valued weights of the polynomial term.

Polyharmonic basis functions

The polyharmonic RBFs are of the form:

 
\begin{matrix}
  \phi(r) = \begin{cases}
                r^k & \mbox{with } k=1,3,5,\dots, \\
                r^k \ln(r) & \mbox{with } k=2,4,6,\dots
            \end{cases} \\[5mm]
   r = |\mathbf{x} - \mathbf{c}_i| 
    = \sqrt{ (\mathbf{x} - \mathbf{c}_i)^T \, (\mathbf{x} - \mathbf{c}_i) }.
 \end{matrix}

Other values of the exponent k are not useful (such as  \phi(r) = r^2 ), because a solution of the interpolation problem might not exist. To avoid problems at r=0 (since \log(0) = -\infty), the polyharmonic RBFs with the natural logarithm might be implemented as:


\phi(r) = \begin{cases}
             r^{k-1} \ln(r^r) & \mbox{for } r < 1 \\
             r^k \ln(r)       & \mbox{for } r \ge 1
          \end{cases}.
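
A minimal sketch of how such a basis function might be evaluated in practice, assuming NumPy (the function name polyharmonic_phi is illustrative, not from any particular library); it uses the convention \phi(0) = 0, which is the limit value implied by the r^{k-1}\ln(r^r) rewriting above:

```python
import numpy as np

def polyharmonic_phi(r, k):
    """Polyharmonic radial basis function phi(r) (illustrative sketch).

    phi(r) = r**k            for odd k = 1, 3, 5, ...
    phi(r) = r**k * log(r)   for even k = 2, 4, 6, ..., with phi(0) = 0,
    which is the limit value and matches the r**(k-1) * ln(r**r) form
    suggested in the text for r < 1.
    """
    r = np.asarray(r, dtype=float)
    if k % 2 == 1:
        return r**k
    # Avoid log(0) by substituting 1 where r == 0; the prefactor r**k is
    # 0 there, so the result is the correct limit value 0.
    safe_r = np.where(r > 0.0, r, 1.0)
    return r**k * np.log(safe_r)
```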

The weights w_i and v_j are determined such that the function interpolates N given points (\mathbf{c}_i, f_i) (for i=1,2,\dots,N) and fulfills the d+1 orthogonality conditions:

 
  \sum_{i=1}^N w_i=0, \;\; \sum_{i=1}^N w_i \mathbf{c}_i=\mathbf{0}.

All together, these constraints are equivalent to the following symmetric linear system of equations:

 
\begin{bmatrix} A & B \\ B^{\textrm{T}} & 0 \end{bmatrix} \begin{bmatrix} \mathbf{w} \\ \mathbf{v} \end{bmatrix} = \begin{bmatrix} \mathbf{f} \\ \mathbf{0} \end{bmatrix} \qquad (2)

where

 
  A_{i,j} =  \phi(|\mathbf{c}_i - \mathbf{c}_j|), \quad
  B =  
  \begin{bmatrix}
         1       &       1      & \cdots & 1 \\
    \mathbf{c}_1 & \mathbf{c}_2 & \cdots & \mathbf{c}_{N}
  \end{bmatrix}^{\textrm{T}}, \quad
  \mathbf{f}  = [f_1, f_2, \cdots, f_N]^{\textrm{T}}

Under very mild conditions (essentially, that the N centers do not all lie in a common hyperplane; e.g. for d=2, that at least 3 of the points are not on a straight line), the matrix of the linear system of equations is nonsingular and therefore a unique solution of the equation system exists. The computed weights allow evaluation of the spline for any \mathbf{x}\in\mathbb{R}^d using equation (1).
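
For concreteness, a minimal numerical sketch of assembling and solving system (2) with a dense solver, assuming NumPy (the function names fit_polyharmonic_spline and evaluate_spline are illustrative, not from any particular library):

```python
import numpy as np

def fit_polyharmonic_spline(centers, values, k=2):
    """Solve the symmetric block system (2) for the weights w and v.

    centers : (N, d) array of centers c_i
    values  : (N,) array of data f_i
    Returns (w, v) so that f(x) = sum_i w_i*phi(|x - c_i|) + v[0] + v[1:] @ x.
    Illustrative sketch only: no regularization, dense O(N^3) solve.
    """
    centers = np.asarray(centers, dtype=float)
    values = np.asarray(values, dtype=float)
    N, d = centers.shape

    def phi(r):
        if k % 2 == 1:
            return r**k
        safe_r = np.where(r > 0.0, r, 1.0)   # phi(0) = 0 for even k
        return r**k * np.log(safe_r)

    # A_ij = phi(|c_i - c_j|);  B has rows [1, c_i]
    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    A = phi(r)
    B = np.hstack([np.ones((N, 1)), centers])

    # Block system [[A, B], [B^T, 0]] [w; v] = [f; 0]
    M = np.block([[A, B], [B.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([values, np.zeros(d + 1)])
    sol = np.linalg.solve(M, rhs)
    return sol[:N], sol[N:]

def evaluate_spline(x, centers, w, v, k=2):
    """Evaluate equation (1) at a single point x (illustrative helper)."""
    x = np.asarray(x, dtype=float)
    r = np.linalg.norm(x - np.asarray(centers, dtype=float), axis=-1)
    safe_r = np.where(r > 0.0, r, 1.0)
    phi = r**k if k % 2 == 1 else r**k * np.log(safe_r)
    return w @ phi + v[0] + v[1:] @ x

if __name__ == "__main__":
    # Five non-collinear points in 2D, thin plate spline (k = 2)
    pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, 0.5]])
    vals = np.array([0.0, 1.0, 1.0, 2.0, 1.2])
    w, v = fit_polyharmonic_spline(pts, vals, k=2)
    print(evaluate_spline([0.25, 0.75], pts, w, v, k=2))
```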

Many practical details to implement and use polyharmonic splines are given in Fasshauer.[3] In Iske[4] polyharmonic splines are treated as special cases of other multiresolution methods in scattered data modelling.

Reason for the name "polyharmonic"

A polyharmonic equation is a partial differential equation of the form \Delta^m f = 0 for any natural number m. For example, the biharmonic equation is \Delta^2 f = 0 and the triharmonic equation is \Delta^3 f = 0. All the polyharmonic radial basis functions are solutions of a polyharmonic equation (or more accurately, a modified polyharmonic equation with a Dirac delta function on the right hand side instead of 0). For example, the thin plate radial basis function is a solution of the modified 2-dimensional biharmonic equation.[5]

Applying the 2D Laplace operator (\Delta = \partial_{xx} + \partial_{yy}) to the thin plate radial basis function f_{\text{tp}}(x,y) = (x^2+y^2) \log \sqrt{x^2+y^2} either by hand or using a computer algebra system shows that \Delta f_{\text{tp}} = 4 + 4\log r. Applying the Laplace operator to \Delta f_{\text{tp}} (this is \Delta^2 f_{\text{tp}}) apparently yields 0, but 0 is not exactly correct. To see this, replace r^2=x^2+y^2 with \rho^2 = x^2+y^2+h^2 (where h is some small number tending to 0). The Laplace operator applied to 4 \log \rho yields \Delta^2 f_{\text{tp}} = 8h^2 / \rho^4. For (x,y)=(0,0), the right hand side of this equation approaches infinity as h approaches 0. For any other (x,y), the right hand side approaches 0 as h approaches 0. This indicates that the right hand side is a Dirac delta function. A computer algebra system will show that

\lim_{h \to 0}\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} 8h^2/(x^2+y^2+h^2)^2 \operatorname{d}\!x \operatorname{d}\!y = 8\pi.

So the thin plate radial basis function is a solution of the equation \Delta^2 f_{\text{tp}} = 8\pi\delta(x,y).

Applying the 3D Laplacian (\Delta = \partial_{xx} + \partial_{yy} + \partial_{zz}) to the biharmonic RBF f_{\text{bi}}(x,y,z)=\sqrt{x^2+y^2+z^2} yields \Delta f_{\text{bi}} = 2/r and applying the 3D \Delta^2 operator to the triharmonic RBF f_{\text{tri}}(x,y,z) = (x^2+y^2+z^2)^{3/2} yields \Delta^2 f_{\text{tri}} = 24/r. Letting \rho^2 = x^2+y^2+z^2+h^2 and computing \Delta(1/\rho) = -3h^2 / \rho^5 again indicates that the right hand side of the PDEs for the biharmonic and triharmonic RBFs are Dirac delta functions. Since

\lim_{h \to 0}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}-3h^2/(x^2+y^2+z^2+h^2)^{5/2}\operatorname{d}\!x\operatorname{d}\!y\operatorname{d}\!z = -4\pi,

the exact PDEs satisfied by the biharmonic and triharmonic RBFs are \Delta^2 f_{\text{bi}} = -8\pi\delta(x,y,z) and \Delta^3 f_{\text{tri}} = -96\pi\delta(x,y,z).
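
The symbolic computations in this section can be reproduced with a computer algebra system; the following SymPy sketch (variable names are illustrative) checks the 2D thin plate and 3D biharmonic identities, evaluating the integrals in polar and spherical coordinates:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
h, rho = sp.symbols('h rho', positive=True)

# 2D thin plate RBF: Laplacian of (x^2 + y^2) * log(sqrt(x^2 + y^2))
f_tp = (x**2 + y**2) * sp.log(sp.sqrt(x**2 + y**2))
lap_tp = sp.simplify(sp.diff(f_tp, x, 2) + sp.diff(f_tp, y, 2))
print(lap_tp)            # simplifies to 4 + 2*log(x**2 + y**2) = 4 + 4*log(r)

# Regularized second Laplacian: Delta applied to 4*log(sqrt(x^2 + y^2 + h^2))
g_tp = 4 * sp.log(sp.sqrt(x**2 + y**2 + h**2))
lap_g_tp = sp.simplify(sp.diff(g_tp, x, 2) + sp.diff(g_tp, y, 2))
print(lap_g_tp)          # 8*h**2/(x**2 + y**2 + h**2)**2

# Its integral over R^2, written in polar coordinates, is 8*pi for every h > 0
print(sp.integrate(2 * sp.pi * rho * 8 * h**2 / (rho**2 + h**2)**2,
                   (rho, 0, sp.oo)))          # 8*pi

# 3D biharmonic RBF: Laplacian of sqrt(x^2 + y^2 + z^2) is 2/r
f_bi = sp.sqrt(x**2 + y**2 + z**2)
print(sp.simplify(sp.diff(f_bi, x, 2) + sp.diff(f_bi, y, 2)
                  + sp.diff(f_bi, z, 2)))     # 2/sqrt(x**2 + y**2 + z**2)

# Regularized: Delta of 1/sqrt(x^2 + y^2 + z^2 + h^2) is -3*h**2/rho**5 ...
g_bi = 1 / sp.sqrt(x**2 + y**2 + z**2 + h**2)
print(sp.simplify(sp.diff(g_bi, x, 2) + sp.diff(g_bi, y, 2)
                  + sp.diff(g_bi, z, 2)))     # -3*h**2/(x**2+y**2+z**2+h**2)**(5/2)

# ... and its integral over R^3, in spherical coordinates, is -4*pi
print(sp.integrate(4 * sp.pi * rho**2 * (-3) * h**2
                   / (rho**2 + h**2)**sp.Rational(5, 2),
                   (rho, 0, sp.oo)))          # -4*pi
```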

Polyharmonic smoothing splines

Polyharmonic splines minimize

\sum_{i=1}^N (f(\mathbf{c}_i) - f_i)^2 + \lambda \int\limits_{\mathcal{B} \subset \mathbb{R}^d} |\nabla^m f|^2 \operatorname{d}\!\mathbf{x} \qquad (3)

where \mathcal{B} is some box in \mathbb{R}^d containing a neighborhood of all the centers, \lambda is some positive constant, and \nabla^m f is the vector of all mth order partial derivatives of f. For example, in 2D \nabla^1 f = (f_x\ f_y) and \nabla^2 f = (f_{xx} \ f_{xy} \ f_{yx} \ f_{yy}) and in 3D \nabla^2 f = (f_{xx} \ f_{xy} \ f_{xz} \ f_{yx} \ f_{yy} \ f_{yz} \ f_{zx} \ f_{zy} \ f_{zz}). In 2D |\nabla^2 f|^2 = f_{xx}^2 + 2f_{xy}^2 + f_{yy}^2, making the integral the simplified thin plate energy functional.

To show that polyharmonic splines minimize equation (3), the fitting term must be transformed into an integral using the definition of the Dirac delta function:

\sum_{i=1}^N (f(\mathbf{c}_i) - f_i)^2 = \int\limits_{\mathcal{B}}\sum_{i=1}^N (f(\mathbf{x}) - f_i)^2 \delta(\mathbf{x} - \mathbf{c}_i) \operatorname{d}\!\mathbf{x}.

So equation (3) can be written as the functional

J[f] = \int\limits_{\mathcal{B}} F(\mathbf{x},f, \partial^{\alpha_1}f, \partial^{\alpha_2}f, \dots \partial^{\alpha_n}f ) \operatorname{d}\!\mathbf{x} = \int\limits_{\mathcal{B}} \left[ \sum_{i=1}^N (f(\mathbf{x}) - f_i)^2 \delta(\mathbf{x} - \mathbf{c}_i) + \lambda |\nabla^m f|^2 \right] \operatorname{d}\!\mathbf{x}.

where \alpha_i is a multi-index that ranges over all n = d^m partial derivatives of order m for \mathbb{R}^d. In order to apply the Euler-Lagrange equation for a single function of multiple variables and higher order derivatives, the quantities

{\partial F \over\partial f} = 2\sum_{i=1}^N (f(\mathbf{x}) - f_i) \delta(\mathbf{x} - \mathbf{c}_i)

and

\sum_{i=1}^{n} \partial^{\alpha_i} {\partial F \over\partial (\partial^{\alpha_i}f)} = 2\lambda \Delta^m f

are needed. Inserting these quantities into the E-L equation shows that

\left( \sum_{i=1}^N (f(\mathbf{x}) - f_i) \delta(\mathbf{x} - \mathbf{c}_i) \right) + (-1)^{m} \lambda \Delta^m f = 0 \qquad (4)

Let f be a polyharmonic spline as defined by equation (1). The following calculations will show that f satisfies (4). Applying the \Delta^m operator to equation (1) yields

 \Delta^m f = \sum_{i=1}^N w_i C_{m,d} \delta(\mathbf{x} - \mathbf{c}_i)

where C_{2,2} = 8\pi, C_{2,3}=-8\pi, and C_{3,3}=-96\pi. So

\sum_{i=1}^N \delta(\mathbf{x} - \mathbf{c}_i) \left( f(\mathbf{x}) - f_i + (-1)^m\lambda C_{m,d} w_i \right) = 0 \qquad (5)

is equivalent to (4). Setting

f(\mathbf{c}_j) - f_j + (-1)^m\lambda C_{m,d} w_j = 0 \quad \text{for} \ j=1,2,\dots,N \qquad (6)

(which implies interpolation if \lambda=0) solves equation (5) and hence solves (4). Combining the definition of  f in equation (1) with equation (6) results in almost the same linear system as equation (2) except that the matrix  A is replaced with  A + (-1)^m C_{m,d}\lambda I where  I is the  N\times N identity matrix. For example, for the 3D triharmonic RBFs, A is replaced with A + 96\pi\lambda I.
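
As a sketch of how small the change is in code, the following assumes the 2D thin plate case (m = 2, d = 2, so (-1)^m C_{m,d} = 8\pi); the function name is illustrative, not from any particular library:

```python
import numpy as np

def fit_thin_plate_smoothing_spline(centers, values, lam=1e-3):
    """2D thin plate smoothing spline (m = 2, d = 2, phi(r) = r^2 log r).

    Same block system as in the interpolation case, except that A is
    replaced by A + (-1)^m * C_{m,d} * lam * I = A + 8*pi*lam*I.
    lam = 0 reproduces exact interpolation.  Illustrative sketch only.
    """
    centers = np.asarray(centers, dtype=float)
    values = np.asarray(values, dtype=float)
    N, d = centers.shape
    assert d == 2, "this sketch hard-codes the 2D thin plate constant 8*pi"

    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    safe_r = np.where(r > 0.0, r, 1.0)
    A = r**2 * np.log(safe_r)
    A += 8.0 * np.pi * lam * np.eye(N)          # smoothing (ridge-like) term

    B = np.hstack([np.ones((N, 1)), centers])
    M = np.block([[A, B], [B.T, np.zeros((d + 1, d + 1))]])
    rhs = np.concatenate([values, np.zeros(d + 1)])
    sol = np.linalg.solve(M, rhs)
    return sol[:N], sol[N:]
```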

Explanation of additional constraints

In (2), the bottom half of the system of equations (B^{\textrm{T}}\mathbf{w} = 0) is given without explanation. The explanation first requires deriving a simplified form of \textstyle{ \int_{\mathcal{B}} |\nabla^m f|^2 \operatorname{d}\!\mathbf{x}} when \mathcal{B} is all of \mathbb{R}^d.

First, require that \textstyle{ \sum_{i=1}^N w_i =0. } This ensures that all derivatives of order m and higher of \textstyle{ f(\mathbf{x}) = \sum_{i=1}^N w_i \phi(|\mathbf{x} - \mathbf{c}_i|) } vanish at infinity. For example, let m=3 and d=3 and \phi be the triharmonic RBF. Then \phi_{zzy} = 3y(x^2+y^2) / (x^2+y^2+z^2)^{3/2} (considering \phi as a mapping from \mathbb{R}^3 to \mathbb{R}). For a given center \mathbf{P} = (P_1,P_2,P_3),

\phi_{zzy}(\mathbf{x} - \mathbf{P}) = \frac{3(y-P_2)((y-P_2)^2 + (x-P_1)^2)}{((x-P_1)^2 + (y-P_2)^2 + (z-P_3)^2)^{3/2}}.

On a line \mathbf{x} = \mathbf{a} + t\mathbf{b} for arbitrary point \mathbf{a} and unit vector \mathbf{b},

\phi_{zzy}(\mathbf{x} - \mathbf{P}) = \frac{3(a_2+b_2t - P_2)((a_2+b_2t-P_2)^2 + (a_1+b_1t-P_1)^2)}{((a_1+b_1t-P_1)^2 + (a_2+b_2t-P_2)^2 + (a_3+b_3t-P_3)^2)^{3/2}}.

Dividing both the numerator and the denominator by t^3 shows that \textstyle { \lim_{t \to \infty} \phi_{zzy}(\mathbf{x}-\mathbf{P}) = 3b_2(b_2^2 + b_1^2) / (b_1^2 + b_2^2 + b_3^2)^{3/2}}, a quantity independent of the center \mathbf{P}. So on the given line,

 \lim_{t\rightarrow\infty} f_{zzy}(\mathbf{x}) = \lim_{t\rightarrow\infty}\sum_{i=1}^N w_i \phi_{zzy}(\mathbf{x} - \mathbf{c}_i) = \left(\sum_{i=1}^N w_i\right)3b_2(b_2^2 + b_1^2) / (b_1^2 + b_2^2 + b_3^2)^{3/2} = 0.

It is not quite enough to require that \textstyle{ \sum_{i=1}^N w_i =0, } because in what follows it is necessary for f_{\alpha}g_{\beta} to vanish at infinity, where \alpha and \beta are multi-indices such that |\alpha|+|\beta|=2m-1. For triharmonic \phi, w_i u_j\phi_{\alpha}(\mathbf{x}-\mathbf{c}_i)\phi_{\beta}(\mathbf{x} - \mathbf{d}_j) (where u_j and \mathbf{d}_j are the weights and centers of g) is always a sum of total degree 5 polynomials in x, y, and z divided by the square root of a total degree 8 polynomial. Consider the behavior of these terms on the line \mathbf{x} = \mathbf{a} + t\mathbf{b} as t approaches infinity. The numerator is a degree 5 polynomial in t. Dividing numerator and denominator by t^4 leaves the degree 4 and 5 terms in the numerator and a function of \mathbf{b} only in the denominator. A degree 5 term divided by t^4 is a product of five b coordinates and t. The \textstyle{ \sum w = 0} (and \textstyle{\sum u=0}) constraint makes this vanish everywhere on the line. A degree 4 term divided by t^4 is either a product of four b coordinates and an a coordinate or a product of four b coordinates and a single c_i or d_j coordinate. The \textstyle{ \sum w = 0} constraint makes the first type of term vanish everywhere on the line. The additional constraints \textstyle{ \sum_{i=1}^N w_i \mathbf{c}_i = 0 } will make the second type of term vanish.

Now define the inner product of two functions f,g:\mathbb{R}^d \rightarrow \mathbb{R}, each a linear combination of polyharmonic RBFs \phi_{m,d} with weights summing to 0, as

<f,g> = \int_{\mathbb{R}^d} (\nabla^m f)\cdot (\nabla^m g) \operatorname{d}\!\mathbf{x}.

Integration by parts shows that

<f,g> = (-1)^m \int_{\mathbb{R}^d} f \, (\Delta^m g) \operatorname{d}\!\mathbf{x} \qquad (7)

For example, let  m=2  and  d=2.  Then

<f,g> = \int_{-\infty}^{\infty} \int_{-\infty}^{\infty} (f_{xx}g_{xx} + 2f_{xy}g_{xy} + f_{yy}g_{yy})\operatorname{d}\!x \operatorname{d}\!y \qquad (8)

Integrating the first term of this by parts once yields

\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} f_{xx}g_{xx} \operatorname{d}\!x \operatorname{d}\!y = \int_{-\infty}^{\infty} f_x g_{xx}\big|_{-\infty}^{\infty}\operatorname{d} \! y - \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_x g_{xxx}\operatorname{d}\!x \operatorname{d}\!y = - \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_x g_{xxx}\operatorname{d}\!x \operatorname{d}\!y

since f_x g_{xx} vanishes at infinity. Integrating by parts again results in \textstyle{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f g_{xxxx}\operatorname{d}\!x \operatorname{d}\!y.}

So integrating by parts twice for each term of (8) yields

 <f,g> = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f (g_{xxxx} + 2g_{xxyy} + g_{yyyy})\operatorname{d}\!x \operatorname{d}\!y = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f (\Delta^2 g)\operatorname{d}\!x \operatorname{d}\!y.

Since \textstyle{ (\Delta^m f)(\mathbf{x}) = \sum_{i=1}^N w_i C_{m,d}\,\delta(\mathbf{x} - \mathbf{c}_i), } (7) shows that

 
\begin{align}
<f,f> &= (-1)^m \int_{\mathbb{R}^d} f(\mathbf{x}) \sum_{i=1}^N w_i C_{m,d}\,\delta(\mathbf{x} - \mathbf{c}_i) \operatorname{d}\!\mathbf{x}
   = (-1)^m C_{m,d} \sum_{i=1}^N w_i f(\mathbf{c}_i) \\
      &= (-1)^m C_{m,d} \sum_{i=1}^N \sum_{j=1}^N w_i w_j \phi(|\mathbf{c}_i - \mathbf{c}_j|) = (-1)^m C_{m,d} \mathbf{w}^{\textrm{T}} A \mathbf{w}. 
\end{align}

So if \textstyle{ \sum_{i=1}^N w_i = 0}, \textstyle{ \sum_{i=1}^N w_i \mathbf{c}_i = \mathbf{0}}, and \mathcal{B} = \mathbb{R}^d,


\int_{\mathcal{B}} |\nabla^m f|^2 \operatorname{d}\!\mathbf{x} = (-1)^m C_{m,d} \mathbf{w}^{\textrm{T}} A \mathbf{w} \qquad (9)

Now the origin of the constraints B^{\textrm{T}}\mathbf{w} = 0 can be explained. Here B is a generalization of the B defined above to possibly include monomials up to degree m-1. In other words,

B=\begin{bmatrix} 1 & 1 & \dots & 1\\ \mathbf{c}_1 & \mathbf{c}_2 & \dots & \mathbf{c}_N \\ 
\vdots & \vdots & \dots & \vdots \\
\mathbf{c}_1^{m-1} & \mathbf{c}_2^{m-1} & \dots & \mathbf{c}_N^{m-1}
\end{bmatrix} ^ {\textrm{T}}

where \mathbf{c}_i^j is a column vector of all degree j monomials of the coordinates of \mathbf{c}_i. The top half of (2) is equivalent to A\mathbf{w} + B\mathbf{v} - \mathbf{f} = 0. So to obtain a smoothing spline, one should minimize the scalar field F:\mathbb{R}^{N+d+1}\rightarrow \mathbb{R} defined by


F(\mathbf{w}, \mathbf{v}) = |A\mathbf{w} + B\mathbf{v} - \mathbf{f}|^2 + \lambda C \mathbf{w}^{\textrm{T}} A \mathbf{w}, \qquad \text{where } C = (-1)^m C_{m,d}.

The equations


\frac{\partial F}{\partial w_i} = 2 A_{i*} (A\mathbf{w} + B\mathbf{v} - \mathbf{f}) + 2\lambda C A_{i*}\mathbf{w}=0 \quad 
\textrm{for} \ i=1,2,\dots,N

and


\frac{\partial F}{\partial v_i} = 2 B^{\textrm{T}}_{i*} (A\mathbf{w} + B\mathbf{v} - \mathbf{f})=0 \quad
\textrm{for} \ i=1,2,\dots,d+1

(where A_{i*} denotes row i of A) are equivalent to the two systems of linear equations  A(A\mathbf{w} + B\mathbf{v} - \mathbf{f} +\lambda C \mathbf{w}) = 0 and  B^{\textrm{T}}(A\mathbf{w} + B\mathbf{v} - \mathbf{f}) = 0. Since A is invertible, the first system is equivalent to  A\mathbf{w} + B\mathbf{v} - \mathbf{f} +\lambda C \mathbf{w} = 0. Given the first system, the second system is therefore equivalent to B^{\textrm{T}}\mathbf{w} = 0. Just as in the previous smoothing spline coefficient derivation, the top half of (2) becomes (A+\lambda C I)\mathbf{w} + B\mathbf{v} = \mathbf{f}.

This derivation of the polyharmonic smoothing spline equation system did not assume the constraints necessary to guarantee that \textstyle{ \int_{\mathbb{R}^d} |\nabla^m f|^2 \operatorname{d}\!\mathbf{x} = C \mathbf{w}^{\textrm{T}}A\mathbf{w}}. But the constraints necessary to guarantee this, \textstyle{ \sum w = 0} and \textstyle{ \sum w \mathbf{c} = 0 }, are a subset of B^{\textrm{T}}\mathbf{w}=0, which holds at the critical point \mathbf{w} of F. So \textstyle{ \int_{\mathbb{R}^d} |\nabla^m f|^2 \operatorname{d}\!\mathbf{x} = C \mathbf{w}^{\textrm{T}}A\mathbf{w}} is true for the f formed from the solution of the polyharmonic smoothing spline equation. Because the integral is positive for all \mathbf{w}\neq 0, the linear transformation obtained by restricting the domain of the linear transformation A to the vectors \mathbf{w} with B^{\textrm{T}}\mathbf{w} = 0 must be positive definite. This fact enables transforming the polyharmonic smoothing spline equation system into a symmetric positive definite system of equations that can be solved twice as fast using the Cholesky decomposition.[5]
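
One common way to realize such a reduction (sketched here for the 2D thin plate case, where the restriction is indeed positive definite; this is an illustrative sketch, not necessarily the scheme of Powell [5]) is to parametrize the constraint B^{\textrm{T}}\mathbf{w} = 0 with a complete QR factorization of B and to factor the reduced matrix with a Cholesky decomposition:

```python
import numpy as np

def fit_thin_plate_spline_spd(centers, values, lam=0.0):
    """Thin plate (smoothing) spline via a reduced SPD system (sketch).

    2D thin plate case (m = 2, d = 2): w^T A w > 0 whenever B^T w = 0 and
    w != 0, so the reduced matrix Q2^T (A + 8*pi*lam*I) Q2 is symmetric
    positive definite and can be factored with a Cholesky decomposition.
    """
    centers = np.asarray(centers, dtype=float)
    f = np.asarray(values, dtype=float)
    N, d = centers.shape

    r = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    safe_r = np.where(r > 0.0, r, 1.0)
    A = r**2 * np.log(safe_r) + 8.0 * np.pi * lam * np.eye(N)
    B = np.hstack([np.ones((N, 1)), centers])           # N x (d+1)

    # Complete QR of B: the columns of Q2 span the null space of B^T,
    # so w = Q2 @ gamma satisfies the constraint B^T w = 0 automatically.
    Q, R = np.linalg.qr(B, mode='complete')
    Q1, Q2, R1 = Q[:, :d + 1], Q[:, d + 1:], R[:d + 1, :]

    K = Q2.T @ A @ Q2
    K = 0.5 * (K + K.T)                                  # symmetrize (roundoff)
    L = np.linalg.cholesky(K)                            # K is SPD here
    gamma = np.linalg.solve(L.T, np.linalg.solve(L, Q2.T @ f))
    w = Q2 @ gamma

    # Recover v from the first block row: B v = f - A w (R1 is invertible
    # as long as the centers are not collinear).
    v = np.linalg.solve(R1, Q1.T @ (f - A @ w))
    return w, v
```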

Examples

The first figure shows the interpolation through four points (marked by circles) using different types of polyharmonic splines. The "curvature" of the interpolated curves grows with the order of the spline, and the extrapolation at the left boundary (x < 0) is reasonable. The figure also includes the radial basis function phi = exp(-r^2), which gives a good interpolation as well. Finally, the figure includes the non-polyharmonic basis function phi = r^2 to demonstrate that this radial basis function is not able to pass through the predefined points (the linear system has no solution and is solved in a least squares sense).

The next figure shows the same interpolation as in the first figure, except that the points to be interpolated are scaled by a factor of 100 (and the case phi = r^2 is no longer included). Since phi(scale·r) = (scale·r)^k = scale^k · r^k for the splines with odd exponent, the factor scale^k can be extracted from the matrix A of the linear equation system, and therefore the solution is not influenced by the scaling. This is different for the logarithmic form of the spline, although the scaling has little influence. This analysis is reflected in the figure, where the interpolations show little difference. Note that for other radial basis functions, such as phi = exp(-k·r^2) with k = 1, the interpolation is no longer reasonable and it would be necessary to adapt k.
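
To make the scaling argument explicit for the logarithmic form as well: for a scale factor s > 0,

\phi(s\,r) = (s r)^k \ln(s r) = s^k\, r^k \ln r + (s^k \ln s)\, r^k,

so, unlike the odd-exponent case, the scaled matrix is not simply a multiple of the original one. The extra term (s^k \ln s)\, r^k is why the logarithmic splines need not be exactly scale-invariant, although the figure indicates that the effect is small.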

The next figure shows the same interpolation as in the first figure, except that the polynomial term of the function is not taken into account (and the case phi = r^2 is no longer included). As can be seen from the figure, the extrapolation for x < 0 is no longer as "natural" as in the first figure for some of the basis functions. This indicates that the polynomial term is useful if extrapolation occurs.

Discussion

The main advantage of polyharmonic spline interpolation is that usually very good interpolation results are obtained for scattered data without performing any "tuning", so automatic interpolation is feasible. This is not the case for other radial basis functions. For example, the Gaussian function e^{-k\cdot r^2} needs to be tuned: k must be selected according to the underlying grid of the independent variables. If this grid is non-uniform, a proper selection of k to achieve a good interpolation result is difficult or impossible.

Main disadvantages are the cost of determining the weights, which requires solving a dense linear system (O(N^2) memory and roughly O(N^3) operations with a direct solver) and becomes impractical for large N, and the cost of evaluating the resulting spline, which is O(N) per evaluation point.

Recently, methods have been developed to overcome these difficulties. For example, Beatson et al.[6] present a method to evaluate a polyharmonic spline at a single point in 3 dimensions in O(log(N)) operations instead of O(N).


References

  1. R.L. Harder and R.N. Desmarais: Interpolation using surface splines. Journal of Aircraft, 1972, Issue 2, pp. 189-191
  2. J. Duchon: Splines minimizing rotation-invariant semi-norms in Sobolev spaces. Constructive Theory of Functions of Several Variables, W. Schempp and K. Zeller (eds), Springer, Berlin, pp. 85-100
  3. G.F. Fasshauer: Meshfree Approximation Methods with MATLAB. World Scientific Publishing Company, 2007, ISBN-10: 9812706348
  4. A. Iske: Multiresolution Methods in Scattered Data Modelling, Lecture Notes in Computational Science and Engineering, 2004, Vol. 37, ISBN 3-540-20479-2, Springer-Verlag, Heidelberg.
  5. Powell, M. J. D. (1993). "Some algorithms for thin plate spline interpolation to functions of two variables" (PDF). Cambridge University Dept. of Applied Mathematics and Theoretical Physics technical report. Retrieved January 7, 2016.
  6. R.K. Beatson, M.J.D. Powell, and A.M. Tan: Fast evaluation of polyharmonic splines in three dimensions. IMA Journal of Numerical Analysis, 2007, 27, pp. 427-450.