Numerical smoothing and differentiation

Introduction

An experimental datum can be conceptually described as the sum of a signal and some noise, but in practice the two contributions cannot be separated. The purpose of smoothing is to increase the signal-to-noise ratio without greatly distorting the signal. One way to achieve this is to fit successive sets of m data points to a polynomial of degree less than m by the method of linear least squares. Once the coefficients of the smoothing polynomial have been calculated, they can be used to give estimates of the signal or of its derivatives.

Convolution coefficients

When the data points are equally spaced a relatively simple analytical solution to the least-squares equations can be found. This solution forms the basis of the convolution method of numerical smoothing and differentiation.

Suppose that the data consist of a set of n points {x_i, y_i} (i = 1...n), where x is the independent variable and y_i is an observed value. A polynomial will be fitted to a set of m (an odd number) adjacent data points, separated by an interval h. Firstly, a change of variable is made

z = \frac{x - \bar x}{h}

where \bar x is the value of the central point. z takes the values (1 - m)/2 ... 0 ... (m - 1)/2. The polynomial, of degree k, is defined as

Y = a_0 + a_1 z + a_2 z^2 + \cdots + a_k z^k

The coefficients a_0, a_1, etc. are obtained by solving the normal equations

{\mathbf{a}} = \left( {\mathbf{J}}^{\mathbf{T}} {\mathbf{J}} \right)^{-1} {\mathbf{J}}^{\mathbf{T}} {\mathbf{y}}

where the i-th row of the Jacobian matrix J has the values {1, z_i, z_i^2, ..., z_i^k}. For example, for a quadratic polynomial fitted to 5 points

{\mathbf{J}}^{\mathbf{T}} = \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ -2 & -1 & 0 & 1 & 2 \\ 4 & 1 & 0 & 1 & 4 \end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} 5 & 0 & 10 \\ 0 & 10 & 0 \\ 10 & 0 & 34 \end{pmatrix}^{-1} \begin{pmatrix} 1 & 1 & 1 & 1 & 1 \\ -2 & -1 & 0 & 1 & 2 \\ 4 & 1 & 0 & 1 & 4 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{pmatrix}
\begin{pmatrix} a_0 \\ a_1 \\ a_2 \end{pmatrix} = \begin{pmatrix} -3/35 & 12/35 & 17/35 & 12/35 & -3/35 \\ -2/10 & -1/10 & 0 & 1/10 & 2/10 \\ 2/14 & -1/14 & -2/14 & -1/14 & 2/14 \end{pmatrix} \begin{pmatrix} y_1 \\ y_2 \\ y_3 \\ y_4 \\ y_5 \end{pmatrix}

In this example, a_0 = (-3y_1 + 12y_2 + 17y_3 + 12y_4 - 3y_5)/35. This is the smoothed value for the central point (z = 0) of the five data points used in the calculation. The coefficients (-3 12 17 12 -3)/35 are known as convolution coefficients, as they are applied in succession to sets of m points at a time:

Y_j = C_1 y_{j-(m-1)/2} + \cdots + C_{(m+1)/2}\, y_j + \cdots + C_m\, y_{j+(m-1)/2}
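
This calculation is easily reproduced in a few lines of code. The following is a minimal sketch, assuming NumPy; the function name and the noisy test series are illustrative only, not part of any published program. It rebuilds the matrix (J^T J)^{-1} J^T for the 5-point quadratic case and recovers the coefficients given above.

    import numpy as np

    def convolution_coefficients(m, k):
        """Rows of (J^T J)^-1 J^T: row 0 holds the smoothing (a0)
        coefficients for an m-point fit (m odd) of degree k."""
        z = np.arange(-(m - 1) // 2, (m - 1) // 2 + 1)  # z = (1-m)/2 .. (m-1)/2
        J = z[:, None] ** np.arange(k + 1)              # i-th row: 1, z_i, ..., z_i^k
        return np.linalg.solve(J.T @ J, J.T)            # rows give a0, a1, ..., a_k

    C = convolution_coefficients(5, 2)
    print(C[0] * 35)                                    # [-3. 12. 17. 12. -3.]

    # Apply the a0 row to successive sets of 5 points of a noisy series.
    rng = np.random.default_rng(0)
    y = np.sin(np.linspace(0, 10, 101)) + rng.normal(scale=0.1, size=101)
    y_smooth = np.convolve(y, C[0][::-1], mode="valid") # smoothed central points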


Tables of convolution coefficients were published by Savitzky and Golay in 1964, though the procedure for calculating them was known in the 19th century (see E. T. Whittaker and G. Robinson, The Calculus of Observations).

The numerical derivatives are obtained by differentiating Y. For a cubic polynomial

\frac{dY}{dx} = \frac{1}{h}\left( a_1 + 2a_2 z + 3a_3 z^2 \right) = \frac{1}{h}a_1 \text{ at } z = 0
\frac{d^2 Y}{dx^2} = \frac{1}{h^2}\left( 2a_2 + 6a_3 z \right) = \frac{2}{h^2}a_2 \text{ at } z = 0
\frac{d^3 Y}{dx^3} = \frac{6}{h^3}a_3
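
As a worked check, the rows of the matrix (J^T J)^{-1} J^T supply the derivative coefficients after scaling by these factors. This is a sketch under the same NumPy assumption; the 5-point cubic case and the spacing h are chosen for illustration.

    import numpy as np

    h = 0.1                                  # assumed spacing of the x values
    z = np.arange(-2, 3)                     # 5 points, z = -2 .. 2
    J = z[:, None] ** np.arange(4)           # cubic: columns 1, z, z^2, z^3
    C = np.linalg.solve(J.T @ J, J.T)        # rows are the a0 .. a3 coefficients

    print(C[1] * 12)                         # [ 1. -8.  0.  8. -1.]
    d1 = C[1] / h                            # first derivative at z = 0
    d2 = 2 * C[2] / h**2                     # second derivative at z = 0
    d3 = 6 * C[3] / h**3                     # third derivative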

It is not always necessary to use the Savitzky-Golay tables, as algebraic formulae can be derived for the convolution coefficients. For a cubic polynomial the expressions, which give the polynomial coefficients a_0 ... a_3, are

C_{0j} = \frac{\left( 3m^2 - 7 - 20j^2 \right)/4}{m\left( m^2 - 4 \right)/3}
C_{1j} = \frac{5\left( 3m^4 - 18m^2 + 31 \right)j - 28\left( 3m^2 - 7 \right)j^3}{m\left( m^2 - 1 \right)\left( 3m^4 - 39m^2 + 108 \right)/15}
C_{2j} = \frac{12mj^2 - m\left( m^2 - 1 \right)}{m^2\left( m^2 - 1 \right)\left( m^2 - 4 \right)/15}
C_{3j} = \frac{-\left( 3m^2 - 7 \right)j + 20j^3}{m\left( m^2 - 1 \right)\left( 3m^4 - 39m^2 + 108 \right)/420}
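
These expressions can be checked against the direct least-squares solution; the sketch below (assuming NumPy, with illustrative function names) confirms agreement for a 7-point cubic fit.

    import numpy as np

    def c0(m, j):   # a0: smoothed value
        return ((3*m**2 - 7 - 20*j**2) / 4) / (m*(m**2 - 4) / 3)

    def c1(m, j):   # a1
        return (5*(3*m**4 - 18*m**2 + 31)*j - 28*(3*m**2 - 7)*j**3) / \
               (m*(m**2 - 1)*(3*m**4 - 39*m**2 + 108) / 15)

    def c2(m, j):   # a2
        return (12*m*j**2 - m*(m**2 - 1)) / (m**2*(m**2 - 1)*(m**2 - 4) / 15)

    def c3(m, j):   # a3
        return (-(3*m**2 - 7)*j + 20*j**3) / \
               (m*(m**2 - 1)*(3*m**4 - 39*m**2 + 108) / 420)

    m = 7
    j = np.arange(-(m - 1)//2, (m - 1)//2 + 1)
    J = j[:, None] ** np.arange(4)                   # cubic design matrix
    C = np.linalg.solve(J.T @ J, J.T)                # direct solution
    for k, c in enumerate((c0, c1, c2, c3)):
        print(np.allclose(c(m, j), C[k]))            # True, four times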

Signal distortion and noise reduction

It is inevitable that the signal will be distorted in the convolution process. Both the extent of the distortion and the improvement in signal-to-noise ratio:

  • decrease as the degree of the polynomial increases
  • increase as the width, m, of the convolution function increases

For example, if the noise in all data points has a constant standard deviation, σ, smoothing with an m-point linear polynomial reduces the standard deviation of the noise to \sqrt{\frac{1}{m}}\ \sigma, but with a quadratic polynomial it is reduced only to approximately \sqrt{\frac{9}{4m}}\ \sigma. So, for a 9-point quadratic smooth only about half the noise is removed.
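
Since the smoothed value is a fixed linear combination of observations with independent noise, its standard deviation is \sqrt{\sum C_i^2}\ \sigma, so the factors quoted above can be verified numerically. A sketch assuming NumPy, with an illustrative helper function:

    import numpy as np

    def a0_coefficients(m, k):
        z = np.arange(-(m - 1) // 2, (m - 1) // 2 + 1)
        J = z[:, None] ** np.arange(k + 1)
        return np.linalg.solve(J.T @ J, J.T)[0]

    m = 9
    lin, quad = a0_coefficients(m, 1), a0_coefficients(m, 2)
    print(np.sqrt(np.sum(lin**2)),  np.sqrt(1 / m))       # 0.333..  0.333..
    print(np.sqrt(np.sum(quad**2)), np.sqrt(9 / (4*m)))   # ~0.505   0.5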

Frequency characteristics of convolution filters

Convolution maps to multiplication in the Fourier co-domain. The (finite) Fourier transform of a convolution filter shows that it attenuates high frequencies most strongly, so it can be described as a low-pass filter: it is most effective against high-frequency noise, and the noise that is not removed is primarily low-frequency noise.
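
This behaviour can be seen by evaluating the transfer function H(f) = \sum_j C_j e^{-2\pi i f j} of a coefficient set. A minimal sketch, assuming NumPy and using the 9-point quadratic filter for illustration:

    import numpy as np

    def a0_coefficients(m, k):
        z = np.arange(-(m - 1) // 2, (m - 1) // 2 + 1)
        J = z[:, None] ** np.arange(k + 1)
        return np.linalg.solve(J.T @ J, J.T)[0]

    C = a0_coefficients(9, 2)
    zz = np.arange(-4, 5)                        # positions within the window
    for f in (0.0, 0.1, 0.2, 0.3, 0.4, 0.5):     # frequency, cycles per point
        H = np.sum(C * np.exp(-2j * np.pi * f * zz))
        print(f, abs(H))                         # |H| = 1 at f = 0, small near f = 0.5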

Applications

  • Smoothing by convolution is performed primarily for aesthetic reasons.
  • Location of maxima and minima in experimental data curves. The first derivative of a function is zero at a maximum or minimum (see the sketch after this list).
  • Location of an end-point in a titration curve. An end-point is an inflection point, where the second derivative of the function is zero.
  • Resolution enhancement in spectroscopy. Bands in the second derivative of a spectroscopic curve are narrower than the bands in the spectrum: they have reduced half-width. This allows partially overlapping bands to be "resolved" into separate peaks.
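
As an illustration of the second of these applications, a peak position can be located from the downward zero-crossing of the smoothed first derivative. A minimal sketch, assuming NumPy; the Gaussian test band and all names are illustrative:

    import numpy as np

    def sg_rows(m, k):
        z = np.arange(-(m - 1) // 2, (m - 1) // 2 + 1)
        J = z[:, None] ** np.arange(k + 1)
        return np.linalg.solve(J.T @ J, J.T)

    # Synthetic data: a Gaussian band plus noise, true maximum at x = 3.
    rng = np.random.default_rng(1)
    x = np.linspace(0, 6, 121)
    h = x[1] - x[0]
    y = np.exp(-(x - 3)**2) + rng.normal(scale=0.02, size=x.size)

    C = sg_rows(9, 3)
    s  = np.convolve(y, C[0][::-1], mode="valid")        # smoothed curve
    d1 = np.convolve(y, (C[1] / h)[::-1], mode="valid")  # dY/dx at central points
    cross = np.where((d1[:-1] > 0) & (d1[1:] <= 0))[0]   # + to - zero-crossings
    best = cross[np.argmin(np.abs(cross - np.argmax(s)))]
    print(x[4:-4][best])                                 # close to 3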

See also

Savitzky-Golay filter (in Dutch, but with good illustrations)
