Exponential integrator

Not to be confused with the exponential integral or the integration of exponential functions.

Exponential integrators are a class of numerical methods for the solution of partial and ordinary differential equations. This large class of methods from numerical analysis is based on the exact integration of the linear part of the initial value problem described later in this article. Because the linear part is integrated exactly, this can help to mitigate the stiffness of a differential equation. Exponential integrators can be constructed to be explicit or implicit for numerical ordinary differential equations or serve as the time integrator for numerical partial differential equations.

Background

These methods date back to at least the 1960s, when they were recognized by Certain[1] and Pope.[2] More recently, exponential integrators have become an active area of research. Originally developed for solving stiff differential equations, the methods have also been used to solve partial differential equations, including hyperbolic as well as parabolic problems[3] such as the heat equation.

Introduction

We consider initial value problems of the form,

u'(t) = L u(t) + N(u(t) ), \qquad u(t_0)=u_0, \qquad \qquad (1)

where L contains the linear terms and N contains the non-linear terms. Such problems can arise from a more typical initial value problem

u'(t) = f(u(t)), \qquad u(t_0)=u_0,

after linearizing locally about a fixed or local state u^*:

 L = \frac{\partial f}{\partial u}(u^*); \qquad N(u) = f(u) - L u.

Here, \frac{\partial f}{\partial u} refers to the partial derivative (Jacobian) of f with respect to u.

Exact integration of this problem from the initial time t_0 = 0 to a later time t can be performed using matrix exponentials to define an integral equation for the exact solution:[4]

 u(t) = e^{L t } u_0 + \int_{0}^{t} e^{ L (t-\tau) } N\left( u\left( \tau \right) \right)\, d\tau. \qquad (2)

This is similar to the exact integral used in the Picard–Lindelöf theorem. In the case of N\equiv 0, this formulation is the exact solution to the linear differential equation.

Numerical methods require a discretization of equation (2). They can be based on Runge–Kutta discretizations,[5][6] linear multistep methods, or a variety of other options.
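As a concrete illustration of equation (2) (not part of the original article), the following Python sketch verifies the variation-of-constants formula for the logistic equation u' = u - u^2, split as L = 1 and N(u) = -u^2; since the exact solution is known in closed form, the right-hand side of (2) can be evaluated by quadrature and compared against it.

import numpy as np
from scipy.integrate import quad

# Logistic equation u' = u - u^2, split as L*u + N(u) with L = 1, N(u) = -u^2.
L = 1.0
N = lambda u: -u**2

u0 = 0.1
u_exact = lambda t: u0 * np.exp(t) / (1.0 - u0 + u0 * np.exp(t))

t = 2.0
# Right-hand side of the variation-of-constants formula (2):
#   u(t) = e^{L t} u0 + int_0^t e^{L (t - tau)} N(u(tau)) dtau
integral, _ = quad(lambda tau: np.exp(L * (t - tau)) * N(u_exact(tau)), 0.0, t)
rhs = np.exp(L * t) * u0 + integral

print(rhs, u_exact(t))   # the two values agree to quadrature accuracy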


Exponential Rosenbrock methods

In order to improve the stability when integrating the nonlinearity N(u(t)), one can use a continuous linearization of (1) along the numerical solution u_n to get


u'(t)=L_{n} u(t)+ N_n(u(t)), \qquad u(t_0)=u_0, \qquad (3)

where
L_{n}=\frac{\partial f}{\partial u}(u_n), \qquad N_n (u)=f(u)-L_{n} u.
This choice has the great advantage that
\frac{\partial N_n}{\partial u}(u_n)= 0
holds in each step.
Again, applying the variation-of-constants formula (2) gives the exact solution at time t_{n+1} as


u(t_{n+1})= e^{h_n L_n}u(t_n) + \int_{0}^{h_n} e^{(h_n-\tau) L_n}  N_n ( u(t_n+\tau ))  d\tau.  \qquad (4)

The idea is to approximate the integral in (4) by some quadrature rule with nodes c_i and weights b_i(h_n L_n) (1\leq i\leq s). This yields the class of exponential Rosenbrock methods:[4]


U_{ni}= e^{c_i h_n L_n}u_n +h_n \sum_{j=1}^{s}a_{ij}(h_n L_n) N_n( U_{nj}),

u_{n+1} = e^{h_n L_n}u_n +  h_n \sum_{i=1}^{s}b_{i}(h_n L_n)N_n(U_{ni}).

By introducing the difference D_{ni}=N_n (U_{ni})-N_n (u_n), they can be reformulated in a more efficient way (for implementation) as


U_{ni}= u_n + c_i h_n \varphi _{1} ( c_i h_n L_n)f(u_n) +h_n \sum_{j=2}^{i-1}a_{ij}(h_n L_n) D_{nj},

u_{n+1}= u_n + h_n \varphi _{1} ( h_n L_n)f(u_n) + h_n \sum_{i=2}^{s}b_{i}(h_n L_n) D_{ni}.
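The simplest member of this family, obtained for s = 1, is the exponential Rosenbrock–Euler method u_{n+1}= u_n + h_n \varphi_1( h_n L_n)f(u_n). The Python sketch below (not from the references; the stiff test system is hypothetical and chosen only for illustration) implements it; \varphi_1(A) is read off from the upper-right block of the matrix exponential of the augmented matrix [[A, I], [0, 0]], which avoids inverting A.

import numpy as np
from scipy.linalg import expm

def phi1(A):
    # phi_1(A) = A^{-1} (e^A - I), obtained from the identity
    # expm([[A, I], [0, 0]]) = [[e^A, phi_1(A)], [0, I]] (valid even for singular A).
    n = A.shape[0]
    M = np.zeros((2 * n, 2 * n))
    M[:n, :n] = A
    M[:n, n:] = np.eye(n)
    return expm(M)[:n, n:]

def exp_rosenbrock_euler(f, jac, u0, t0, t_end, h):
    # One-stage exponential Rosenbrock (Rosenbrock-Euler) method:
    # u_{n+1} = u_n + h * phi_1(h * L_n) f(u_n), with L_n = jac(u_n).
    u, t = np.array(u0, dtype=float), t0
    while t < t_end - 1e-12:
        h_n = min(h, t_end - t)
        L_n = jac(u)
        u = u + h_n * phi1(h_n * L_n) @ f(u)
        t += h_n
    return u

# Hypothetical stiff two-variable test problem u' = f(u).
f = lambda u: np.array([-1000.0 * u[0] + u[1], u[0] - u[1] ** 2])
jac = lambda u: np.array([[-1000.0, 1.0], [1.0, -2.0 * u[1]]])
print(exp_rosenbrock_euler(f, jac, [1.0, 1.0], 0.0, 1.0, 0.05))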

Examples

See also the article on the first-order exponential integrator for more details.

First-order forward Euler exponential integrator

The simplest method is based on a forward Euler time discretization. It can be realized by holding the term \mathcal{N}( y(\tau) ) \approx \mathcal{N}( y(t_0) ) constant over the whole interval. Exact integration of the remaining factor e^{ L (t-\tau) } then results in

y(t) = e^{L t}y_0 + L^{-1} (e^{L t} - 1) \mathcal{N}( y( t_0 ) ).

Of course, this process can be repeated over small intervals to serve as the basis of a single-step numerical method.
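A minimal Python sketch of this single-step scheme follows (the semilinear test problem is hypothetical and chosen only for illustration); L is assumed to be nonsingular so that L^{-1}(e^{h L} - I) can be applied with a linear solve, exactly as in the formula above.

import numpy as np
from scipy.linalg import expm

def exponential_euler_step(u_n, h, L, N):
    # One step of the first-order exponential Euler method:
    # u_{n+1} = e^{h L} u_n + L^{-1} (e^{h L} - I) N(u_n).
    E = expm(h * L)
    I = np.eye(L.shape[0])
    return E @ u_n + np.linalg.solve(L, (E - I) @ N(u_n))

# Hypothetical stiff semilinear test problem u' = L u + N(u).
L = np.array([[-100.0, 1.0], [0.0, -0.5]])
N = lambda u: np.array([np.sin(u[1]), -u[0] ** 2])
u = np.array([1.0, 1.0])
for _ in range(20):          # 20 steps of size h = 0.1
    u = exponential_euler_step(u, 0.1, L, N)
print(u)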

In general, one defines a sequence of functions

 \varphi_0( z ) = e^z, \qquad \varphi_1( z ) = \frac{e^z - 1}{z}, \qquad \varphi_2(z) = \frac{e^z - 1-z}{z^2}, \qquad \ldots

that show up in these methods. Usually, these linear operators are not computed exactly; instead, a Krylov subspace iterative method can be used to efficiently compute the action of these operators on vectors.[7] See the references for further details of where these functions come from.[5][8]
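As a generic illustration of the Krylov idea (an Arnoldi-based sketch, not a particular method from the references), the product \varphi_1(A)v can be approximated from a small Krylov subspace: an orthonormal basis V_m of \operatorname{span}\{v, Av, \dots, A^{m-1}v\} is built by the Arnoldi process, and \varphi_1(A)v \approx \|v\| V_m \varphi_1(H_m)e_1 with the small matrix H_m = V_m^T A V_m.

import numpy as np
from scipy.linalg import expm

def phi1_krylov(A, v, m=30):
    # Approximate phi_1(A) v, with phi_1(z) = (e^z - 1)/z, from an m-dimensional
    # Krylov subspace: phi_1(A) v ~ ||v|| V_m phi_1(H_m) e_1.
    n = len(v)
    m = min(m, n)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    beta = np.linalg.norm(v)
    V[:, 0] = v / beta
    for j in range(m):                      # Arnoldi iteration
        w = A @ V[:, j]
        for i in range(j + 1):              # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:             # happy breakdown
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # phi_1(Hm) e_1 from expm([[Hm, e_1], [0, 0]]) = [[e^{Hm}, phi_1(Hm) e_1], [0, 1]].
    M = np.zeros((m + 1, m + 1))
    M[:m, :m] = Hm
    M[0, m] = 1.0
    return beta * V[:, :m] @ expm(M)[:m, m]

# Quick check against a dense evaluation (small, hypothetical example).
rng = np.random.default_rng(0)
A = -np.diag(np.linspace(1.0, 50.0, 40)) + 0.01 * rng.standard_normal((40, 40))
v = rng.standard_normal(40)
dense = np.linalg.solve(A, expm(A) @ v - v)   # phi_1(A) v = A^{-1} (e^A - I) v
print(np.linalg.norm(phi1_krylov(A, v) - dense))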

Fourth-order ETDRK4 method of Cox and Mathews

Cox and Mathews[9] describe a fourth-order exponential time differencing (ETD) method, which they derived using Maple.

We use their notation and assume that the unknown function is u and that a numerical approximation u_n at time t_n is known. Furthermore, we make explicit use of a possibly time-dependent right-hand side: \mathcal{N} = \mathcal{N}( u, t ).

Three stage values are first constructed:


a_n = e^{ L h / 2 } u_n + L^{-1} \left( e^{Lh/2} - I \right) \mathcal{N}( u_n, t_n )

b_n = e^{ L h / 2 } u_n + L^{-1} \left( e^{Lh/2} - I \right) \mathcal{N}( a_n, t_n + h/2 )

c_n = e^{ L h / 2 } a_n + L^{-1} \left( e^{Lh/2} - I \right) \left( 2 \mathcal{N}( b_n, t_n + h/2 ) - \mathcal{N}(u_n,t_n) \right)

The final update is given by,


u_{n+1} = e^{L h} u_n + h^{-2} L^{-3} \left\{ 
\left[ -4 - Lh + e^{Lh} \left( 4 - 3 L h + (L h)^2 \right)  \right] \mathcal{N}( u_n, t_n ) +
2 \left[ 2 + L h + e^{Lh} \left( -2 + L h \right) \right] \left( \mathcal{N}( a_n, t_n+h/2 ) + \mathcal{N}( b_n, t_n + h / 2 ) \right) +
\left[ -4 - 3L h - (Lh)^2 + e^{Lh} \left(4 - Lh \right) \right] \mathcal{N}( c_n, t_n + h )
\right\}.
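The following Python sketch (not from Cox and Mathews) transcribes these formulas for the common special case of a diagonal L, for example a PDE that has been diagonalized by a Fourier transform; lam holds the eigenvalues of L as a 1-D array, and all coefficients are evaluated naively exactly as written above, which requires every \lambda h to be nonzero and not too small.

import numpy as np

def etdrk4_step(u, t, h, lam, N):
    # One ETDRK4 step for u' = lam * u + N(u, t) with diagonal L
    # (lam: 1-D array of eigenvalues), following the formulas above.
    E  = np.exp(lam * h)          # e^{L h}
    E2 = np.exp(lam * h / 2)      # e^{L h / 2}
    Q  = (E2 - 1.0) / lam         # L^{-1} (e^{L h / 2} - I)
    a = E2 * u + Q * N(u, t)
    b = E2 * u + Q * N(a, t + h / 2)
    c = E2 * a + Q * (2.0 * N(b, t + h / 2) - N(u, t))
    z = lam * h
    f1 = (-4.0 - z + E * (4.0 - 3.0 * z + z**2)) / (h**2 * lam**3)
    f2 = 2.0 * (2.0 + z + E * (-2.0 + z)) / (h**2 * lam**3)
    f3 = (-4.0 - 3.0 * z - z**2 + E * (4.0 - z)) / (h**2 * lam**3)
    return (E * u + f1 * N(u, t)
            + f2 * (N(a, t + h / 2) + N(b, t + h / 2))
            + f3 * N(c, t + h))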

If implemented naively, the above algorithm suffers from numerical instabilities due to floating-point round-off errors.[10] To see why, consider the first function,

 \varphi_1( z ) = \frac{ e^z - 1 }{ z },

which is present in the first-order Euler method as well as in all three stages of ETDRK4. For small values of z, this function suffers from numerical cancellation errors. However, these numerical issues can be avoided by evaluating the \varphi_1 function via a contour integral approach[10] or by a Padé approximant.[11]
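To make the contour-integral idea concrete (a generic sketch in the spirit of Kassam and Trefethen, not their exact code): since \varphi_1 is analytic, its value at a real argument z equals its mean over a circle centred at z, and that mean can be computed without any cancellation.

import numpy as np

def phi1_contour(z, M=32):
    # Evaluate phi_1(z) = (e^z - 1)/z for real scalar or array z by averaging
    # over M points on the upper half of a unit circle centred at z and taking
    # the real part; the shifted arguments stay away from 0, so no cancellation.
    z = np.asarray(z, dtype=float)
    r = np.exp(1j * np.pi * (np.arange(1, M + 1) - 0.5) / M)
    zr = z[..., None] + r
    return np.real(np.mean((np.exp(zr) - 1.0) / zr, axis=-1))

z = 1e-9
print((np.exp(z) - 1.0) / z)   # naive formula: cancellation costs about half the digits
print(phi1_contour(z))         # contour evaluation stays accurate near z = 0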

Applications

Exponential integrators are used for the simulation of stiff scenarios in scientific and visual computing, for example in molecular dynamics,[12] for VLSI circuit simulation,[13] and in computer graphics.[14] They are also applied in the context of hybrid Monte Carlo methods.[15] In these applications, exponential integrators offer the advantage of large time steps combined with high accuracy. To accelerate the evaluation of matrix functions in such complex scenarios, exponential integrators are often combined with Krylov subspace projection methods.


Notes

  1. Certain (1960)
  2. Pope (1963)
  3. Hochbruck and Ostermann (2006)
  4. Hochbruck and Ostermann (2010)
  5. Cox and Mathews (2002)
  6. Tokman (2006, 2011)
  7. Tokman (2006, 2010)
  8. Hochbruck and Ostermann (2010)
  9. Cox and Mathews (2002)
  10. Kassam and Trefethen (2005)
  11. Berland et al. (2007)
  12. Michels and Desbrun (2015)
  13. Zhuang et al. (2014)
  14. Michels et al. (2014)
  15. Chao et al. (2015)

