Integral transform

In mathematics, an integral transform is any transform T of the following form:

 (Tf)(u) = \int_{t_1}^{t_2} K(t, u)\, f(t)\, dt.

The input of this transform is a function f, and the output is another function Tf. An integral transform is a particular kind of mathematical operator.

There are numerous useful integral transforms. Each is specified by a choice of the function K of two variables, the kernel function or nucleus of the transform.
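
For example, the kernel K(t, u) = e^{-ut} with limits t_1 = 0 and t_2 = \infty defines the Laplace transform (see the table below). Applied to f(t) = e^{at}, the definition gives, for Re(u) > a,

 (Tf)(u) = \int_0^{\infty} e^{-ut}\, e^{at}\, dt = \int_0^{\infty} e^{-(u-a)t}\, dt = \frac{1}{u - a}.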

Some kernels have an associated inverse kernel K^{-1}(u, t) which (roughly speaking) yields an inverse transform:

 f(t) = \int_{u_1}^{u_2} K^{-1}(u,t)\, (Tf)(u)\, du.

A symmetric kernel is one that is unchanged when the two variables are permuted, i.e., K(t, u) = K(u, t); for example, the Fourier kernel e^{-iut}/\sqrt{2\pi} is symmetric, since it depends only on the product ut.

Motivation

Mathematical notation aside, the motivation behind integral transforms is easy to understand. There are many classes of problems that are difficult to solve—or at least quite unwieldy algebraically—in their original representations. An integral transform "maps" an equation from its original "domain" (e.g., functions where time is the independent variable are said to be in the time domain) into another domain. Manipulating and solving the equation in the target domain is, ideally, much easier than manipulation and solution in the original domain. The solution is then mapped back to the original domain with the inverse of the integral transform.

Integral transforms work because they are based upon the concept of spectral factorization over orthonormal bases: apart from a few quite artificial exceptions, arbitrarily complicated functions can be represented as sums of much simpler functions.

History

The precursor of the transforms was the Fourier series, used to express functions on finite intervals. The Fourier transform was later developed to remove the requirement of finite intervals.

Using the Fourier series, just about any practical function of time (the voltage across the terminals of an electronic device, perhaps) can be represented as a sum of sines and cosines, each suitably scaled (multiplied by a constant factor) and shifted (advanced or retarded in time). The sines and cosines in the Fourier series are an example of an orthonormal basis.
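
As an illustrative sketch (not part of the original article), the following Python snippet builds a square wave from its odd sine harmonics; the partial sums converge to the wave everywhere except at the jumps, where the familiar Gibbs overshoot remains:

    import numpy as np

    def square_wave_partial_sum(t, n_terms):
        # Truncated Fourier series of a unit square wave of period 2*pi:
        # (4/pi) * sum of sin(k*t)/k over odd k.
        result = np.zeros_like(t)
        for k in range(1, 2 * n_terms, 2):  # odd harmonics 1, 3, ..., 2*n_terms - 1
            result += (4 / np.pi) * np.sin(k * t) / k
        return result

    t = np.linspace(0.0, 2 * np.pi, 1001)
    exact = np.sign(np.sin(t))
    for n in (5, 50, 500):
        err = np.max(np.abs(exact - square_wave_partial_sum(t, n)))
        # The maximum error is pinned near the jumps (the Gibbs phenomenon),
        # but away from them the partial sums converge to the square wave.
        print(n, "terms, max pointwise error:", err)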

Importance of orthogonality

The individual basis functions must be orthogonal; that is, the product of two dissimilar basis functions, integrated over their domain, must be zero. An integral transform, in actuality, simply changes the representation of a function from one orthogonal basis to another: each point in the representation of the transformed function in the target domain corresponds to the contribution of a given orthogonal basis function to the expansion. The process of expanding a function from its "standard" representation into a sum of orthonormal basis functions, suitably scaled and shifted, is termed "spectral factorization." This is similar in concept to the description of a point in space in terms of three discrete components, namely, its x, y, and z coordinates: each axis is orthogonal to the others, so a point's coordinate along one axis is independent of its coordinates along the remaining axes. Note the terminological consistency: the determination of the amount by which an individual orthonormal basis function must be scaled in the spectral factorization of a function F is termed the "projection" of F onto that basis function.
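
A minimal numerical sketch of these two ideas, orthogonality and projection, using the sine functions on [-π, π] (the test function f and its coefficients below are arbitrary choices for illustration):

    import numpy as np
    from scipy.integrate import quad

    # Orthogonality: the integral of sin(m*t)*sin(n*t) over [-pi, pi]
    # vanishes for m != n and equals pi for m == n.
    for m, n in [(1, 2), (2, 3), (2, 2)]:
        val, _ = quad(lambda t: np.sin(m * t) * np.sin(n * t), -np.pi, np.pi)
        print(f"<sin({m}t), sin({n}t)> = {val:.6f}")

    # Projection: recover the coefficient of sin(2t) in an expansion by
    # integrating against sin(2t) and normalizing by <sin(2t), sin(2t)> = pi.
    f = lambda t: 3 * np.sin(2 * t) + np.sin(5 * t)
    coeff, _ = quad(lambda t: f(t) * np.sin(2 * t), -np.pi, np.pi)
    print("projection onto sin(2t):", coeff / np.pi)  # approximately 3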

The ordinary Cartesian graph of a function can itself be thought of as an orthonormal expansion: each point reflects the contribution of a given basis function to the sum. Intuitively, the point (3, 5) on the graph means that the basis function δ(x − 3), where δ is the Dirac delta function, is scaled by a factor of five to contribute to the sum in this form. In this way, the graph of a continuous real-valued function in the plane corresponds to an infinite set of basis functions; if the number of basis functions were finite, the curve would consist of a discrete set of points rather than a continuous contour.
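
Formally, this intuition is the sifting property of the Dirac delta, which expresses a function as a continuously indexed sum of shifted, scaled impulses:

 f(x) = \int_{-\infty}^{\infty} f(s)\, \delta(x - s)\, ds,

so the point (3, 5) records precisely the weight f(3) = 5 carried by δ(x − 3).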

Usage example

As an example of an application of integral transforms, consider the Laplace transform. This is a technique that maps differential or integro-differential equations in the "time" domain into polynomial equations in what is termed the "complex frequency" domain. (Complex frequency is similar to actual, physical frequency but rather more general. Specifically, the imaginary component ω of the complex frequency s = -σ + iω corresponds to the usual concept of frequency, viz., the rate at which a sinusoid cycles, whereas the real component σ corresponds to the degree of "damping".) The equation cast in terms of complex frequency is readily solved in the complex frequency domain (roots of the polynomial equations in the complex frequency domain correspond to eigenvalues in the time domain), leading to a "solution" formulated in the frequency domain. Employing the inverse transform, i.e., the inverse procedure of the original Laplace transform, one obtains a time-domain solution. In this example, polynomials in the complex frequency domain (typically occurring in the denominator) correspond to power series in the time domain, while axial shifts in the complex frequency domain correspond to damping by decaying exponentials in the time domain.
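
As a concrete sketch of this procedure, consider the initial-value problem y'(t) + a y(t) = 0 with y(0) = y_0. Writing Y(s) = (\mathcal{L}y)(s) and using the standard rule \mathcal{L}\{y'\} = sY(s) - y(0), the transform turns the differential equation into an algebraic one:

 sY(s) - y_0 + aY(s) = 0 \quad\Longrightarrow\quad Y(s) = \frac{y_0}{s + a},

and inverting (a standard table pair) gives the time-domain solution y(t) = y_0 e^{-at}; the pole at s = -a is exactly the axial shift corresponding to the damping rate a.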

The Laplace transform finds wide application in physics and particularly in electrical engineering, where the characteristic equations that describe the behavior of an electric circuit in the complex frequency domain correspond to linear combinations of exponentially damped, scaled, and time-shifted sinusoids in the time domain. Other integral transforms find special applicability within other scientific and mathematical disciplines.

Table of transforms

Table of integral transforms

| Transform | Symbol | K(t, u) | t_1 | t_2 | K^{-1}(u, t) | u_1 | u_2 |
|---|---|---|---|---|---|---|---|
| Fourier transform | \mathcal{F} | \frac{e^{-iut}}{\sqrt{2\pi}} | -\infty | \infty | \frac{e^{+iut}}{\sqrt{2\pi}} | -\infty | \infty |
| Hartley transform | \mathcal{H} | \frac{\cos(ut)+\sin(ut)}{\sqrt{2\pi}} | -\infty | \infty | \frac{\cos(ut)+\sin(ut)}{\sqrt{2\pi}} | -\infty | \infty |
| Mellin transform | \mathcal{M} | t^{u-1} | 0 | \infty | \frac{t^{-u}}{2\pi i} | c-i\infty | c+i\infty |
| Two-sided Laplace transform | \mathcal{B} | e^{-ut} | -\infty | \infty | \frac{e^{+ut}}{2\pi i} | c-i\infty | c+i\infty |
| Laplace transform | \mathcal{L} | e^{-ut} | 0 | \infty | \frac{e^{+ut}}{2\pi i} | c-i\infty | c+i\infty |
| Weierstrass transform | \mathcal{W} | \frac{e^{-(u-t)^2/4}}{\sqrt{4\pi}} | -\infty | \infty | \frac{e^{+(u-t)^2/4}}{i\sqrt{4\pi}} | c-i\infty | c+i\infty |
| Hankel transform | | t\,J_\nu(ut) | 0 | \infty | u\,J_\nu(ut) | 0 | \infty |
| Abel transform | | \frac{2t}{\sqrt{t^2-u^2}} | u | \infty | \frac{-1}{\pi\sqrt{u^2-t^2}}\frac{d}{du} | t | \infty |
| Hilbert transform | \mathcal{H}il | \frac{1}{\pi}\frac{1}{u-t} | -\infty | \infty | \frac{1}{\pi}\frac{1}{u-t} | -\infty | \infty |
| Identity transform | | \delta(u-t) | t_1 < u | t_2 > u | \delta(t-u) | u_1 < t | u_2 > t |

In the limits of integration for the inverse transform, c is a constant which depends on the nature of the transform function. For example, for the one- and two-sided Laplace transforms, c must be greater than the largest real part of the singularities of the transform function.
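
As a sketch of how the table is used in practice (assuming NumPy and SciPy are available), the following snippet applies the Fourier kernel from the table by direct numerical integration and checks it against the known fact that the Gaussian e^{-t^2/2} is its own Fourier transform under this convention:

    import numpy as np
    from scipy.integrate import quad

    def fourier_transform(f, u):
        # Apply the forward kernel from the table, K(t, u) = e^{-iut}/sqrt(2*pi),
        # integrating the real and imaginary parts separately.
        re, _ = quad(lambda t: np.cos(u * t) * f(t), -np.inf, np.inf)
        im, _ = quad(lambda t: -np.sin(u * t) * f(t), -np.inf, np.inf)
        return (re + 1j * im) / np.sqrt(2 * np.pi)

    gaussian = lambda t: np.exp(-t ** 2 / 2)
    for u in (0.0, 1.0, 2.0):
        # Numerical result should match exp(-u**2 / 2) to quadrature accuracy.
        print(u, fourier_transform(gaussian, u), np.exp(-u ** 2 / 2))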

General theory

Although the properties of integral transforms vary widely, they share some features. For example, every integral transform is a linear operator, since the integral is a linear operator; in fact, if the kernel is allowed to be a generalized function, then all linear operators are integral transforms (a properly formulated version of this statement is the Schwartz kernel theorem).
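
Linearity follows directly from the linearity of the integral: for constants α and β,

 T(\alpha f + \beta g)(u) = \int_{t_1}^{t_2} K(t, u)\,[\alpha f(t) + \beta g(t)]\, dt = \alpha\,(Tf)(u) + \beta\,(Tg)(u).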

The general theory of such integral equations is known as Fredholm theory. In this theory, the kernel is understood to define a compact operator acting on a Banach space of functions. Depending on the situation, the kernel is then variously referred to as the Fredholm operator, the nuclear operator or the Fredholm kernel.
