Fourier optics

From Wikipedia, the free encyclopedia

Fourier optics is one of the three major viewpoints for understanding classical optics, the other two being the diffraction integral viewpoint and geometrical optics. Fourier optics has its origins in the plane wave spectrum or spectral domain technique borrowed from the broader context of general electromagnetic theory (Scott [1989]). The plane wave spectrum stems from the fact that in source-free regions (and virtually all of classical optics pertains to source-free regions), electromagnetic fields may be expressed in terms of a spectrum of propagating and evanescent plane waves. More specifically, Fourier optics refers to optical technologies which arise when the plane wave spectrum viewpoint (section 2) is combined with the Fourier transforming property of quadratic lenses (section 3), to yield 2D image processing devices (section 4) analogous to the 1D signal processing devices common in electronic signal processing. The hallmark of Fourier optics is the use of the spatial frequency domain (kx, ky) as the conjugate of the spatial (x,y) domain, and the use of terms and concepts from 1D signal processing, such as: transform theory, spectrum, bandwidth, window functions, sampling, etc.


Electromagnetic Wave Propagation

The Wave Equation

Fourier optics begins with the homogeneous, scalar wave equation:


\left(\nabla^2-\frac{1}{c^2}\frac{\partial^2}{\partial{t}^2}\right)u(\mathbf{r},t)=0.

where u(r,t) is a real-valued, scalar representation of an electromagnetic wave propagating through free space.

The Helmholtz Equation

If we next assume that the solution of this equation takes a time-harmonic form, or in other words,

u(\mathbf{r},t) = \mathrm{Re} \left\{  \psi(\mathbf{r}) e^{j\omega t} \right\}

and substitute this expression into the wave equation, we derive the time-independent form of the wave equation, also known as the Helmholtz equation:


\left(\nabla^2+ k^2 \right) \psi (\mathbf{r})=0.

where

k = { \omega \over c} = { 2 \pi \over \lambda }

is the wave number, j is the imaginary unit, and ψ(r) is the time-independent, complex-valued amplitude of the propagating wave.

The paraxial approximation

We can simplify the complex wave amplitude further by a simple change of variable:

\psi(\mathbf{r}) = A(\mathbf{r}) e^{-j \mathbf{k} \cdot \mathbf{r}}

where

\mathbf{k} = k_x \mathbf{x} + k_y \mathbf{y} + k_z\mathbf{z}

is the wave vector, and

 k = \|\mathbf{k}\| = \sqrt{k_x^2 + k_y^2  + k_z^2} = {\omega \over c}

is the wave number. Next, using the paraxial approximation, we assume that

k_x^2 + k_y^2 \ll k_z^2

or equivalently,

\sin \theta \approx \theta

where θ is the angle between the wave vector k and the z-axis.

As a result,

k \approx k_z

and

\psi(\mathbf{r}) \approx A(\mathbf{r}) e^{-jkz}
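The paraxial regime can be illustrated numerically. The following sketch (in Python, with an illustrative 2° tilt angle and k normalised to 1) checks that for a nearly-axial plane wave the longitudinal component k_z differs from the full wavenumber k only at second order in θ:

```python
import math

# For a paraxial plane wave the transverse wavenumber is small, so the
# longitudinal component k_z is nearly the full wavenumber k.
# Illustrative example: a wave tilted 2 degrees off the z-axis.
k = 1.0
theta = math.radians(2.0)
k_x = k * math.sin(theta)            # transverse component
k_z = math.sqrt(k**2 - k_x**2)       # longitudinal component

rel_error = (k - k_z) / k            # ~theta^2 / 2, about 6e-4 here
```

The relative error of the approximation k ≈ k_z is about θ²/2, well under 0.1% for this tilt, which is why dropping the transverse terms is harmless for paraxial waves.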

The paraxial wave equation

Substituting this expression into the Helmholtz equation, we derive the paraxial wave equation:

\nabla_T^2 A - 2jk { \partial A \over \partial z} = 0

where

\nabla_T^2 = {\partial^2 \over \partial x^2} + {\partial^2 \over \partial y^2}

is the transverse Laplacian operator.

The Plane Wave Spectrum: The Basic Foundation of Fourier Optics

The plane wave spectrum concept is the basic foundation of Fourier Optics. For readers already familiar with ray optics, the plane wave spectrum concept might seem unsettling at first. This is because a plane wave spectrum is not as easily visualized in the mind's eye as a light ray is. Almost anyone can mentally picture a curved optical phasefront, with the ray propagation direction being normal to the constant-phase surface. Some might try to imagine the plane wave spectrum as a locally-plane approximation to the curved optical phasefront and, in the far field, this is an appropriate way to visualize what the plane wave spectrum is. The plane wave spectrum is a continuous spectrum of uniform plane waves, and there is one plane wave component in the spectrum for every tangent point on the far-field phase front. The amplitude of that plane wave component would be the amplitude of the optical field at that tangent point. Again, this is true only in the far field, defined as: Range = 2D²/λ where D is the maximum linear extent of the optical sources and λ is the wavelength (Scott [1998]). The plane wave spectrum is often regarded as being discrete for certain types of periodic gratings, though in reality, the spectra from gratings are continuous as well.

Readers already familiar with Fourier analysis of electrical signals will find a direct analogy here with the plane wave spectrum representation of optical fields. For example, the plane wave component propagating parallel to the optic axis is analogous to the DC component of an electrical signal. Bandwidth in electrical signals relates to the difference between the highest and lowest frequencies present in the spectrum of the signal. For optical systems, bandwidth is a measure of how far a plane wave is tilted away from the optic axis, so for this reason, this type of bandwidth is often referred to as angular bandwidth or spatial bandwidth. It takes more frequency bandwidth to produce a short pulse in an electrical circuit, and more angular (or, spatial frequency) bandwidth to produce a sharp spot in an optical system (see discussion related to Point spread function).

The plane wave spectrum arises naturally as the eigenfunction solution to the homogeneous electromagnetic wave equation in rectangular coordinates (see also Electromagnetic radiation, which derives the wave equation from Maxwell's equations in source-free media, or Scott [1998]). In the frequency domain, the homogeneous electromagnetic wave equation (some people call this the Helmholtz equation, but it's nothing more than the wave equation in the frequency domain) assumes the form:

  \nabla^2 E_u + k^2E_u = 0

where u = x, y, z and k = 2π/λ, the wavenumber of the medium. We may readily find solutions to this equation in rectangular coordinates by using the principle of separation of variables for partial differential equations. This principle says that in separable orthogonal coordinates, we may construct a so-called elementary product solution to this wave equation of the following form:

  E_u(x, y, z) = f_x(x) \times f_y(y) \times f_z(z)

i.e., a solution which is expressed as the product of a function of x, times a function of y, times a function of z. If we now plug this elementary product solution into the wave equation, using the scalar Laplacian (aka, Laplace operator) in rectangular coordinates

 \nabla^2 E_u = \frac{\partial^2 E_u}{\partial x^2} + \frac{\partial^2 E_u}{\partial y^2} + \frac{\partial^2 E_u}{\partial z^2}

we obtain

f''_x(x) f_y(y) f_z(z) + f_x(x) f''_y(y) f_z(z) + f_x(x) f_y(y) f''_z(z) + k^2 f_x(x) f_y(y) f_z(z) = 0

which may be rearranged into the form:

   \frac{f''_x(x)}{f_x(x)}+ \frac{f''_y(y)}{f_y(y)} + \frac{f''_z(z)}{f_z(z)} + k^2=0

We may now argue that each of the quotients in the equation above must, of necessity, be constant. For suppose, say, that the first quotient were not constant, but were instead a function of x. Since none of the other terms in the equation has any dependence on x, the equation could not then hold for all x; the first quotient must therefore be constant. Let's call that constant -kx². Reasoning in a similar way for the y and z quotients, we obtain three ordinary differential equations for fx, fy and fz, along with one separation condition:

\frac{d^2}{dx^2}f_x(x) + k_x^2 f_x(x)=0
\frac{d^2}{dy^2}f_y(y) + k_y^2 f_y(y)=0
\frac{d^2}{dz^2}f_z(z) + k_z^2 f_z(z)=0
k_x^2+k_y^2+k_z^2= k^2

Each of these three differential equations has the same type of solution, a complex exponential, so that the elementary product solution for Eu is:

E_u(x,y,z)=e^{j(k_x x + k_y y)} e^{\pm j \sqrt{k^2-k_x^2-k_y^2}z}

which represents a propagating or exponentially decaying uniform plane wave solution to the homogeneous wave equation. The - sign is used for a wave propagating/decaying in the +z direction and the + sign is used for a wave propagating/decaying in the -z direction (this follows the engineering time convention, which assumes an e^{jωt} time dependence). This field represents a propagating plane wave when the quantity under the radical is positive, and an exponentially decaying wave when it is negative (in passive media, we always choose the root with a negative imaginary part, to represent decay, not amplification).
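This elementary solution can be verified directly. The following sketch (in Python, with illustrative wavenumbers satisfying the separation condition, and the engineering e^{jωt} convention) checks by central finite differences that the product solution satisfies the Helmholtz equation:

```python
import cmath

# Finite-difference check that the elementary product solution satisfies
# the Helmholtz equation, Laplacian(E) + k^2 E = 0. Wavenumbers are
# chosen to satisfy the separation condition k_x^2 + k_y^2 + k_z^2 = k^2.
k, k_x, k_y = 1.0, 0.3, 0.4
k_z = (k**2 - k_x**2 - k_y**2) ** 0.5     # real k_z: a propagating wave

def E(x, y, z):
    # uniform plane wave propagating in the +z direction
    return cmath.exp(1j * (k_x * x + k_y * y)) * cmath.exp(-1j * k_z * z)

def laplacian(x, y, z, h=1e-3):
    # second-order central differences along each axis
    total = 0.0
    for dx, dy, dz in ((h, 0, 0), (0, h, 0), (0, 0, h)):
        total += (E(x + dx, y + dy, z + dz) - 2 * E(x, y, z)
                  + E(x - dx, y - dy, z - dz)) / h**2
    return total

x0, y0, z0 = 0.7, -0.2, 1.5               # arbitrary test point
residual = abs(laplacian(x0, y0, z0) + k**2 * E(x0, y0, z0))
```

The residual is zero up to finite-difference truncation error, confirming that any such product of complex exponentials solves the homogeneous wave equation.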

A general solution to the homogeneous electromagnetic wave equation in rectangular coordinates is formed as a weighted superposition of elementary plane wave solutions as:

E_u(x,y,z)=\int\!\!\!\int E_u(k_x,k_y) ~ e^{j(k_x x + k_y y)} ~ e^{\pm j \sqrt{k^2-k_x^2-k_y^2}z} ~ dk_x dk_y ~~~~~~~~~~~~~~~~~~(2.1)

where the integrals extend from minus infinity to infinity.

This plane wave spectrum representation of the electromagnetic field is the basic foundation of Fourier Optics (this point cannot be emphasized strongly enough), because we see that when z=0, the equation above simply becomes a Fourier transform (FT) relationship between the field and its plane wave content (hence the name, "Fourier optics").
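Eqn. (2.1) translates directly into a numerical propagation method, often called the angular spectrum method: Fourier transform the field at z=0 to obtain the plane wave amplitudes, multiply each component by its z phase factor, and inverse transform. The sketch below (in Python with numpy; the 500 nm wavelength, grid size and Gaussian source are illustrative choices, not from the text) implements this:

```python
import numpy as np

# Angular spectrum propagation, a direct numerical reading of eqn. (2.1):
# FFT the z = 0 field into plane wave amplitudes, advance each component
# by e^{-j kz z}, then inverse FFT to reassemble the field at distance z.
wavelength = 0.5e-6                    # 500 nm light (illustrative)
k = 2 * np.pi / wavelength
N, L = 256, 2e-3                       # 256-point grid over a 2 mm window
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
X, Y = np.meshgrid(x, x)

field0 = np.exp(-(X**2 + Y**2) / (0.2e-3) ** 2)   # Gaussian spot at z = 0

kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
KX, KY = np.meshgrid(kx, kx)
# all grid components are propagating here; evanescent ones are clamped
kz = np.sqrt(np.maximum(k**2 - KX**2 - KY**2, 0.0))

def propagate(field, z):
    """Advance the field a distance z via its plane wave spectrum."""
    spectrum = np.fft.fft2(field)                         # amplitudes at z = 0
    return np.fft.ifft2(spectrum * np.exp(-1j * kz * z))  # phase per component

field_z = propagate(field0, 0.1)       # the field 10 cm downstream
```

Because each propagating component only acquires a unit-magnitude phase factor, total power is conserved, and the Gaussian spot spreads (its peak amplitude drops) exactly as diffraction requires.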

Fourier optics cannot be understood from the viewpoint of ray optics, because ray optics is the asymptotic, zero-wavelength limit of wave optics (it is like trying to understand relativity through Newtonian mechanics). The concept of a ray exists only as the far-field, zero-wavelength limit of the plane wave spectrum. Ray optics is a simplification of wave optics and, as such, a small subset of it, so it cannot be used to understand the whole. That is why it can be difficult to understand Fourier optics without bringing to bear the broader viewpoint of general electromagnetic theory (i.e., Maxwell's equations). Historically, optical phenomena were first understood through ray-based models, and that approach is still perpetuated in optics instruction today, even though it has long been known (Sommerfeld, Stratton, Born, Mie, Zernike, Airy) that optical phenomena are more accurately described within the framework of Maxwell's equations, with ray optics as a limiting case (Eikonal equation, Asymptotic expansion).

All spatial dependence of the individual plane wave components is described explicitly via the exponential functions. The coefficients of the exponentials are only functions of spatial wavenumber kx, ky, just as in ordinary Fourier analysis and Fourier transforms.

The equation above may be evaluated asymptotically in the far field (using the principle of stationary phase) to show that the field at the point (x,y,z) is indeed solely due to the plane wave component (kx, ky, kz) which propagates parallel to the vector (x,y,z), and whose plane is tangent to the phasefront at (x,y,z). The mathematical details of this process may be found in Scott [1998] or Scott [1990]. The result of performing a stationary phase integration on the expression above is the following expression,

E_u(r,\theta,\phi)~=~2 \pi j~ (k~\cos\theta)~ \frac{e^{-jkr}}{r}~ E_u(k~\sin\theta~\cos\phi,k~\sin\theta~\sin\phi) ~~~~~~~~~~~~(2.2)

which clearly indicates that the field at (x,y,z) is directly proportional to the spectral component in the direction of (x,y,z), where,

 x = r ~ \sin \theta ~ \cos \phi
 y = r ~ \sin \theta ~ \sin \phi
 z = r ~ \cos \theta ~

Stated another way, the radiation pattern of any planar field distribution is the FT of that source distribution (see Huygens-Fresnel principle, wherein the same equation is developed using a Green's function approach). Note that this is not a plane wave, as many might think. The  \frac{e^{-jkr}}{r} radial dependence is that of a spherical wave - in both magnitude and phase - whose local amplitude is the FT of the source plane distribution at that far-field angle. The plane wave spectrum concept does not imply that the field behaves like a plane wave at large distances.
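This FT relationship between a planar source and its radiation pattern can be checked numerically. The sketch below (in Python with numpy; the slit width, window size and sample count are illustrative) compares a discrete FT of a uniformly illuminated 1-D slit against the analytic far-field pattern, a sinc function:

```python
import numpy as np

# Far-field pattern of a 1-D slit of width a: the FT of the aperture field.
N, L = 4096, 10.0          # samples and window size (arbitrary units)
a = 1.0                    # slit width
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
aperture = (np.abs(x) <= a / 2).astype(float)     # uniformly lit slit

# ifftshift puts x = 0 at index 0 so the numerical FT comes out real.
spectrum = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(aperture))).real * (L / N)
kx = np.fft.fftshift(2 * np.pi * np.fft.fftfreq(N, d=L / N))

# Analytic FT of the slit (note np.sinc(u) = sin(pi u) / (pi u)):
analytic = a * np.sinc(kx * a / (2 * np.pi))
```

The numerical spectrum tracks the analytic sinc to within the discretization error of the grid, illustrating that the far-field angular pattern is indeed the FT of the aperture distribution.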

In addition, we may determine the image plane distribution of an object plane distribution by tracing the progress of the individual plane wave components through the imaging system, and then re-assembling them in the image plane, each with its own particular magnitude and phase. If we were to consider the action of an optical system on each plane wave component in that fashion, then we'd be interested in such "angular frequency domain" figures-of-merit as the optical transfer function of the system.

The separation condition,

k_x^2+k_y^2+k_z^2=k^2

which so closely resembles the equation for the length of a vector in terms of its rectangular components, suggests the notion of k-vector, or wave vector, defined (for propagating plane waves) in rectangular coordinates as

\bold k = k_x \hat \bold x + k_y \hat \bold y + k_z \hat \bold z

and in the spherical coordinate system as

 k_x = k ~ \sin \theta ~ \cos \phi
 k_y = k ~ \sin \theta ~ \sin \phi
 k_z = k ~ \cos \theta ~

We'll make use of these spherical coordinate system relations in the next section.

Fourier Transforming Property of Lenses

If a transmissive object is placed one focal length in front of a lens, then its Fourier transform will be formed one focal length behind the lens. We may show this using what we now know about the plane wave spectrum representation of the transmittance function in the front focal plane. Consider the figure to the right.

On the Fourier transforming property of lenses

In this figure, we assume a plane wave incident from the left. The transmittance function in the front focal plane (i.e., Plane 1) spatially modulates the incident plane wave in magnitude and phase, like on the left-hand side of eqn. (2.1), and in so doing, produces a spectrum of plane waves corresponding to the FT of the transmittance function, like on the right-hand side of eqn. (2.1). The various plane wave components propagate at different tilt angles with respect to the optic axis of the lens (i.e., the horizontal axis). The finer the features in the transparency, the broader the angular bandwidth of the plane wave spectrum. We'll consider one such plane wave component, propagating at angle θ with respect to the optic axis. We'll assume θ is small (paraxial approximation), so that

\frac{k_x}{k} = \sin \theta \cong \theta

and

\frac{k_z}{k} = \cos \theta \cong 1 - \frac{\theta^2}{2}

and

 \frac{1}{\cos \theta} \cong \frac{1}{1 - \frac{\theta^2}{2}} \cong 1 + \frac{\theta^2}{2}

In the figure, the plane wave phase, moving horizontally from the front focal plane to the lens plane, is

 e^{j k f \cos \theta} \,

and the spherical wave phase from the lens to the spot in the back focal plane is:

 e^{j k f / \cos \theta} \,

and the sum of the two path lengths is f(1 - θ²/2) + f(1 + θ²/2) = 2f, i.e., a constant value, independent of tilt angle θ, for paraxial plane waves. Each paraxial plane wave component of the field in the front focal plane appears as a PSF spot in the back focal plane, with an intensity and phase equal to the intensity and phase of the original plane wave component in the front focal plane. In other words, the field in the back focal plane is the Fourier transform of the field in the front focal plane.
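The path-length argument can be checked directly. The sketch below (in Python; the focal length and tilt angles are illustrative) confirms that the total front-focal-plane-to-lens-to-spot path, f cos θ + f/cos θ, equals 2f up to terms of order θ⁴:

```python
import math

# Total optical path for a plane wave at tilt angle theta:
# f*cos(theta) from the front focal plane to the lens, plus
# f/cos(theta) from the lens to the focal spot.
f = 1.0   # focal length (arbitrary units)

# Deviation of the total path from 2f for several paraxial tilts (radians):
deviation = {theta: abs(f * math.cos(theta) + f / math.cos(theta) - 2 * f)
             for theta in (0.02, 0.05, 0.1)}
```

The deviation scales as f·θ⁴/4, i.e., it vanishes to the order kept in the paraxial expansion, which is exactly why all paraxial components arrive at the back focal plane with the same accumulated phase.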

The beauty of performing a Fourier transform optically is that all FT components are computed simultaneously - in parallel - at the speed of light. As an example, light travels at a speed of roughly 1 ft./ns, so if a lens has a 1 ft. focal length, an entire 2D FT can be computed in about 2 ns (2 × 10⁻⁹ s). If the focal length is 1 in., then the time is under 200 ps. No electronic computer can compete with these kinds of numbers, or perhaps ever hope to (although digital computers such as the petaflop IBM Roadrunner may actually prove faster than optics, as improbable as that seems). The disadvantage is that, as the derivation shows, the FT relationship only holds for paraxial plane waves, so this FT "computer" is inherently bandlimited. On the other hand, since the wavelength of visible light is so minute in relation to even the smallest visible feature dimensions in the image i.e.,

 k^2 >>  k_x ^2 + k_y ^2

(for all kx, ky within the spatial bandwidth of the image, so that kz is nearly equal to k), the paraxial approximation is not terribly limiting in practice. And, of course, this is an analog - not a digital - computer, so precision is limited. Also, phase can be challenging to extract; often it is inferred interferometrically.
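The timing figures quoted above follow from simple arithmetic on the 2f optical path, as this small sketch shows (unit conversions only; c is rounded to 3 × 10⁸ m/s):

```python
# "Computation time" of the optical FT: light makes a 2f trip from the
# front focal plane to the back focal plane at the speed of light.
c = 3.0e8                        # speed of light, m/s (rounded)
ft, inch = 0.3048, 0.0254        # unit conversions to metres

t_1ft = 2 * ft / c               # 2f trip for a 1 ft focal length: ~2 ns
t_1in = 2 * inch / c             # 2f trip for a 1 in focal length: ~170 ps
```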

Side note 1. Object truncation and Gibbs phenomenon

We should recognize that the spatially modulated electric field, shown on the left-hand side of eqn. (2.1), typically only occupies a finite (usually rectangular) aperture in the x,y plane. The rectangular aperture function acts like a 2D square-top pulse function, and we usually assume the field to be zero outside this 2D rectangle. So, our spatial domain integrals for calculating the FT coefficients on the right-hand side of eqn. (2.1) are truncated at the boundary of this aperture. This step truncation can introduce inaccuracies in both theoretical calculations and measured values of the plane wave coefficients on the RHS of eqn. (2.1).

Whenever a function is discontinuously truncated in one FT domain, broadening and rippling are introduced in the other FT domain. A perfect example from optics is in connection with the Point spread function, which for on-axis plane wave illumination of a quadratic lens (with circular aperture) is an Airy function, J₁(x)/x. Literally, the point source has been "spread out" (with ripples added) to form the Airy point spread function. This source of error is known as Gibbs phenomenon, and it may be mitigated by simply ensuring that all significant content lies near the center of the transparency, or through the use of window functions which smoothly taper the field to zero at the frame boundaries. By the convolution theorem, the FT of an arbitrary transparency function, multiplied (or truncated) by an aperture function, is equal to the FT of the non-truncated transparency function convolved against the FT of the aperture function, which in this case becomes a type of "Green's function" or "impulse response function" in the spectral domain. The FT of a circular aperture function is J₁(x)/x and the FT of a rectangular aperture function is a product of sinc functions, sin(x)/x.
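The trade-off between hard truncation and smooth windowing is easy to demonstrate. The sketch below (in Python with numpy; the aperture length, the zero-padding factor, and the choice of a Hann taper are illustrative) compares the peak spectral sidelobe of a hard-edged 1-D aperture with that of a smoothly tapered one:

```python
import numpy as np

# Gibbs-style ripple: hard-edged aperture vs. a smooth (Hann) taper.
N = 1024
rect = np.ones(N)                # hard-edged 1-D aperture
hann = np.hanning(N)             # smoothly tapered aperture

# Zero-pad 8x so the spectra are finely sampled.
spec_rect = np.abs(np.fft.fft(rect, 8 * N))
spec_hann = np.abs(np.fft.fft(hann, 8 * N))

# Peak sidelobe relative to the main lobe. With 8x padding, the rect
# main lobe ends at bin 8 and the Hann main lobe near bin 16.
ripple_rect = spec_rect[10:4 * N].max() / spec_rect.max()   # ~0.22 (-13 dB)
ripple_hann = spec_hann[18:4 * N].max() / spec_hann.max()   # ~0.027 (-31 dB)
```

The taper lowers the ripple by more than an order of magnitude, at the cost of a main lobe (and hence PSF spot) roughly twice as wide, which is the standard windowing trade-off.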

Side note 2. Fourier analysis and functional decomposition

Even though the input transparency only occupies a finite portion of the x-y plane (Plane 1), the uniform plane waves comprising the plane wave spectrum occupy the entire x-y plane. This is why (for this purpose) we need only consider the plane wave phase in the z-direction (from Plane 1 to Plane 2), and not the phase transverse to the z-direction. It is, of course, very tempting to think that if a plane wave emanating from the finite aperture of the transparency is tilted too far from horizontal, it will somehow "miss" the lens altogether; but again, since the uniform plane wave extends infinitely far in all directions in the transverse (x-y) plane, none of the plane wave components can "miss" the lens.

This issue brings up perhaps the predominant difficulty with Fourier analysis, namely that we are trying to represent a function defined over a finite support (namely our input transparency, defined on its own aperture) with other functions (sinusoids) which have infinite support (i.e., they are defined over the entire x-y plane). This is unbelievably inefficient computationally, and is the principal reason why wavelets were conceived, that is, to represent a function (defined on a finite interval or area) in terms of oscillatory functions which are also defined over finite intervals or areas. That way, instead of getting the frequency content of the entire image all at once (along with the frequency content of the entire rest of the x-y plane, over which the image has zero value), we get instead the frequency content of different parts of the image, which is usually much simpler. Unfortunately, at least as far as this author knows, wavelets in the x-y plane don't correspond to any known type of propagating wave function, in the same way that Fourier's sinusoids correspond to plane wave functions. However the FTs of most wavelets are well known - maybe they can be shown to be equivalent to some useful type of propagating field.

On the other hand, sinc and Airy functions - which are not only the point spread functions of rectangular and circular apertures, respectively, but are also cardinal functions commonly used for functional decomposition in interpolation/sampling theory [Scott 1990] - do correspond to converging or diverging spherical waves, and therefore could potentially be implemented as a whole new functional decomposition of the object plane function, thereby leading to another point of view similar in nature to Fourier optics. This would basically be the same as conventional ray optics, but with diffraction effects included. In this case, each point spread function would be a type of "smooth pixel," in much the same way that a soliton on a fiber is a "smooth pulse."

Perhaps a lens figure-of-merit in this "point spread function" viewpoint would be to ask how well a lens transforms an Airy function in the object plane into an Airy function in the image plane, as a function of radial distance from the optic axis, or as a function of the size of the object plane Airy function. This is kind of like the Point spread function, except now we're really looking at it as a kind of input-to-output plane transfer function (like MTF), and not so much in absolute terms, relative to a perfect point. Similarly, Gaussian wavelets, which would correspond to the waist of a propagating Gaussian beam, could also potentially be used in still another functional decomposition of the object plane field.

Side note 3. Lens as a low-pass filter

A lens is basically a low-pass spatial filter (see Low-pass filter), or alternatively, a low-pass plane wave filter. This means that it tends to pass (from the object plane over onto the image plane) paraxial plane waves better than it does wide-angle plane waves. Blurring and loss of sharpness are due to loss of high (spatial) frequency content.
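The blurring effect of low-pass spatial filtering can be demonstrated in one dimension. The sketch below (in Python with numpy; the grid size and cutoff frequency stand in for a finite lens aperture and are illustrative) low-pass filters a sharp edge and measures how much its transition is spread out:

```python
import numpy as np

# Model the lens as an ideal low-pass spatial filter: zero all spectral
# content above a cutoff and observe that a sharp edge becomes blurred.
N = 512
edge = np.where(np.arange(N) < N // 2, 0.0, 1.0)   # sharp 1-D edge

spectrum = np.fft.fft(edge)
freqs = np.fft.fftfreq(N)          # spatial frequency, cycles/sample
cutoff = 0.05                      # the "aperture": keep only |f| <= 0.05
spectrum[np.abs(freqs) > cutoff] = 0.0
blurred = np.fft.ifft(spectrum).real

grad_sharp = np.max(np.abs(np.diff(edge)))      # 1.0: a one-sample jump
grad_blur = np.max(np.abs(np.diff(blurred)))    # ~0.1: spread over ~1/cutoff samples
```

The filtered edge rises over roughly 1/cutoff samples instead of one, which is exactly the loss of sharpness attributed above to the loss of high spatial-frequency content.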

Side note 4. Far-field and the 2 D2 / λ criterion

In the figure above, illustrating the Fourier transforming property of lenses, the lens is in the near field of the object plane transparency, therefore we may regard the object plane field as being a superposition of plane waves, each one of which propagates to the lens. We see this as follows, via the far-field criterion, defined as: Range = 2D²/λ where D is the maximum linear extent of the optical sources and λ is the wavelength. D of the transparency is on the order of cm (10⁻² m) and the wavelength of light is on the order of 10⁻⁶ m, therefore D/λ is on the order of 10⁴. This times D is on the order of 10² m, or hundreds of meters. On the other hand, the far-field distance from a PSF spot is on the order of λ, since D for the spot is on the order of λ, so that D/λ is on the order of unity; this times D (= λ) is on the order of λ (10⁻⁶ m). Since the lens is in the far field of a PSF spot, we may regard the field from the spot as being an asymptotic spherical wave, as in eqn. (2.2), not as a plane wave spectrum, as in eqn. (2.1). The lens is in the far field of any PSF spot, but in the near field of the entire transparency.
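The order-of-magnitude estimates above reduce to the following arithmetic (the 1 µm wavelength and 1 cm transparency size are the illustrative values used in the text):

```python
# Far-field (Fraunhofer) distance, Range = 2 D^2 / lambda, for the two
# cases discussed above.
wavelength = 1e-6        # ~1 micron light
D_transparency = 1e-2    # transparency extent, ~1 cm
D_spot = wavelength      # a PSF spot is roughly one wavelength across

range_transparency = 2 * D_transparency**2 / wavelength   # 200 m
range_spot = 2 * D_spot**2 / wavelength                   # 2e-6 m, ~2 wavelengths
```

A lens sitting centimetres from the transparency is thus far inside the transparency's 200 m far-field distance, yet far beyond the micron-scale far-field distance of any individual PSF spot.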

Side note 5. Coherence and Fourier Transforming

Whenever we work in the frequency domain, with an assumed e^{jωt} time dependence, we are implicitly assuming coherent (laser) light. Light of different frequencies will "spray" the plane wave spectrum out at different angles, and as a result these plane wave components will be focused at different places in the output plane. The Fourier transforming property of lenses works best with coherent light, unless there is some special reason to combine light of different frequencies to achieve some special purpose.

4F Correlator

As stated in the introduction, when the plane wave spectrum representation of the electric field (section 2) is combined with the Fourier transforming property of quadratic lenses (section 3), it leads naturally to the development of numerous 2D image processing devices. One of the primary applications of Fourier Optics is in the mathematical operations of cross-correlation and convolution, and these have historically been done with a device known as a 4F correlator, shown in the figure below.

4F Correlator

The 4F correlator is based on the convolution theorem from Fourier transform theory, which states that convolution in the spatial (x,y) domain is equivalent to direct multiplication in the spatial frequency (kx, ky) domain. Once again, a plane wave is assumed incident from the left and a transparency containing one 2D function, f(x,y), is placed in the input plane of the correlator, located one focal length in front of the first lens. The transparency spatially modulates the incident plane wave in magnitude and phase, like on the left-hand side of eqn. (2.1), and in so doing, produces a spectrum of plane waves corresponding to the FT of the transmittance function, like on the right-hand side of eqn. (2.1). That spectrum is then formed as an "image" one focal length behind the first lens, as shown. A transmission mask containing the FT of the second function, g(x,y), is placed in this same plane, one focal length behind the first lens, causing the transmission through the mask to be equal to the product, F(kx,ky) × G(kx,ky). This product now lies in the "input plane" of the second lens (one focal length in front), so that the FT of this product (i.e., the convolution of f(x,y) and g(x,y)), is formed in the back focal plane of the second lens.
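The convolution theorem underlying the 4F correlator is easy to verify numerically. The sketch below (in Python with numpy; the random "transparency" and the delta-function "filter mask" are illustrative stand-ins) performs the convolution as a spectral product, exactly as the optical system does, and checks it against the known result that convolving with a shifted delta simply shifts the field:

```python
import numpy as np

# Convolution via the convolution theorem: conv(f, g) = IFFT(FFT(f) * FFT(g)),
# the digital analogue of the 4F correlator's mask-in-the-Fourier-plane.
rng = np.random.default_rng(0)
N = 64
f = rng.standard_normal((N, N))    # stand-in for the input transparency

g = np.zeros((N, N))               # "filter": a delta function at (3, 5)
g[3, 5] = 1.0

# Multiply the spectra (the mask plane), then transform back (second lens).
conv = np.fft.ifft2(np.fft.fft2(f) * np.fft.fft2(g)).real

# Convolving with a delta at (3, 5) must circularly shift f by (3, 5).
expected = np.roll(np.roll(f, 3, axis=0), 5, axis=1)
```

Replacing g with an arbitrary reference pattern (or its complex conjugate spectrum, for correlation rather than convolution) gives the matched-filtering operation for which 4F correlators have historically been used.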

Afterword: Plane Wave Spectrum Within the Broader Context of Functional Decomposition

Electric fields are really just particular types of mathematical functions and, as such, may often be represented in many different ways. In the Huygens-Fresnel or Stratton-Chu viewpoints, we represent the electric field as a superposition of point sources, each of which gives rise to a Green's function field. The total field is then the weighted sum of all of the individual Green's function fields. That seems to be the most natural way of viewing the electric field for most people - no doubt because most of us have, at one time or another, drawn out the circles with compass and paper, much the same way Thomas Young did in his classic paper. However, it is by no means the only way to represent the electric field. As we have seen herein, the field may also be represented as a spectrum of sinusoidally varying plane waves. In addition, Frits Zernike proposed still another functional decomposition based on his Zernike polynomials, defined on the unit disc. The third-order (and lower) Zernike polynomials correspond to the normal lens aberrations. And still another functional decomposition could be made in terms of sinc and Airy functions, as in sampling theory. All of these functional decompositions have utility in different circumstances. The optical scientist having access to these various representational forms gains a richer insight into the nature of these marvelous fields and their properties. The reader is encouraged to embrace these different ways of looking at the field, rather than viewing them as conflicting or contradictory. Then the true beauty of optics begins to unfold.

Applications

Fourier optics is used in the field of optical information processing, the staple of which is the classical 4F processor.

The Fourier transform properties of a lens provide numerous applications in optical signal processing such as spatial filtering, optical correlation and computer generated holograms.

Fourier optical theory is used in interferometers, optical tweezers, atom traps, and quantum computing. Concepts of Fourier optics are used to reconstruct the phase of light intensity in the spatial frequency plane (see adaptive-additive algorithm).


References

  • Goodman, Joseph (2005). Introduction to Fourier Optics, 3rd ed. Roberts & Co Publishers. ISBN 0974707724.
  • Hecht, Eugene (1987). Optics, 2nd ed. Addison Wesley. ISBN 0-201-11609-X.
  • Wilson, Raymond (1995). Fourier Series and Optical Transform Techniques in Contemporary Optics. Wiley. ISBN 0471303577.
  • Scott, Craig (1998). Introduction to Optics and Optical Imaging. Wiley. ISBN 0-7803-3440-X.
  • Scott, Craig (1990). Modern Methods of Reflector Antenna Analysis and Design. Artech House. ISBN 0-89006-419-9.
  • Scott, Craig (1989). The Spectral Domain Method in Electromagnetics. Artech House.