Multispectral image
A multispectral image is one that captures image data within specific wavelength ranges across the electromagnetic spectrum. The wavelengths may be separated by filters or detected with instruments that are sensitive to particular wavelengths, including light from frequencies beyond the visible range, i.e. infrared and ultraviolet. Spectral imaging can allow extraction of additional information that the human eye fails to capture with its receptors for red, green and blue. It was originally developed for space-based imaging,[1] and has also found use in document and painting analysis.
Multispectral imaging measures light in a small number (typically 3 to 15) of spectral bands. Hyperspectral imaging is a special case of spectral imaging where often hundreds of contiguous spectral bands are available.[2]
Applications
Most radiometers for remote sensing (RS) acquire multispectral images. By dividing the spectrum into many bands, multispectral imaging is the opposite of panchromatic imaging, which records only the total intensity of radiation falling on each pixel. Usually, Earth observation satellites carry three or more radiometers (Landsat has seven). Each acquires one digital image (in remote sensing, called a 'scene') in a small spectral band. The bands are grouped into wavelength regions based on the origin of the light. The shortest-wavelength region is the ultraviolet (wavelengths < 0.4 µm), followed by the visible, or VIS, region from 0.4 µm to 0.7 µm. The others are the near-infrared (NIR) from 0.7 µm to 1 µm, the short-wave infrared (SWIR) from 1 µm to 2.5 µm, the mid-infrared (MIR) from 4 µm to 6 µm, and the far or thermal infrared (FIR or thermal) from 8 µm to 14 µm.[3] In the Landsat case, the seven scenes comprise a seven-band multispectral image. Spectral imaging with more numerous bands, finer spectral resolution or wider spectral coverage may be called hyperspectral or ultraspectral.
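As an illustration of how per-band scenes form a single multispectral image, the sketch below stacks several two-dimensional band arrays into one data cube. It is a minimal Python example assuming NumPy; the band count, array sizes and random placeholder data are illustrative assumptions, not properties of any particular sensor.

```python
import numpy as np

# Hypothetical example: stack seven single-band "scenes" (2-D arrays)
# into one multispectral image cube of shape (bands, rows, cols).
rows, cols, n_bands = 512, 512, 7                              # illustrative sizes only
scenes = [np.random.rand(rows, cols) for _ in range(n_bands)]  # stand-ins for real scenes

cube = np.stack(scenes, axis=0)            # multispectral image: (7, 512, 512)
pixel_spectrum = cube[:, 100, 200]         # the 7-band spectrum of one pixel
print(cube.shape, pixel_spectrum.shape)    # (7, 512, 512) (7,)
```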
The technology has also assisted in the interpretation of ancient papyri, such as those found at Herculaneum, by imaging the fragments in the infrared range (1000 nm). Often, the text on the documents appears to the naked eye as black ink on black paper. At 1000 nm, the difference in how paper and ink reflect infrared light makes the text clearly readable. It has also been used to image the Archimedes palimpsest by imaging the parchment leaves at wavelengths from 365 to 870 nm, and then using advanced digital image processing techniques to reveal the undertext with Archimedes' work.[4]
Multispectral imaging can be employed for investigation of paintings and other works of art.[5] The painting is irradiated by ultraviolet, visible and infrared rays and the reflected radiation is recorded with a camera sensitive to these regions of the spectrum. The image can also be registered using transmitted instead of reflected radiation. In special cases the painting can be irradiated by UV, VIS or IR rays and the fluorescence of pigments or varnishes can be registered.[6]
Spectral bands
The wavelengths are approximate; exact values depend on the particular satellite's instruments:
- Blue, 450 to 515-520 nm, is used for atmosphere and deep water imaging, and can reach depths up to 150 feet (50 m) in clear water.
- Green, 515-520 to 590-600 nm, is used for imaging vegetation and deep water structures, up to 90 feet (30 m) in clear water.
- Red, 600-630 to 680-690 nm, is used for imaging man-made objects, in water up to 30 feet (9 m) deep, soil, and vegetation.
- Near infrared (NIR), 750-900 nm, is used primarily for imaging vegetation.
- Mid-infrared (MIR), 1550-1750 nm, is used for imaging vegetation, soil moisture content, and some forest fires.
- Far-infrared (FIR), 2080-2350 nm, is used for imaging soil, moisture, geological features, silicates, clays, and fires.
- Thermal infrared, 10400-12500 nm, uses emitted instead of reflected radiation to image geological structures, thermal differences in water currents, and fires, and for night studies.
- Radar and related technologies are useful for mapping terrain and for detecting various objects.
Spectral band usage
For different purposes, different combinations of spectral bands can be used. They are usually represented with red, green, and blue channels. Mapping of bands to colors depends on the purpose of the image and the personal preferences of the analysts. Thermal infrared is often omitted from consideration due to poor spatial resolution, except for special purposes.
- True-color uses only red, green, and blue channels, mapped to their respective colors. As a plain color photograph, it is good for analyzing man-made objects, and is easy to understand for beginner analysts.
- Green-red-infrared, where the blue channel is replaced with near infrared, is used for vegetation, which is highly reflective in near IR; it then shows as blue. This combination is often used to detect vegetation and camouflage.
- Blue-NIR-MIR, where the blue channel uses visible blue, green uses NIR (so vegetation stays green), and MIR is shown as red. Such images allow the water depth, vegetation coverage, soil moisture content, and the presence of fires to be seen, all in a single image.
Many other combinations are in use. NIR is often shown as red, causing vegetation-covered areas to appear red.
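As an illustration of such band-to-channel mapping, the following sketch builds the common NIR-red-green false-colour composite, in which vegetation appears red. It assumes NumPy and uses random placeholder arrays in place of real, co-registered bands; the array names and sizes are illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch: build an NIR-red-green false-colour composite,
# the common mapping in which vegetation (bright in NIR) appears red.
rows, cols = 256, 256
nir   = np.random.rand(rows, cols)   # placeholder for a near-infrared band
red   = np.random.rand(rows, cols)   # placeholder for a visible red band
green = np.random.rand(rows, cols)   # placeholder for a visible green band

def stretch(band):
    """Linear stretch of one band to the 0-1 display range."""
    lo, hi = band.min(), band.max()
    return (band - lo) / (hi - lo) if hi > lo else np.zeros_like(band)

# Map NIR -> red channel, red -> green channel, green -> blue channel.
composite = np.dstack([stretch(nir), stretch(red), stretch(green)])  # (rows, cols, 3)
print(composite.shape)
```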
Classification
Unlike other aerial photographic and satellite image interpretation work, multispectral images do not make it easy to identify feature types directly by visual inspection. Hence the remote sensing data must first be classified, and then processed with various data enhancement techniques, to help the user understand the features present in the image.
Such classification is a complex task that, depending on the algorithm used, can involve rigorous validation of the training samples. The techniques fall mainly into two types:
- Supervised classification techniques
- Unsupervised classification techniques
Supervised classification makes use of training samples: areas on the ground for which ground truth is available, that is, whose contents are known. The spectral signatures of the training areas are used to search for similar signatures in the remaining pixels of the image, which are then classified accordingly. Because it relies on training samples, this approach is called supervised classification. Expert knowledge is very important in this method, since poor selection of training samples can introduce bias and badly affect the accuracy of classification. One popular technique is maximum likelihood classification, which calculates the probability of a pixel belonging to each class (i.e. feature) and assigns the pixel to its most probable class.
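A minimal sketch of this idea follows, assuming NumPy and one multivariate Gaussian per class fitted from labelled training pixels; the arrays, sizes and random placeholder data are illustrative assumptions, not any particular software package's implementation.

```python
import numpy as np

# Hypothetical sketch: maximum likelihood classification with one Gaussian
# per class, fitted from labelled training pixels (the "training samples").
rng = np.random.default_rng(0)
n_bands = 4

# Placeholder training data: pixel spectra (n_samples, n_bands) and class labels.
train_X = rng.normal(size=(300, n_bands)) + np.repeat(np.arange(3), 100)[:, None]
train_y = np.repeat(np.arange(3), 100)

# Fit a mean vector and covariance matrix for each class.
classes = np.unique(train_y)
params = {}
for c in classes:
    Xc = train_X[train_y == c]
    params[c] = (Xc.mean(axis=0), np.cov(Xc, rowvar=False))

def log_likelihood(X, mean, cov):
    """Multivariate Gaussian log-density (constant terms dropped)."""
    diff = X - mean
    inv = np.linalg.inv(cov)
    maha = np.einsum('ij,jk,ik->i', diff, inv, diff)
    return -0.5 * (maha + np.log(np.linalg.det(cov)))

# Classify each pixel spectrum by its most probable class.
image_pixels = rng.normal(size=(1000, n_bands)) + 1.0   # placeholder image pixels
scores = np.column_stack([log_likelihood(image_pixels, *params[c]) for c in classes])
labels = classes[np.argmax(scores, axis=1)]
print(labels[:10])
```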
In unsupervised classification, no prior knowledge is required to group the features of the image. The natural clustering of the pixel values, i.e. the gray levels of the pixels, is observed, and a threshold is chosen to decide the number of classes in the image. The finer the threshold, the more classes there will be; beyond a certain limit, however, a single ground class will be split across several clusters, so that only within-class variation is being represented. After the clusters are formed, ground truth validation is done to identify the class to which each image pixel belongs. Thus, unsupervised classification requires no a priori information about the classes. One popular unsupervised method is the k-means clustering algorithm.
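The sketch below illustrates the unsupervised approach with a plain k-means clusterer over pixel spectra, assuming NumPy; the data are random placeholders and the code is a minimal illustration rather than a production implementation.

```python
import numpy as np

# Hypothetical sketch: unsupervised classification of pixel spectra with a
# basic k-means clusterer; ground truth is only consulted afterwards to
# label the resulting clusters.
rng = np.random.default_rng(1)
pixels = rng.normal(size=(2000, 4))           # (n_pixels, n_bands) stand-in data

def kmeans(X, k, n_iter=50):
    """Plain k-means: returns cluster labels and cluster centres."""
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each pixel to the nearest centre.
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each centre to the mean of its assigned pixels.
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels, centres

labels, centres = kmeans(pixels, k=5)
print(np.bincount(labels))                    # pixels per spectral cluster
```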
Multispectral data analysis software
- MicroMSI is endorsed by the NGA.
- Opticks is an open-source remote sensing application.
- MultiSpec is freeware multispectral analysis software.[7]
- Gerbil is open source multispectral visualization and analysis software.[8]
See also
- Hyperspectral imaging
- Imaging spectroscopy
- Imaging spectrometer
- Liquid crystal tunable filter
- Multispectral pattern recognition
- Remote sensing
- Spy satellite
- Satellite imagery
References
- Harold Hough: Satellite Surveillance, Loompanics Unlimited, 1991, ISBN 1-55950-077-8
- ↑ R.A. Schowengerdt. Remote sensing: Models and methods for image processing, Academic Press, 3rd ed., (2007)
- ↑ Hagen, Nathan; Kudenov, Michael W. "Review of snapshot spectral imaging technologies". Optical Engineering, SPIE Digital Library. Retrieved 2 February 2017.
- ↑ Valerie C. Coffey, Multispectral Imaging Moves into the Mainstream, Optics and Photonics News, April 2012
- ↑ "Multi-spectral imaging of the Archimedes Palimpsest". The Archimedes Palimpsest Project. Retrieved 17 September 2015.
- ↑ S. Baronti, A. Casini, F. Lotti, and S. Porcinai, Multispectral imaging system for the mapping of pigments in works of art by use of principal-component analysis, Applied Optics Vol. 37, Issue 8, pp. 1299-1309 (1998)
- ↑ Multispectral imaging at ColourLex
- ↑ "MultiSpec: a tool for multispectral--hyperspectral image data analysis". researchgate.net. 2002-12-01. Retrieved 2017-04-28.
- ↑ "Gerbil - A Novel Software Framework for Visualization and Analysis in the Multispectral Domain". diglib.eg.org. Retrieved 2017-04-28.
External links
- Sc.chula.ac.th
- Academic.emporia.edu
- Multispectral imaging at ColourLex