Template matching
From Wikipedia, the free encyclopedia
Template matching is a technique in digital image processing for finding small parts of an image which match a template image. It can be used in manufacturing as a part of quality control,[1] as a way to navigate a mobile robot,[2] or as a way to detect edges in images.[3]
Approach
There are different approaches to template matching. Some are faster than others, and some find better matches.
The basic method of template matching uses a convolution mask (template), tailored to a specific feature of the search image, which we want to detect. This technique can be easily performed on grey images or edge images. The convolution output will be highest at places where the image structure matches the mask structure, that is, where large image values get multiplied by large mask values.
This method is normally implemented by first picking out a part of the search image to use as a template. We will call the search image S(x, y), where (x, y) represent the coordinates of each pixel in the search image, and the template T(xt, yt), where (xt, yt) represent the coordinates of each pixel in the template. We then move the center (or the origin) of the template T(xt, yt) over each (x, y) point in the search image and calculate the sum of products between the coefficients in S(x, y) and T(xt, yt) over the whole area spanned by the template. As all possible positions of the template with respect to the search image are considered, the position with the highest score is the best position. This method is sometimes referred to as 'Linear Spatial Filtering' and the template is called a filter mask.
For example, one way to handle translation problems in images using template matching is to compare the intensities of the pixels, using the SAD (sum of absolute differences) measure.[4]
A pixel in the search image with coordinates (xs, ys) has intensity S(xs, ys) and a pixel in the template with coordinates (xt, yt) has intensity T(xt, yt). Thus the absolute difference in the pixel intensities is defined as Diff(xs, ys, xt, yt) = | S(xs, ys) − T(xt, yt) |.
The idea of looping through the pixels in the search image, translating the origin of the template to each pixel and taking the SAD measure, is expressed mathematically as:

SAD(x, y) = Σ_{i=0..Trows−1} Σ_{j=0..Tcols−1} Diff(x + i, y + j, i, j)
Srows and Scols denote the rows and columns of the search image, and Trows and Tcols denote the rows and columns of the template image, respectively. In this method the lowest SAD score gives the estimate for the best position of the template within the search image. The method is simple to implement and understand, but it is one of the slowest methods.
Implementation
In this simple implementation, it is assumed that the method described above is applied to grey images; this is why Grey is used as the pixel intensity.
double minSAD = DBL_MAX;

// loop through the search image
for ( int x = 0; x <= S_rows - T_rows; x++ ) {
    for ( int y = 0; y <= S_cols - T_cols; y++ ) {
        double SAD = 0.0;

        // loop through the template image
        for ( int i = 0; i < T_rows; i++ )
            for ( int j = 0; j < T_cols; j++ ) {
                pixel p_SearchIMG   = S[x+i][y+j];
                pixel p_TemplateIMG = T[i][j];
                SAD += abs( p_SearchIMG.Grey - p_TemplateIMG.Grey );
            }

        // save the best position found so far
        if ( SAD < minSAD ) {
            minSAD = SAD;
            position.bestRow = x;
            position.bestCol = y;
            position.bestSAD = SAD;
        }
    }
}
One way to perform template matching on color images is to decompose the pixels into their color components and measure the quality of match between the color template and search image using the sum of the SAD computed for each color separately.
Speeding up the Process
In the past, this type of spatial filtering was normally used only in dedicated hardware solutions because of the computational complexity of the operation.[5] However, this complexity can be reduced by filtering in the frequency domain of the image, referred to as 'frequency domain filtering'; this is done through the use of the convolution theorem.
Another way of speeding up the matching process is through the use of an image pyramid. This is a series of images, at different scales, which are formed by repeatedly filtering and subsampling the original image in order to generate a sequence of reduced-resolution images.[6] These lower-resolution images can then be searched for a template (with a similarly reduced resolution) in order to yield possible start positions for searching at the larger scales. The larger images can then be searched in a small window around the start position to find the best template location.
Other methods can handle problems such as translation, scale and image rotation.[7] [8]
Improving the performance of the matching
Improvements can be made to the matching method by using more than one template; these other templates can have different scales and rotations.
Similar Methods
Other methods which are similar include 'Stereo matching,' 'Image registration' and 'Scale-invariant feature transform.'
Examples of Use
Template matching has many applications and is used in fields such as face recognition (see facial recognition system) and medical image processing. Systems have been developed and used in the past to count the number of faces that walk across part of a bridge within a certain amount of time. Other systems include automated calcified nodule detection within digital chest X-rays.[9]
References
- ^ Aksoy, M. S., O. Torkul, and I. H. Cedimoglu. "An industrial visual inspection system that uses inductive learning." Journal of Intelligent Manufacturing 15.4 (August 2004): 569(6). Expanded Academic ASAP. Thomson Gale.
- ^ Kyriacou, Theocharis, Guido Bugmann, and Stanislao Lauria. "Vision-based urban navigation procedures for verbally instructed robots." Robotics and Autonomous Systems 51.1 (April 30, 2005): 69-80. Expanded Academic ASAP. Thomson Gale.
- ^ Wang, Ching Yang, Ph.D. "Edge Detection Using Template Matching (Image Processing, Threshold Logic, Analysis, Filters)". Duke University, 1985, 288 pages; AAT 8523046
- ^ Olson, Clark. "Computer Vision". Lectures. University of Washington, Bothell, 2008
- ^ Gonzalez, R., Woods, R., Eddins, S. "Digital Image Processing Using MATLAB". Prentice Hall, 2004
- ^ E. H. Adelson, C. H. Anderson, J. R. Bergen, P. J. Burt and J. M. Ogden, Pyramid methods in image processing http://web.mit.edu/persci/people/adelson/pub_pdfs/RCA84.pdf
- ^ Yuan, Po, M.S.E.E. "Translation, scale, rotation and threshold invariant pattern recognition system". The University of Texas at Dallas, 1993, 62 pages; AAT EP13780
- ^ H. Y. Kim and S. A. Araújo, "Grayscale Template-Matching Invariant to Rotation, Scale, Translation, Brightness and Contrast," IEEE Pacific-Rim Symposium on Image and Video Technology, Lecture Notes in Computer Science, vol. 4872, pp. 100-113, 2007.
- ^ Ashley Aberneithy. "Automatic Detection of Calcified Nodules of Patients with Tuberculous". University College London, 2007