Robust real-time object detection

The Viola and Jones object detection framework is the first object detection framework to provide competitive object detection rates in real time.[1] Although it can be trained to detect a variety of object classes, it was motivated primarily by the problem of face detection. This article introduces the contributions that made this advance possible.

Components of the Framework

[Figure: Feature types used by Viola and Jones]

Feature Types and Evaluation

The features employed by the detection framework all involve sums of image pixels within rectangular areas. As such, they bear some resemblance to Haar basis functions, which have been used previously in the realm of image-based object detection.[2] However, since the features used by Viola and Jones all rely on more than one rectangular area, they are generally more complex. The figure at right illustrates the three different types of features used in the framework. The value of any given feature is simply the sum of the pixels within the clear rectangles subtracted from the sum of the pixels within the shaded rectangles. Rectangular features of this sort are rather primitive when compared to alternatives such as steerable filters. Although they are sensitive to vertical and horizontal structure, their response is considerably coarser. However, with the use of an image representation called the integral image, rectangular features can be evaluated in constant time, which gives them a considerable speed advantage over their more sophisticated relatives.
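
To make the definition concrete, the following is a minimal sketch in Python (NumPy assumed) of a two-rectangle feature evaluated directly from pixel values. The window size matches the paper's 24x24 sub-windows, but the rectangle coordinates are hypothetical, chosen only for illustration.

    import numpy as np

    # A toy 24x24 grayscale sub-window; random values stand in for intensities.
    window = np.random.randint(0, 256, size=(24, 24))

    # A hypothetical horizontal two-rectangle feature: the sum of the pixels
    # in the clear (right) rectangle is subtracted from the sum of the pixels
    # in the shaded (left) rectangle.
    shaded = int(window[4:12, 4:12].sum())    # left 8x8 rectangle
    clear = int(window[4:12, 12:20].sum())    # adjacent right 8x8 rectangle
    feature_value = shaded - clear

Evaluated this way, the cost grows with the area of the rectangles; the integral image described below removes that dependence.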


As the name suggests, the value at any point (x,y) in the integral image is just the sum of all the pixels above and to the left of (x,y), inclusive:[3]


ii(x,y) = \sum_{x' \le x,y' \le y} i(x',y')


Moreover, the integral image can be computed efficiently in a single pass over the image, using the fact that the value in the integral image at (x,y) is just:


ii(x,y) = ii(x,y-1) + \sum_{x' \le x} i(x',y)
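
As a minimal sketch of that single pass, assuming NumPy and a 2-D grayscale array (the function name is chosen here, not taken from the paper):

    import numpy as np

    def integral_image(i):
        """One pass over the image: ii[y, x] holds the sum of all pixels
        above and to the left of (x, y), inclusive."""
        ii = np.zeros(i.shape, dtype=np.int64)
        for y in range(i.shape[0]):
            row_sum = 0                      # cumulative sum of row y so far
            for x in range(i.shape[1]):
                row_sum += int(i[y, x])
                # ii(x, y) = ii(x, y-1) + sum of row y up to column x
                ii[y, x] = row_sum + (ii[y - 1, x] if y > 0 else 0)
        return ii

In practice the same array is obtained with two cumulative sums, np.cumsum(np.cumsum(i, axis=0), axis=1); the explicit loop is kept here only to mirror the recurrence.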

[Figure: Finding the value of a rectangle]


Once the integral image has been computed, the sum of the pixels within any rectangle can be evaluated in just four array references. Specifically, using the notation in the figure at right, the value is just:


\sum_{A(x) < x' \le B(x),\; A(y) < y' \le D(y)} i(x',y') = ii(A) + ii(C) - ii(B) - ii(D)


Because each rectangular area in a feature is always adjacent to at least one other rectangle, it follows that any two-rectangle feature can be computed in six array references, any three-rectangle feature in eight, and any four-rectangle feature in just nine.
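
Continuing the sketch above, the four-reference rectangle sum and a six-reference two-rectangle feature might look as follows; the inclusive corner coordinates and function names are illustrative assumptions, not the paper's notation.

    def rect_sum(ii, top, left, bottom, right):
        """Sum of the pixels in the rectangle with the given inclusive
        corners, using at most four references into the integral image ii."""
        total = int(ii[bottom, right])
        if top > 0:
            total -= int(ii[top - 1, right])
        if left > 0:
            total -= int(ii[bottom, left - 1])
        if top > 0 and left > 0:
            total += int(ii[top - 1, left - 1])
        return total

    def two_rect_feature(ii, top, left, height, width):
        """Shaded (left) half minus clear (right) half. Because the halves
        share an edge, only six distinct array references are involved."""
        half = width // 2
        shaded = rect_sum(ii, top, left, top + height - 1, left + half - 1)
        clear = rect_sum(ii, top, left + half, top + height - 1, left + width - 1)
        return shaded - clear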

Learning Algorithm

The speed with which features can be evaluated does not, however, compensate for their sheer number. For example, a standard 24x24 pixel sub-window contains a total of 45,396 possible features, and it would be prohibitively expensive to evaluate them all at detection time. Thus, the object detection framework employs a variant of the learning algorithm AdaBoost both to select the best features and to train classifiers that use them.
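
As an illustration of what one boosting round selects, here is a brute-force sketch of choosing the best decision stump, a single feature compared against a threshold. The array layout and names are assumptions made for this sketch, and a practical implementation (as in the paper) scans sorted feature values rather than trying every threshold for every feature.

    import numpy as np

    def best_stump(feature_values, labels, weights):
        """Select the weak classifier for one AdaBoost round.

        feature_values: (n_features, n_examples) array; row f holds rectangle
                        feature f evaluated on every training example.
        labels:         (n_examples,) array of +1 (object) or -1 (background).
        weights:        (n_examples,) non-negative weights summing to 1.
        Returns (feature index, threshold, polarity, weighted error).
        """
        best = (None, None, None, np.inf)
        for f, values in enumerate(feature_values):
            for threshold in np.unique(values):
                for polarity in (+1, -1):
                    # Predict +1 when polarity * value < polarity * threshold.
                    preds = np.where(polarity * values < polarity * threshold, 1, -1)
                    error = weights[preds != labels].sum()
                    if error < best[3]:
                        best = (f, threshold, polarity, error)
        return best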

[Figure: Cascade architecture]

Cascade Architecture

The evaluation of the strong classifiers generated by the learning process can be done quickly, but it is not fast enough to run in real time. For this reason, the strong classifiers are arranged in a cascade in order of complexity, where each successive classifier is trained only on those examples which pass through the preceding classifiers. If at any point in the cascade a classifier rejects the sub-window under inspection, no further processing is performed and the search moves on to the next sub-window (see figure at right). The cascade therefore has the form of a degenerate decision tree. In the case of faces, the first classifier in the cascade, called the attentional operator, uses only two features to achieve a false negative rate of approximately 0% and a false positive rate of 40%.[4] The effect of this single classifier is to reduce by roughly half the number of times the entire cascade is evaluated.
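
The early-exit control flow can be sketched as follows; representing each stage as a (classifier, threshold) pair is an assumption made here for illustration, not the paper's data structure.

    def cascade_accepts(stages, sub_window):
        """Pass a sub-window through the cascade of strong classifiers.
        The first stage whose score falls below its threshold rejects
        the sub-window, and no later stage is evaluated for it."""
        for classify, threshold in stages:
            if classify(sub_window) < threshold:
                return False   # rejected: move on to the next sub-window
        return True            # survived every stage: report a detection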

The cascade architecture has interesting implications for the performance of the individual classifiers. Because the activation of each classifier depends entirely on the behavior of its predecessor, the false positive rate for an entire cascade of K classifiers is:


F = \prod_{i=1}^K f_i


Similarly, the detection rate is:


D = \prod_{i=1}^K d_i

where f_i and d_i are, respectively, the false positive rate and the detection rate of the ith classifier on the examples that reach it.


Thus, to match the false positive rates typically achieved by other detectors, each classifier can get away with having surprisingly poor performance. For example, for a 32-stage cascade to achieve a false positive rate of 10^{-6}, each classifier need only achieve a false positive rate of about 65%. At the same time, however, each classifier needs to be exceptionally capable if it is to achieve adequate detection rates. For example, to achieve a detection rate of about 90%, each classifier in the aforementioned cascade needs to achieve a detection rate of approximately 99.7%.
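
Both per-stage targets are simply the Kth root of the desired overall rate, as this quick check confirms:

    K = 32
    per_stage_fp = (1e-6) ** (1 / K)   # ≈ 0.649: each stage may pass ~65% of non-faces
    per_stage_det = 0.90 ** (1 / K)    # ≈ 0.9967: each stage must keep ~99.7% of faces
    print(per_stage_fp, per_stage_det)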


References

  1. ^ Viola, P.; Jones, M.: Robust Real-time Object Detection, IJCV 2004. See pages 1 and 3.
  2. ^ Papageorgiou, C.; Oren, M.; Poggio, T.: A General Framework for Object Detection, International Conference on Computer Vision, 1998.
  3. ^ Viola, P.; Jones, M.: Robust Real-time Object Detection, IJCV 2004. See page 5.
  4. ^ Viola, P.; Jones, M.: Robust Real-time Object Detection, IJCV 2004. See page 11.
