Video tracking
From Wikipedia, the free encyclopedia
Video tracking is the process of locating a moving object (or multiple objects) over time using a camera. An algorithm analyses the video frames and outputs the location of the moving targets within each frame.
The main difficulty in video tracking is associating target locations across consecutive video frames, especially when the objects move fast relative to the frame rate. Video tracking systems therefore usually employ a motion model, which describes how the image of the target may change for the different possible motions of the tracked object.
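As a minimal illustration of such a motion model, the sketch below assumes a constant-velocity dynamic (a common simplifying assumption; practical trackers often embed this in a Kalman filter). The state vector and time step are hypothetical, chosen only for the example.

```python
import numpy as np

# Constant-velocity motion model (an assumed, simplified dynamic).
# State: [x, y, vx, vy] in pixels and pixels per frame.

def predict(state, dt=1.0):
    """Predict the next target location by advancing position with velocity."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    return F @ state

state = np.array([100.0, 50.0, 4.0, -2.0])
predicted = predict(state)  # position moves to (104, 48); velocity is unchanged
```

The predicted location gives the tracker a search region for the target in the next frame, which is what makes fast motion at low frame rates tractable.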
Examples of simple motion models are:
- to track planar objects, the motion model is a 2D transformation (affine transformation or homography) of an image of the object (e.g. the initial frame)
- when the target is a rigid 3D object, the motion model defines its aspect depending on its 3D position and orientation
- for video compression, key frames are divided into macroblocks; the motion model predicts a frame from a key frame by translating each macroblock by a motion vector given by the motion parameters
- the image of a deformable object can be covered with a mesh; the motion of the object is then defined by the positions of the mesh nodes.
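The first case above, a 2D affine motion model for a planar target, can be sketched as follows. The parameterisation (six numbers for the linear part and the translation) is one common convention, not the only one.

```python
import numpy as np

# Affine motion model for a planar target: x' = A x + t,
# parameterised by (a, b, c, d, e, f). A homography would add
# two more parameters and a projective division.

def affine_warp(points, params):
    """Map Nx2 template coordinates into the current frame."""
    a, b, c, d, e, f = params
    A = np.array([[a, b], [c, d]], dtype=float)
    t = np.array([e, f], dtype=float)
    return points @ A.T + t

template = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
# identity linear part plus a translation of (5, 3): a pure shift of the target
moved = affine_warp(template, (1, 0, 0, 1, 5, 3))
```

Tracking then amounts to estimating the six parameters per frame, e.g. by minimising the difference between the warped template and the current image.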
The role of the tracking algorithm is to analyse the video frames in order to estimate the motion parameters. These parameters characterize the location of the target.
There are several approaches possible to perform the tracking, among which:
- Blob tracking: Segmentation of object interior (for example blob detection, block-based correlation or optical flow)
- Mean shift clustering or mean shift analysis: mean shift analysis is typically applied to the difference image; after a few iterations it converges on the cluster centre.
- Contour tracking: Detection of object boundary (e.g. active contours or Condensation algorithm)
- Visual feature matching: image registration based on matching distinctive visual features between frames
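The simplest of these approaches, blob tracking by frame differencing, can be sketched in a few lines. The images, threshold, and centroid computation below are illustrative assumptions; real systems typically add background modelling and connected-component labelling.

```python
import numpy as np

# Blob tracking by frame differencing: pixels that changed between two
# frames are segmented by a threshold, and the blob is located by the
# centroid of the changed region.

def blob_centroid(prev, curr, thresh=30):
    """Return the centroid (row, col) of changed pixels, or None if no motion."""
    diff = np.abs(curr.astype(int) - prev.astype(int))
    mask = diff > thresh
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean()

prev = np.zeros((64, 64), dtype=np.uint8)
curr = prev.copy()
curr[20:24, 30:34] = 255           # a bright blob appears in the new frame
centroid = blob_centroid(prev, curr)  # → (21.5, 31.5)
```

Feeding each frame's centroid into a motion model such as the constant-velocity predictor closes the loop between segmentation and tracking.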