Temporal anti-aliasing
Temporal anti-aliasing seeks to reduce or remove the effects of temporal aliasing, which results from insufficient temporal sampling. A common example of temporal aliasing in film is the apparent backwards rotation of vehicle wheels, the so-called wagon-wheel effect.
Temporal anti-aliasing in computer-generated imagery
In computer graphics and special effects, temporal anti-aliasing is usually done by analogy with the motion blur of real footage shot on video or film cameras. Moving images are typically sampled (recorded) and played back at 25–60 Hz. A single frame will often exhibit "streaking" because the scene is not usually static for the duration of the exposure; shorter exposure times give less streaking. In effect, the exposure time of the device allows it to take many more temporal samples and combine them optically, reducing the temporal resolution that must be captured and hence the information storage requirements.
A naive approach to generating computer graphics and effects samples a perfect "snapshot" at a single moment in time. This is fast and often "good enough" for many applications. However, when realistic effects are to be integrated with real footage, temporal anti-aliasing becomes an important element.
The approach is similar to spatial anti-aliasing: the generated scene is oversampled and then downsampled. That is, it is sampled at some multiple of the desired frame rate (8–16 times is usually effective) and the multiple samples for each frame are combined into a single frame. The method of combination may seek to replicate the optical characteristics of a particular medium, e.g. film, but a simple average of the samples is often sufficient.

The oversampling need not be uniform over the period of the frame. In the same way that the exposure of each frame of film might cover only half of the frame period (approximately 0.021 s for 24 Hz footage, which has about 0.042 s between frames), the samples for each frame of a generated scene might be drawn only from a fraction of the frame period after (and sometimes before) each frame's sample point. The sampling may be adjusted to match the characteristics of the real footage with which the generated scene is to be combined.
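A minimal sketch of this oversample-and-average scheme is shown below in Python. It assumes a hypothetical render_scene(t) function (not part of the article) that returns the scene as a NumPy array of pixel values at absolute time t; the frame rate, sample count, and shutter fraction are illustrative parameters rather than fixed requirements.

```python
import numpy as np

def render_motion_blurred_frame(render_scene, frame_index, frame_rate=24.0,
                                samples_per_frame=16, shutter_fraction=0.5):
    """Average several sub-frame samples to approximate motion blur.

    render_scene      -- assumed callable: returns a NumPy image for time t (seconds)
    frame_rate        -- playback rate in Hz (e.g. 24 for film)
    samples_per_frame -- temporal oversampling factor (8-16 is usually effective)
    shutter_fraction  -- portion of the frame period the "shutter" is open;
                         0.5 mimics a half-period film exposure (~0.021 s at 24 Hz)
    """
    frame_period = 1.0 / frame_rate             # time between frame sample points
    exposure = shutter_fraction * frame_period  # window over which samples are spread
    frame_start = frame_index * frame_period

    # Take evenly spaced samples within the exposure window and average them
    # into a single output frame (a simple mean, as described above).
    accumulator = None
    for i in range(samples_per_frame):
        t = frame_start + exposure * (i + 0.5) / samples_per_frame
        sample = render_scene(t).astype(np.float64)
        accumulator = sample if accumulator is None else accumulator + sample
    return accumulator / samples_per_frame
```

In this sketch the shutter_fraction parameter plays the role of the film exposure described above: lowering it concentrates the samples in a shorter window and produces less streaking, while raising it spreads them across more of the frame period and produces more.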