Camera resectioning
Camera resectioning (often called camera calibration) is frequently used as an early stage in computer vision, especially in the field of augmented reality.
When a camera is used, light from the environment is focused on an image plane and captured. This process reduces the dimensions of the data taken in by the camera from three to two (light from a 3D scene is stored on a 2D image). Each pixel on the image plane therefore corresponds to a ray of light from the original scene. Camera resectioning determines which incoming ray is associated with each pixel on the resulting image. In an ideal pinhole camera, a simple projection matrix suffices. In more complex camera systems, misaligned lenses and deformations in their structures can introduce more complex distortions in the final image (see Hartley & Zisserman, chapter 7).
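In the ideal pinhole model mentioned above, the projection has a simple closed form: a point at camera coordinates (X, Y, Z), with Z > 0, maps to image coordinates (x, y) through the standard perspective equations, where f denotes the focal length:

    x = f\,\frac{X}{Z}, \qquad y = f\,\frac{Y}{Z}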
The camera projection matrix is derived from the intrinsic and extrinsic parameters of the camera and is often represented as a series of transformations, e.g. a matrix of camera intrinsic parameters, a 3×3 rotation matrix, and a translation vector. The camera projection matrix can be used to associate points in a camera's image space with locations in 3D world space.
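As an illustration, the decomposition above can be written P = K [R | t] and applied to a point in homogeneous coordinates. A minimal numerical sketch, where all parameter values are made up for the example rather than taken from a real camera:

    import numpy as np

    # Intrinsic parameters: focal lengths (in pixels) and principal point.
    # These values are arbitrary example numbers.
    K = np.array([[800.0,   0.0, 320.0],
                  [  0.0, 800.0, 240.0],
                  [  0.0,   0.0,   1.0]])

    # Extrinsic parameters: rotation R and translation t (world -> camera).
    R = np.eye(3)                          # camera axes aligned with world axes
    t = np.array([[0.0], [0.0], [5.0]])    # world origin 5 units in front of camera

    P = K @ np.hstack((R, t))              # 3x4 camera projection matrix

    # Project a world point given in homogeneous coordinates.
    X = np.array([0.5, -0.2, 2.0, 1.0])
    x = P @ X
    u, v = x[0] / x[2], x[1] / x[2]        # perspective divide
    print(u, v)                            # pixel coordinates of the projection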
Camera resectioning is often used in stereo vision, where the camera projection matrices of two cameras are used to calculate the 3D world coordinates of a point viewed by both cameras.
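For example, given the two projection matrices and a pair of matching pixel coordinates, the 3D point can be recovered with the standard linear (DLT) triangulation method described in Hartley & Zisserman. A minimal sketch; the function name and inputs are illustrative:

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        """Linear (DLT) triangulation of a point seen by two calibrated cameras.

        P1, P2 are 3x4 projection matrices; x1, x2 are the (u, v) pixel
        coordinates of the same scene point in each image."""
        # Each view contributes two linear constraints on the homogeneous point.
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        # The 3D point is the null vector of A, taken from the SVD.
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]                # dehomogenize to (X, Y, Z)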
Some authors call this process camera calibration, but many restrict that term to the estimation of the internal (intrinsic) parameters only.
Algorithms
There are many different approaches to calculating the intrinsic and extrinsic parameters for a specific camera setup. A classical approach is Roger Y. Tsai's algorithm, a two-stage method: the first stage computes the pose (3D orientation, and x- and y-axis translation); the second stage computes the focal length, the distortion coefficients, and the z-axis translation.
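As an illustration of the same estimation problem, the sketch below uses OpenCV's cv2.calibrateCamera, which implements a related planar-target technique (Zhang's method) rather than Tsai's two-stage algorithm. The file names and the 9×6 chessboard size are assumptions made for the example:

    import numpy as np
    import cv2

    # Known 3D coordinates of the corners of a 9x6 chessboard (planar target, Z = 0).
    pattern_size = (9, 6)
    objp = np.zeros((9 * 6, 3), np.float32)
    objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in ["view1.png", "view2.png", "view3.png"]:   # hypothetical input images
        gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern_size)
        if found:
            obj_points.append(objp)
            img_points.append(corners)

    # Jointly estimates the intrinsics (camera matrix, distortion coefficients)
    # and the extrinsics (one rotation/translation pair per view).
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)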
See also
External links
- Camera Calibration - Augmented reality lecture at TU Muenchen, Germany
- Tsai's Approach
- Camera calibration (using ARToolKit)
- A Four-step Camera Calibration Procedure with Implicit Image Correction