User:KYN/Camera model (version 2)
A camera can be seen as a measurement device acting on the light field. Most cameras, however, cannot measure the full light field, that is, the amount of light received from all directions at all points in space. Instead, they measure only a specific subset of the light field, typically by means of a 2-dimensional array of sensor elements. A camera model describes the mapping from the 5-dimensional light field to the resulting 2-dimensional image.
Generalizations of this camera model include taking into account temporal variations of the light field (to produce video), different wavelength bands of the light (to produce color images), or even the polarization of the light. Furthermore, the sensor elements can be arranged in a 1-dimensional array (a linear camera) or in a more general way than the standard 2-dimensional array described here; see, for example, the plenoptic camera. Even when the sensor elements are arranged in a 2-dimensional array, the array may be laid out on a non-Cartesian grid, for example, in a log-polar coordinate system or on the surface of a sphere. Other examples of more general camera models include cameras which measure the light field in a non-linear way, for example producing image values which are proportional to the logarithm of the light field.
A camera model is invariant to rigid transformations of the camera device. This means that even if the camera is translated or rotated, and therefore samples the light field in a different way, the transformed camera is still described by the same camera model. In order to deal with this invariance, the 3D space is often represented in a camera-centered (or camera-centric) coordinate system which is "attached" to the camera. As a consequence, the camera is fixed relative to the coordinate system, and real transformations of the camera relative to the 3D world correspond to the inverse transformations of the world relative to the camera. The camera model is also invariant to transformations of the image coordinate system, that is, to the way in which the sensor elements are indexed or addressed.
A camera model often makes no explicit assumptions about the extension of the sensor element array and may, for example, describe it as an unbounded plane. In any practical case, however, restrictions such as the finite extension of the sensor element array have to be included in the final model, even if this is usually left implicit.
Introduction
Under the assumption that the sensor elements are arranged in a 2-dimensional array, where each element is addressed by a Cartesian coordinate (x1,x2), the camera model implies that each such element has an individual sensitivity function S relative to the light field. The sensitivity function in combination with the light field L produces a measured image intensity I at image point (x1,x2) according to

I(x_1, x_2) = \int S_{(x_1,x_2)}(\mathbf{p}) \, L(\mathbf{p}) \, d\mathbf{p}

where \mathbf{p} is the 5-dimensional variable of the light field (3 position dimensions + 2 direction dimensions).
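As a rough numerical illustration, the measurement integral above can be approximated by discretizing the light field and the sensitivity function and summing their product. The sketch below is not part of the model description itself; the grid shape, the support chosen for the sensitivity, and the function name measure_intensity are assumptions made only for the example.

```python
import numpy as np

# Hypothetical discretization: the 5-D light field is sampled on a grid with
# 3 position axes and 2 direction axes.
grid_shape = (8, 8, 8, 16, 16)        # assumed number of samples per axis
rng = np.random.default_rng(0)
L = rng.random(grid_shape)            # sampled light field L(p)

def measure_intensity(sensitivity, light_field):
    """Discrete version of I(x1,x2) = integral of S_(x1,x2)(p) * L(p) dp
    for one image point: a weighted sum over all light-field samples."""
    return float(np.sum(sensitivity * light_field))

# Example: a sensor element that responds uniformly to a small region of the
# light field and not at all elsewhere (the support is an arbitrary choice).
S = np.zeros(grid_shape)
S[3:5, 3:5, 0, :, :] = 1.0            # assumed support of the sensitivity
S /= S.sum()                          # normalize the weights

print("measured intensity at this image point:", measure_intensity(S, L))
```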
In short, a camera model describes the principal character of the sensitivity function S: how it depends on the image position (x1,x2), while being invariant to rigid transformations of the camera device and to transformations of the (x1,x2) coordinate system.
In its simplest form a camera model assumes that each image point (x1,x2) is illuminated by a single ray which emanates from somewhere in the scene. This means that for a fixed image point, S_{(x_1,x_2)} is an impulse function which is non-zero only for a specific \mathbf{p}, corresponding to a specific point in space and a specific direction to that point. This type of camera can be realized by requiring that all light rays which are detected by the camera pass through a single point; see, for example, the pinhole camera.
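Under this single-ray assumption the sensitivity function can be written, up to a scale factor, as a Dirac impulse concentrated on the ray associated with the image point. The notation \mathbf{p}(x_1,x_2) for that ray is introduced here only for illustration and is not part of the text above.

```latex
% Single-ray (impulse) sensitivity: each image point responds to exactly one
% light-field coordinate p(x1,x2), so the measurement integral collapses.
S_{(x_1,x_2)}(\mathbf{p}) = \delta\bigl(\mathbf{p} - \mathbf{p}(x_1,x_2)\bigr)
\quad\Longrightarrow\quad
I(x_1,x_2) = L\bigl(\mathbf{p}(x_1,x_2)\bigr)
```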
A more complex camera model takes into account that each sensor element is made up of a light-sensitive area, in principle of arbitrary shape (e.g., a hexagon, a circle, a sphere segment) but usually modelled as a rectangle. This is a realistic model for a standard digital camera, where each sensor element consists of a light-sensitive area which should be as large as possible to give the camera as high a light sensitivity as possible.
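For a rectangular light-sensitive area, the measured intensity can be expressed as an integral of the sensor-plane irradiance over the footprint of the element. The irradiance symbol E, the footprint A(x_1,x_2), and the pixel dimensions w and h below are notational assumptions made only for this sketch.

```latex
% Finite rectangular sensor element: the image value integrates the irradiance
% E(u,v) over the light-sensitive footprint A(x1,x2) of that element.
I(x_1,x_2) = \int_{A(x_1,x_2)} E(u,v)\, du \, dv ,
\qquad
A(x_1,x_2) = \bigl[x_1 - \tfrac{w}{2},\, x_1 + \tfrac{w}{2}\bigr] \times \bigl[x_2 - \tfrac{h}{2},\, x_2 + \tfrac{h}{2}\bigr]
```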
Another degree of complexity arises from allowing each point related to a sensor element to be illuminated by rays coming from more than one direction. Starting with a pinhole camera, where all light rays which enter the camera must pass through a "point size" hole (the aperture), the light sensitivity of the camera can be increased by giving the aperture a finite size. As a consequence, each point of a sensor element is illuminated by all light rays which pass through the aperture; the larger the aperture, the more directions illuminate each sensor point, in general causing a blurred image. To a certain degree, this effect can be compensated for by placing one or several lenses in front of the aperture to focus the light.
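The trade-off between aperture size and blur can be quantified with the thin-lens model, which is not developed in the text above but is a standard way to illustrate it. The function and parameter names below are chosen only for this sketch.

```python
def blur_circle_diameter(aperture_diameter, focal_length, focus_distance, object_distance):
    """Diameter of the blur circle (circle of confusion) on the sensor for a
    point at object_distance when the lens is focused at focus_distance.
    All distances in the same unit (e.g. millimetres); thin-lens model."""
    # Thin-lens equation 1/f = 1/d_o + 1/d_i gives the image distance for
    # the focused distance and for the actual object distance.
    image_dist_focused = 1.0 / (1.0 / focal_length - 1.0 / focus_distance)
    image_dist_object = 1.0 / (1.0 / focal_length - 1.0 / object_distance)
    # Similar triangles between the aperture and the defocused image point.
    return aperture_diameter * abs(image_dist_object - image_dist_focused) / image_dist_object

# Example: a 50 mm lens focused at 2 m; a point at 5 m is defocused,
# and the blur grows in proportion to the aperture diameter.
for aperture in (2.0, 10.0, 25.0):  # aperture diameters in mm
    c = blur_circle_diameter(aperture, 50.0, 2000.0, 5000.0)
    print(f"aperture {aperture:5.1f} mm -> blur circle {c:.3f} mm")
```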
Different types of camera models
Different types of camera models have been developed for various purposes. The models below are described in terms of the simpler model above, where each sensor element is represented by a single point in space that is illuminated by a single light ray, but generalizations can be made in a straightforward way.
The pinhole camera
The pinhole camera model is characterized by an infinitely small aperture (the pinhole) through which all light rays that illuminate the sensor elements must pass, and by the assumption that all sensor elements are located on a plane (the image plane).
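With the pinhole placed at the origin of a camera-centered coordinate system and the image plane at distance f from the pinhole, the single-ray mapping takes the familiar perspective-projection form. The symbols (X, Y, Z) and f below follow the common convention and are assumptions of this sketch rather than definitions taken from the text above.

```latex
% Perspective projection for a pinhole camera: a scene point (X, Y, Z) in
% camera-centered coordinates maps to the image point (x1, x2) on the plane
% at distance f from the pinhole.
x_1 = f \, \frac{X}{Z}, \qquad x_2 = f \, \frac{Y}{Z}
```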