Epipolar geometry
Epipolar geometry refers to the geometry of stereo vision. When two cameras view a 3D scene from two distinct positions, there are a number of geometric relations between the 3D points and their projections onto the 2D images that lead to constraints between the image points. These relations are derived based on the assumption that the cameras can be approximated by the pinhole camera model.
The figure below depicts two pinhole cameras looking at point X. In real cameras the image plane is actually behind the focal point, and the image it produces is rotated by 180°. Here, however, the projection problem is simplified by placing a virtual image plane in front of the focal point of each camera to produce an unrotated image. OL and OR represent the focal points of the two cameras. X represents the point of interest seen by both cameras. Points xL and xR are the projections of point X onto the image planes.
Each camera captures a 2D image of the 3D world. This conversion from 3D to 2D is referred to as a perspective projection and is described by the pinhole camera model. It is common to model this projection operation by rays that emanate from the camera, passing through its focal point. Note that each emanating ray corresponds to a single point in the image.
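As a concrete illustration, the following minimal Python sketch projects a 3D point through a pinhole camera; the intrinsic matrix K, the rotation R, and the focal point O are arbitrary values chosen only for this example.

```python
import numpy as np

# Hypothetical intrinsic matrix, orientation, and focal point (camera centre),
# chosen only for illustration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)            # camera orientation (world -> camera)
O = np.zeros(3)          # focal point O of the camera, placed at the world origin

def project(X, K, R, O):
    """Perspective (pinhole) projection of 3D world point X onto the image plane."""
    x = K @ (R @ (X - O))   # homogeneous image coordinates of the ray O -> X
    return x[:2] / x[2]     # dehomogenize to 2D pixel coordinates

X = np.array([0.5, -0.2, 4.0])   # a 3D point in front of the camera
print(project(X, K, R, O))       # every point on the ray O -> X maps to this same pixel
```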
Epipole
Since the two focal points of the cameras are distinct, each focal point projects onto a distinct point in the other camera's image plane. These two image points are denoted by EL and ER and are called epipoles. Both epipoles EL and ER and both focal points OL and OR lie on a single line.
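This can be checked numerically: each epipole is simply the image of the other camera's focal point. The sketch below uses a hypothetical stereo rig (shared intrinsics K, right camera offset in x and z) chosen so that neither epipole lies at infinity.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Hypothetical rig: left camera at the world origin, right camera offset in x and z.
R_L, O_L = np.eye(3), np.zeros(3)
R_R, O_R = np.eye(3), np.array([0.3, 0.0, 0.1])

def project(X, K, R, O):
    x = K @ (R @ (X - O))
    return x[:2] / x[2]

e_L = project(O_R, K, R_L, O_L)   # epipole E_L: image of the right focal point in the left camera
e_R = project(O_L, K, R_R, O_R)   # epipole E_R: image of the left focal point in the right camera
print(e_L, e_R)                   # note that epipoles often fall outside the visible image area
```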
Epipolar line
The line OL-X is seen by the left camera as a point because it is directly in line with that camera's focal point. However, the right camera sees this line as a line in its image plane. That line (ER-xR) in the right camera is called an epipolar line. Symmetrically, the line OR-X, seen by the right camera as a point, is seen as the epipolar line EL-xL by the left camera.
Epipolar plane
As an alternative visualization, consider the points X, OL and OR, which form a plane called the epipolar plane. The epipolar plane intersects each camera's image plane in a line: the epipolar line of that image. All epipolar lines pass through the epipole of their image, regardless of where X is located.
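A small numerical sketch of these two properties, assuming the same kind of hypothetical two-camera rig as above: points sampled along the ray OL-X all project onto one line in the right image, and that line passes through the epipole ER.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
# Hypothetical rig: left camera at the origin, right camera offset in x and z.
R_L, O_L = np.eye(3), np.zeros(3)
R_R, O_R = np.eye(3), np.array([0.3, 0.0, 0.1])

def project_h(X, K, R, O):
    """Homogeneous pixel coordinates of world point X for a camera with focal point O."""
    return K @ (R @ (X - O))

X = np.array([0.4, -0.1, 5.0])

# Points along the ray OL-X: the left camera sees them all as the same image point xL.
ray_points = [O_L + s * (X - O_L) for s in (0.5, 1.0, 2.0, 4.0)]
proj_R = [project_h(P, K, R_R, O_R) for P in ray_points]

# Epipolar line in the right image: the homogeneous line through two of those projections.
line_R = np.cross(proj_R[0], proj_R[1])

# Every projection of a ray point lies on this line (dot products ~ 0) ...
print([float(line_R @ p) / (np.linalg.norm(line_R) * np.linalg.norm(p)) for p in proj_R])
# ... and so does the epipole E_R (image of the left focal point), as stated above.
e_R = project_h(O_L, K, R_R, O_R)
print(float(line_R @ e_R) / (np.linalg.norm(line_R) * np.linalg.norm(e_R)))
```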
Epipolar constraint and triangulation
If the relative translation and rotation of the two cameras are known, the corresponding epipolar geometry leads to two important observations:
- If the projection point xL is known, then the epipolar line ER-xR is known, and the point X projects into the right image on a point xR which must lie on this particular epipolar line. This means that for each point observed in one image, the same point must be observed in the other image on a known epipolar line. This provides an epipolar constraint which corresponding image points must satisfy, and it means that it is possible to test whether two points really correspond to the same 3D point. Epipolar constraints can also be described by the essential matrix or the fundamental matrix between the two cameras.
- If the points xL and xR are known, their projection lines are also known. If the two image points correspond to the same 3D point X, the projection lines must intersect precisely at X. This means that X can be calculated from the coordinates of the two image points, a process called triangulation. Both the constraint check and the triangulation are sketched numerically after this list.
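The following Python sketch illustrates both observations for a hypothetical calibrated pair: the epipolar constraint is checked with a fundamental matrix built from the (assumed known) relative pose, and the 3D point is recovered by a standard linear (DLT) triangulation. All numeric values are illustrative.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Hypothetical relative pose of the right camera w.r.t. the left (left camera = world frame).
R = np.eye(3)
t = np.array([-0.3, 0.0, -0.1])          # x_right_cam = R @ X + t

P_L = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # left projection matrix  K[I | 0]
P_R = K @ np.hstack([R, t.reshape(3, 1)])            # right projection matrix K[R | t]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.4, -0.1, 5.0])
x_L, x_R = project(P_L, X_true), project(P_R, X_true)

# Epipolar constraint: x_R^T F x_L = 0, with F built from the essential matrix E = [t]_x R.
t_x = np.array([[0.0, -t[2], t[1]],
                [t[2], 0.0, -t[0]],
                [-t[1], t[0], 0.0]])
F = np.linalg.inv(K).T @ (t_x @ R) @ np.linalg.inv(K)
print(np.append(x_R, 1.0) @ F @ np.append(x_L, 1.0))   # ~0 for a true correspondence

# Triangulation: recover X from the two image points by a linear (DLT) method.
def triangulate(P1, x1, P2, x2):
    A = np.vstack([x1[0] * P1[2] - P1[0],
                   x1[1] * P1[2] - P1[1],
                   x2[0] * P2[2] - P2[0],
                   x2[1] * P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]

print(triangulate(P_L, x_L, P_R, x_R))   # ~ X_true
```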
Simplified cases
The epipolar geometry is simplified if the two camera image planes coincide. In this case, the epipolar lines also coincide (EL-xL = ER-xR). Furthermore, the epipolar lines are parallel to the line OL - OR between the focal points, and can in practice be aligned with the horizontal axes of the two images. This means that for each point in one image, its corresponding point in the other image can be found by looking only along a horizontal line. If the cameras cannot be positioned in this way, the image coordinates from the cameras may be transformed to emulate having a common image plane. This process is called image rectification.
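A brief numerical illustration of this simplified case, again with arbitrary intrinsics and baseline: corresponding image points end up on the same image row, so the epipolar search is purely horizontal.

```python
import numpy as np

K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Rectified setup: both cameras share the same orientation, baseline purely along x.
baseline = 0.1
O_L, O_R = np.zeros(3), np.array([baseline, 0.0, 0.0])

def project(X, O):
    x = K @ (X - O)          # rotation is the identity in the rectified frame
    return x[:2] / x[2]

for X in (np.array([0.4, -0.1, 5.0]), np.array([-0.3, 0.2, 2.0])):
    x_L, x_R = project(X, O_L), project(X, O_R)
    # Same row (v coordinate) in both images: the epipolar lines are horizontal,
    # so the correspondence search reduces to a 1D search along that row.
    print(x_L, x_R, "rows equal:", np.isclose(x_L[1], x_R[1]))
```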
References
- Richard Hartley and Andrew Zisserman (2003). Multiple View Geometry in Computer Vision. Cambridge University Press. ISBN 0-521-54051-8.
- Quang-Tuan Luong. Learning Epipolar Geometry. Retrieved on 2007-03-04.
- Robyn Owens. Epipolar geometry. Retrieved on 2007-03-04.
- Linda G. Shapiro and George C. Stockman (2001). Computer Vision. Prentice Hall, pp. 395-403. ISBN 0-13-030796-3.