Photogrammetry

From Wikipedia, the free encyclopedia

Photogrammetry is the first remote sensing technology ever developed; it determines the geometric properties of objects from photographic images. Photogrammetry is as old as modern photography itself and can be dated to the mid-nineteenth century.

In the simplest example, the three-dimensional coordinates of points on an object are determined by measurements made in two or more photographic images taken from different positions (see stereoscopy). Common points are identified on each image. A line of sight (or ray) can be constructed from each camera location to the point on the object. It is the intersection of these rays (triangulation) that determines the three-dimensional location of the point. More sophisticated algorithms can exploit other information about the scene that is known a priori, for example symmetries, in some cases allowing reconstruction of 3D coordinates from only one camera position.
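The ray-intersection step can be sketched in a few lines. The following is a minimal illustration (function names are ours, not from any library): since two measured rays rarely meet exactly, the point is estimated as the midpoint of the shortest segment connecting them.

```python
# Minimal two-ray triangulation sketch: each camera contributes a ray from
# its centre c through the observed image point (direction d); the 3D point
# is taken as the midpoint of the closest approach of the two skew rays.

def sub(a, b): return [a[i] - b[i] for i in range(3)]
def add(a, b): return [a[i] + b[i] for i in range(3)]
def scale(a, s): return [x * s for x in a]
def dot(a, b): return sum(a[i] * b[i] for i in range(3))

def triangulate(c1, d1, c2, d2):
    """Midpoint of the closest points on rays c1 + t*d1 and c2 + s*d2."""
    w = sub(c1, c2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b            # zero only for parallel rays
    t = (b * e - c * d) / denom      # parameter along ray 1
    s = (a * e - b * d) / denom      # parameter along ray 2
    p1 = add(c1, scale(d1, t))
    p2 = add(c2, scale(d2, s))
    return scale(add(p1, p2), 0.5)
```

With noise-free rays the two closest points coincide and the midpoint is the exact intersection; with measurement noise it is a reasonable compromise between the two rays.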

Photogrammetry is used in different fields, such as topographic mapping, architecture, engineering, manufacturing, quality control, police investigation, and geology, as well as by archaeologists to quickly produce plans of large or complex sites and by meteorologists as a way to determine the actual wind speed of a tornado where objective weather data cannot be obtained. It is also used to combine live action with computer generated imagery in movie post-production; Fight Club is a good example of the use of photogrammetry in film (details are given in the DVD extras).

Algorithms for photogrammetry typically express the problem as that of minimizing the sum of the squares of a set of errors. The minimization is itself often performed using the Levenberg-Marquardt algorithm; when the camera orientations and the 3D point coordinates are refined jointly in this way, the procedure is known as bundle adjustment.
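The damped least-squares idea behind Levenberg-Marquardt can be shown on a toy two-parameter curve fit; this is a hedged sketch in pure Python with a numeric Jacobian, whereas real bundle adjustment solves the same damped normal equations over thousands of camera and point parameters with sparse linear algebra.

```python
import math

def residuals(params, xs, ys):
    """Residuals of the toy model y = a * exp(b * x)."""
    a, b = params
    return [a * math.exp(b * x) - y for x, y in zip(xs, ys)]

def sum_sq(r):
    return sum(v * v for v in r)

def levenberg_marquardt(params, xs, ys, iters=100):
    """Damped Gauss-Newton iteration on the toy model above."""
    lam = 1e-3                        # damping: large -> gradient descent
    for _ in range(iters):
        r = residuals(params, xs, ys)
        eps = 1e-6                    # forward-difference Jacobian
        J = [[(residuals([params[0] + eps * (j == 0),
                          params[1] + eps * (j == 1)], xs, ys)[i] - r[i]) / eps
              for j in range(2)] for i in range(len(r))]
        # normal equations (J^T J + lam*I) delta = -J^T r, solved as 2x2
        A = [[sum(J[k][i] * J[k][j] for k in range(len(r))) for j in range(2)]
             for i in range(2)]
        g = [sum(J[k][i] * r[k] for k in range(len(r))) for i in range(2)]
        A[0][0] += lam
        A[1][1] += lam
        det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
        delta = [(-g[0] * A[1][1] + g[1] * A[0][1]) / det,
                 (g[0] * A[1][0] - g[1] * A[0][0]) / det]
        trial = [params[0] + delta[0], params[1] + delta[1]]
        if sum_sq(residuals(trial, xs, ys)) < sum_sq(r):
            params, lam = trial, lam * 0.5   # accept step, relax damping
        else:
            lam *= 10.0                      # reject step, damp harder
    return params
```

The damping parameter blends between Gauss-Newton (small lam, fast near the solution) and gradient descent (large lam, robust far from it), which is exactly why the method is popular for bundle adjustment.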

[edit] Photogrammetric methods

Wiora's data model of photogrammetry [1].

Photogrammetry uses methods from many disciplines including optics and projective geometry. The data model on the right shows what type of information can go into and come out of photogrammetric methods.

The 3D co-ordinates define the locations of object points in 3D space. The image co-ordinates define the locations of the object points' images on the film or an electronic imaging device. The exterior orientation of a camera defines its location in space and its view direction. The inner orientation defines the geometric parameters of the imaging process; this is primarily the focal length of the lens, but can also include a description of lens distortions. Additional observations also play an important role: scale bars (essentially a known distance between two points in space) or known fixed points establish the connection to the basic measuring units.

Each of the four main variables can be an input or an output of a photogrammetric method.
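As an illustration only (the class and field names below are ours, not a standard API), the four main variables and their input/output roles could be modelled like this:

```python
# Sketch of the four main variables of Wiora's data model; a photogrammetric
# method takes some of them as known inputs and estimates the rest.

from dataclasses import dataclass, field

@dataclass
class PhotogrammetricProblem:
    object_points: dict   # point id -> (X, Y, Z) in world coordinates
    image_points: dict    # (photo id, point id) -> (x, y) on the sensor
    exterior: dict        # photo id -> camera position and view direction
    interior: dict        # camera id -> focal length, distortions, ...
    known: set = field(default_factory=set)  # which variables are inputs

# Example: in spatial resection, object points, image points and the inner
# orientation are known; the exterior orientation is the unknown output.
resection = PhotogrammetricProblem(
    {}, {}, {}, {},
    known={"object_points", "image_points", "interior"})
```

Any variable not listed in `known` is an output the chosen method must solve for.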

Photogrammetry has been defined by the ASPRS[[2]] as the art, science, and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring, and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena.

Photogrammetric techniques

Depending on the available material (metric camera or not, stereopairs, shape of the recorded object, control information, ...) and the required results (2D or 3D, accuracy, ...), different photogrammetric techniques can be applied. Depending on the number of photographs used, three main categories can be distinguished.


1. Mapping from a single photograph

This is only useful for plane (2D) objects. Obliquely photographed plane objects show perspective deformations which have to be rectified. A broad range of rectification techniques exists, some of them very simple, but there are limitations: to get good results even with the simple techniques, the object should be plane (for example a wall), and since only a single photograph is used, the mapping can only be done in 2D. Rectification can be neglected only if the object is flat and the photograph is taken perpendicular to it; in this case the photograph has a single scale factor, which can be determined if the length of at least one distance on the object is known. Some common techniques are:

• Paper strip method. This is the cheapest method, since only a ruler, a piece of paper with a straight edge, and a pencil are required; it was already in use in the nineteenth century. Four points must be identified in both the picture and a map. From one of them, lines are drawn to the other three (on the image and on the map) and to the required object point (on the image). The paper strip is placed on the image and the intersections with the lines are marked. The strip is then placed on the map and adjusted until the marks again coincide with the lines; a line can then be drawn on the map towards the mark of the required object point. The whole process is repeated from a second point, which gives the object point on the map as the intersection of the two lines.

• Optical rectification. This is done using photographic enlargers, which should fulfil the so-called "Scheimpflug condition" and the "vanishing-point condition". Again, at least four control points are required, no three of them on one line. The control points are plotted at a certain scale, and the plot is rotated and displaced until two points match the corresponding points of the projected image. The table is then tilted by two rotations until the projected negative fits all control points, after which an exposure is made and developed.

• Numerical rectification. Again the object has to be plane and four control points are required. The image coordinates of the desired object points are transformed into the desired (again 2D) coordinate system; the result is the coordinates of the projected points.

• Differential rectification. If the object is uneven, it has to be divided into smaller parts which are plane, and each part is rectified with one of the techniques above. Even objects may of course also be rectified piecewise, differentially. A prerequisite for differential rectification is the availability of a digital object model, i.e. a dense raster of points on the object with known distances from a reference plane; in aerial photogrammetry this is called a DTM (Digital Terrain Model).

• Monoplotting. This technique is similar to numerical rectification, except that the coordinates are transformed into a 3D coordinate system. First the orientation elements (the coordinates of the projection centre and the three angles defining the view of the photograph) are calculated by spatial resection. Then, using the calibration data of the camera, any ray from an object point through the lens onto the photograph can be reconstructed and intersected with the digital terrain model.

• Digital rectification. This is a rather new technique, somewhat similar to monoplotting, but here the scanned image is transformed pixel by pixel into the 3D real-world coordinate system. The result is an orthophoto: a rectified photograph with a unique scale.
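Of the single-photograph techniques, numerical rectification lends itself most directly to a code sketch: a plane projective transform (homography) is determined from four control points and then applied to image coordinates. The function names below are ours, and the solver is a plain Gaussian elimination rather than anything from a photogrammetric package.

```python
# Numerical rectification sketch: estimate the 8 parameters of a plane
# projective transform from 4 control-point pairs, then map image
# coordinates into the target (2D map) coordinate system.

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(image_pts, map_pts):
    """8 homography parameters (last one fixed to 1) from 4 point pairs."""
    A, b = [], []
    for (x, y), (X, Y) in zip(image_pts, map_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y]); b.append(X)
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y]); b.append(Y)
    return solve(A, b)

def rectify(h, x, y):
    """Map an image point (x, y) into map coordinates."""
    w = h[6] * x + h[7] * y + 1.0
    return ((h[0] * x + h[1] * y + h[2]) / w,
            (h[3] * x + h[4] * y + h[5]) / w)
```

The four control points must be in general position (no three collinear), exactly as the text requires for optical rectification.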

2. Stereophotogrammetry

As the term implies, stereopairs are the basic requirement here. These can be produced using stereometric cameras. If only a single camera is available, two photographs can be taken from different positions, trying to match the conditions of the "normal case". Vertical aerial photographs mostly come close to the normal case; they are made with special metric cameras built into an aeroplane looking straight downwards. While taking the photographs, the aeroplane flies over the area in a meandering pattern, so that the whole area is covered by overlapping photographs. The overlapping part of each stereopair can be viewed in 3D and consequently mapped in 3D using one of the following techniques:

• Analogue. The analogue method was mainly used until the 1970s. Simply put, the method inverts the recording procedure. Two projectors, which have the same geometric properties as the camera used (set during the so-called "inner orientation"), project the negatives of the stereopair. Their positions are then rotated into exactly the same relationship to each other as at the moment of exposure ("relative orientation"). After this step, the projected bundles of light rays from both photographs intersect, forming a three-dimensional optical "model". Finally, the scale of this model has to be related to its true dimensions, and its rotations and shifts relative to the mapping (world) coordinate system have to be determined; for this, at least three control points which are not on one straight line are required ("absolute orientation"). The optical model is viewed by means of a stereoscope. The intersection of rays can then be measured point by point using a measuring mark, which consists of two marks, one on each photograph. When viewing the model, the two marks fuse into a single 3D mark, which can be moved and raised until the desired point of the 3D object is met. The movements of the mark are mechanically transmitted to a drawing device; in that way, maps are created.

• Analytical. The first analytical plotters were introduced in 1957 and became commonly available on the market from the 1970s on. The idea is still the same as with analogue instruments, but here a computer manages the relationship between image and real-world coordinates. The restitution of the stereopair is done in three steps. After restoration of the inner orientation, where the computer may now also correct for the distortion of the film, both pictures are relatively oriented; after this step the pictures can be viewed in 3D. Then the absolute orientation is performed, in which the 3D model is transferred to the real-world coordinate system; for this, at least three control points are required. After the orientation, any detail can be measured from the stereomodel in 3D. As in the analogue instrument, the model and a corresponding measuring mark are seen in 3D, and the movements of the mark are under the operator's control. The main difference from the earlier analogue plotting process is that the plotter no longer plots directly onto the map but onto the monitor's screen or into the database of the computer. The analytical plotter uses the computer to calculate the real-world coordinates, which can be stored as an ASCII file or transferred on-line into CAD programs. In that way, 3D drawings are created which can be stored digitally, combined with other data, and plotted later at any scale.

• Digital. Digital techniques have become widely available during the last decade. Here the images are not on film but stored digitally on tape or disc. Each picture element (pixel) has a known position and a measured intensity value: one value for black-and-white images, several for colour or multispectral images.
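The "normal case" geometry mentioned above (two identical cameras with parallel viewing directions, separated by a baseline perpendicular to them) yields a particularly simple depth formula. The sketch below uses conventional stereo symbols (f: principal distance, B: baseline, x-parallax p = x_left - x_right) rather than the API of any specific software.

```python
# Normal-case stereophotogrammetry: the depth of a point follows directly
# from its horizontal parallax between the two photographs.

def normal_case_point(x_left, x_right, y, f, B):
    """3D coordinates, in the left camera frame, from a normal-case stereopair.

    x_left, x_right, y : image coordinates (same units as f)
    f                  : focal length (principal distance)
    B                  : baseline between the two camera stations
    """
    p = x_left - x_right    # x-parallax; positive for points in front
    Z = f * B / p           # depth grows as the parallax shrinks
    X = x_left * B / p
    Y = y * B / p
    return X, Y, Z
```

The inverse relationship between depth and parallax is why distant points are measured less accurately than near ones in any stereopair.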

3. Mapping from several photographs

This kind of restitution, which can be done in 3D, has only become possible through analytical and digital photogrammetry. Since the required hardware and software are steadily getting cheaper, its fields of application grow from day to day. Here, usually more than two photographs are used: 3D objects are photographed from several positions around the object, such that any object point is visible on at least two, better three, photographs. The photographs can be taken with different cameras (even "amateur" cameras) and at different times (if the object does not move).

• Technique. As mentioned above, only analytical or digital techniques can be used. First a bundle adjustment has to be calculated: using control points and triangulation points, the geometry of the whole block of photographs is reconstructed with high precision. Then the image coordinates of any desired object point, measured in at least two photographs, can be intersected. The result is the coordinates of the required points; in that way, the whole 3D object is digitally reconstructed.
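The intersection step described above generalises the two-ray case to any number of photographs: once the bundle adjustment has fixed each photo's orientation, an object point measured on several photographs is the point minimising the squared distances to all reconstructed rays. A pure-Python illustration, with rays given as (camera centre, unit direction) pairs and names that are ours, not from a library:

```python
# Least-squares intersection of N rays. Each ray contributes the projector
# (I - d d^T), which measures the perpendicular offset from the ray; summing
# these gives a 3x3 normal system A p = b whose solution is the best point.

def intersect_rays(rays):
    """Point minimising the sum of squared distances to all (c, d) rays;
    directions d must be unit vectors."""
    A = [[0.0] * 3 for _ in range(3)]
    b = [0.0] * 3
    for c, d in rays:
        for i in range(3):
            for j in range(3):
                m = (1.0 if i == j else 0.0) - d[i] * d[j]  # I - d d^T
                A[i][j] += m
                b[i] += m * c[j]
    # solve the 3x3 system by Cramer's rule
    def det3(M):
        return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
              - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
              + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))
    det = det3(A)
    p = []
    for col in range(3):
        M = [row[:] for row in A]
        for r in range(3):
            M[r][col] = b[r]
        p.append(det3(M) / det)
    return p
```

With noise-free rays through a common point the result is exact; with real measurements it is the least-squares compromise, which is precisely what the intersection after a bundle adjustment delivers.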

[edit] Integration of Photogrammetric Data with LiDAR Data

Photogrammetry and LiDAR data complement each other: photogrammetry is more accurate in the x and y directions, while LiDAR is more accurate in the z direction. Photos can clearly define the edges of buildings where the LiDAR point-cloud footprint cannot. It is beneficial to combine the advantages of both systems and integrate them to create a better product.

A 3D visualization can be created by georeferencing the aerial photos and LiDAR data in the same reference frame, orthorectifying the aerial photos, and then draping the orthorectified images on top of the LiDAR grid. [[3]]
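The draping step can be illustrated with a deliberately simplified sketch, assuming the orthophoto and the LiDAR grid already share the same georeference and cell size; the function and parameter names are ours, not from any GIS package.

```python
# Draping sketch: once the orthophoto and LiDAR grid are co-registered,
# each grid cell looks up its colour in the orthophoto at the same
# easting/northing, producing coloured 3D points for visualization.

def drape(lidar_grid, ortho, origin, cell):
    """Attach an orthophoto value to every LiDAR grid cell.

    lidar_grid : 2D list of elevations
    ortho      : 2D list of pixel values, same georeference as the grid
    origin     : (easting, northing) of the grid's first cell
    cell       : ground size of one cell / pixel
    """
    draped = []
    for r, row in enumerate(lidar_grid):
        out = []
        for c, z in enumerate(row):
            e = origin[0] + c * cell          # easting of this cell
            n = origin[1] + r * cell          # northing of this cell
            out.append((e, n, z, ortho[r][c]))
        draped.append(out)
    return draped
```

Real pipelines resample between differing resolutions and handle gaps in the point cloud, but the core idea is this per-cell lookup.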


[edit] See also

[edit] External links