Reflection mapping
From Wikipedia, the free encyclopedia
In computer graphics, reflection mapping is an efficient method of simulating a complex mirror-like surface by means of a precomputed texture image. The texture stores the image of the environment surrounding the rendered object. There are several ways of storing the surrounding environment; the most common are standard environment mapping, in which a single texture contains the image of the surroundings as reflected on a mirror ball, and cubic environment mapping, in which the environment is unfolded onto the six faces of a cube and therefore stored as six square textures.
This approach is more efficient than classical ray tracing, which computes the exact reflection by shooting a ray and following its optically exact path, but the result is only a (sometimes crude) approximation of the real reflection. A typical drawback of this technique is the absence of self-reflections: no part of the reflected object is visible inside the reflection itself.
Types of Reflection Mapping
Standard Environment Mapping
Standard environment mapping, more commonly known as spherical environment mapping, involves the use of a textured sphere infinitely far away from the object that reflects it. A spherical texture is created using a fisheye lens, by prerendering, or with a light probe; this texture is mapped to a hollow sphere, and the texel colors are determined by calculating the reflection vectors from the points on the object to the texels in the environment map. This technique is similar to ray tracing but incurs less of a performance cost, because the colors of all the points to be referenced are known beforehand by the GPU, which only has to calculate the angles of incidence and reflection.
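The lookup described above can be sketched in a few lines of Python. This is a minimal illustration, not any particular API: it assumes the OpenGL-style sphere-map parameterization, in which a unit reflection vector is converted to (u, v) texture coordinates as if the environment were seen on a mirror ball facing the viewer.

```python
import math

def sphere_map_uv(r):
    # Convert a unit reflection vector r = (rx, ry, rz) into sphere-map
    # (u, v) coordinates, using the OpenGL GL_SPHERE_MAP formulation:
    #   m = 2 * sqrt(rx^2 + ry^2 + (rz + 1)^2)
    #   u = rx / m + 0.5,  v = ry / m + 0.5
    rx, ry, rz = r
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return (rx / m + 0.5, ry / m + 0.5)
```

A reflection vector pointing straight back at the viewer, (0, 0, 1), lands at the center of the texture, (0.5, 0.5); vectors pointing away from the viewer converge toward the texture's edge, which is the singularity discussed below.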
Spherical environment mapping was first experimented with in 1982 by Gene Miller at MAGI Synthavision.[1] With the assistance of Christine Chang, he photographed a Christmas ornament in the parking lot at MAGI. By cropping the photo of the ball down to its diameter, he was able to map it to a hollow sphere (see the process above) (Fig. 1). Next, he applied it to a blobby dog model created by Ken Perlin (Fig. 2), and superimposed the environment-mapped model into a photograph of the parking lot. The result can be seen in Fig. 3. This technique of environment mapping real-world environments eventually came into use in HDRI image-based lighting.
There are a few glaring limitations to spherical mapping. Due to the nature of the texture used for the map, there is an abrupt point of singularity on the side of an object facing away from the viewer. Cube mapping (see below) was developed to address this issue. Since cube maps (if made and filtered correctly) have no visible seams, they are an obvious successor to the archaic sphere maps, and spherical environment maps have all but disappeared from contemporary graphical applications such as video game graphics.
Cube Environment Mapping
Cube mapped reflection is a technique that uses cube mapping to make objects look like they reflect the environment around them. Generally, this is done with the same skybox that is used in outdoor renderings. Though this is not a true reflection since objects around the reflective one will not be seen in the reflection, the desired effect is usually achieved.
Cube mapped reflection is done by first determining the vector along which the object is being viewed. This camera ray is reflected about the surface normal at the point where it intersects the object. The resulting reflected ray is then used to index the cube map, yielding the texel that the camera sees as if it were on the surface of the object. This creates the effect that the object is reflective.
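The two steps above, reflecting the camera ray and indexing the cube map with the result, can be sketched as follows. This is a minimal Python illustration: the reflection formula R = I − 2(N·I)N is standard, while the face-selection and (u, v) conventions here follow the OpenGL cube-map layout and may differ in other APIs.

```python
def reflect(incident, normal):
    # R = I - 2 (N . I) N; vectors are 3-tuples, normal assumed unit-length.
    d = sum(i * n for i, n in zip(incident, normal))
    return tuple(i - 2.0 * d * n for i, n in zip(incident, normal))

def cube_face_uv(r):
    # Pick the cube face hit by reflection vector r (the axis with the
    # largest absolute component) and return (face, u, v) with u, v in [0, 1].
    rx, ry, rz = r
    ax, ay, az = abs(rx), abs(ry), abs(rz)
    if ax >= ay and ax >= az:          # +-x face dominates
        face, ma = ('+x', ax) if rx > 0 else ('-x', ax)
        u, v = (-rz, -ry) if rx > 0 else (rz, -ry)
    elif ay >= az:                     # +-y face dominates
        face, ma = ('+y', ay) if ry > 0 else ('-y', ay)
        u, v = (rx, rz) if ry > 0 else (rx, -rz)
    else:                              # +-z face dominates
        face, ma = ('+z', az) if rz > 0 else ('-z', az)
        u, v = (rx, -ry) if rz > 0 else (-rx, -ry)
    return face, (u / ma + 1.0) / 2.0, (v / ma + 1.0) / 2.0
```

A camera ray hitting a surface head-on, for instance, reflects straight back, and the reflected vector then selects one face of the cube and a texel within it.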
Application in Real-Time 3D Graphics
Standard Environment Mapping
Cubic Environment Mapping
Cube mapped reflection, when used correctly, may be the fastest method of rendering a reflective surface. To increase rendering speed, the reflected ray is computed once per vertex. Its direction is then interpolated across the polygons to which the vertex is attached, which eliminates the need to recalculate the reflection for every pixel.
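The per-vertex optimization can be sketched as below. This is a minimal Python illustration, not any particular graphics API: reflection vectors computed at a triangle's three vertices are blended with barycentric weights, as a rasterizer would do per pixel, and renormalized before the cube-map lookup.

```python
import math

def interpolated_reflection(r0, r1, r2, bary):
    # Blend the three per-vertex reflection vectors with barycentric
    # weights (w0, w1, w2) for a point inside the triangle.
    w0, w1, w2 = bary
    r = tuple(w0 * a + w1 * b + w2 * c for a, b, c in zip(r0, r1, r2))
    # Linear interpolation shortens the vector, so renormalize it
    # before using it to index the cube map.
    length = math.sqrt(sum(c * c for c in r))
    return tuple(c / length for c in r)
```

In a real pipeline the blending is done by the rasterizer's interpolators; only the normalization and the texture fetch remain as per-pixel work, which is why this is so much cheaper than recomputing the reflection at every pixel.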