Normal mapping
From Wikipedia, the free encyclopedia
In 3D computer graphics, normal mapping is an application of the technique known as bump mapping; it is sometimes referred to as "Dot3 bump mapping". While bump mapping perturbs the existing normal (the direction the surface is facing) of a model, normal mapping replaces the normal entirely. Like bump mapping, it is used to add detail to shading without using more polygons. But where a bump map is usually computed from a single-channel (grayscale) image, the source for the normals in normal mapping is usually a multichannel image (that is, separate "red", "green" and "blue" channels rather than a single value) derived from a more detailed version of the object.
Normal mapping is usually found in two varieties: object-space and tangent-space normal mapping. They differ in the coordinate system in which the normals are measured and stored.
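To illustrate the difference between the two varieties, a tangent-space normal must be rotated into a shared space before it can be used for lighting. A minimal sketch of that change of basis (the function name and identity basis here are illustrative assumptions, not from any particular engine):

```python
# Transform a tangent-space normal into world space using the TBN basis
# built from the surface tangent T, bitangent B and geometric normal N.
def tangent_to_world(n_ts, T, B, N):
    """world = x * T + y * B + z * N, i.e. n_ts multiplied by the matrix
    whose columns are T, B and N."""
    x, y, z = n_ts
    return tuple(x * t + y * b + z * n for t, b, n in zip(T, B, N))

# With an axis-aligned (identity) basis the normal comes back unchanged;
# an object-space map skips this step because its normals are already
# stored in a single fixed coordinate system.
print(tangent_to_world((0.0, 0.0, 1.0),
                       (1.0, 0.0, 0.0),   # T
                       (0.0, 1.0, 0.0),   # B
                       (0.0, 0.0, 1.0)))  # N -> (0.0, 0.0, 1.0)
```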
One of the most interesting uses of this technique is to greatly enhance the appearance of a low-polygon model by applying a normal map generated from a high-resolution model. The idea of taking geometric details from a high-resolution model was introduced in "Fitting Smooth Surfaces to Dense Polygon Meshes" by Krishnamurthy and Levoy (Proc. SIGGRAPH 1996), where the approach was used to create displacement maps over NURBS; its application to the more common triangle meshes came later. In 1998 two papers presented the idea of transferring details as normal maps from high- to low-polygon meshes: "Appearance-Preserving Simplification" by Cohen et al. (SIGGRAPH 1998), and "A general method for recovering attribute values on simplified meshes" by Cignoni et al. (IEEE Visualization '98). The former presented a constrained simplification algorithm that tracks, during the simplification process, how the lost details should be mapped over the simplified mesh. The latter presented a simpler approach that decouples the high- and low-polygon meshes and allows the lost details to be recreated in a way that does not depend on how the low-polygon model was created. This latter approach (with some minor variations) is still the one used by most of the currently available tools.
How it works
To calculate the Lambertian (diffuse) lighting of a surface, the unit vector from the shading point to the light source is dotted with the unit normal vector of the surface, and the result is the intensity of the light on that surface. Many other lighting models also involve some form of dot product with the normal vector. Consider a polygonal model of a sphere: its polygons can only approximate the true surface. By texturing an RGB bitmap across the model, more detailed normal-vector information can be encoded. Each color channel in the bitmap (red, green and blue) corresponds to a spatial dimension (X, Y and Z). These spatial dimensions are relative to a constant coordinate system for object-space normal maps, or to a smoothly varying coordinate system (based on the derivatives of position with respect to texture coordinates) in the case of tangent-space normal maps. This adds much more detail to the surface of a model, especially in conjunction with advanced lighting techniques.
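The Lambertian calculation above can be sketched in a few lines. This is an illustrative sketch (the function names are assumptions, not any engine's API); the key point is that the normal fetched from the normal map simply replaces the geometric surface normal in the dot product:

```python
import math

def normalize(v):
    """Scale v to unit length."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def lambert(normal, to_light):
    """Diffuse intensity: the clamped dot product of the unit surface
    normal and the unit vector from the shading point toward the light."""
    n = normalize(normal)
    l = normalize(to_light)
    return max(0.0, sum(a * b for a, b in zip(n, l)))

# A mapped normal is used exactly like a geometric one:
print(lambert((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # light head-on -> 1.0
print(lambert((0.0, 0.0, 1.0), (1.0, 0.0, 1.0)))  # light at 45 degrees -> ~0.707
```

The `max(0.0, ...)` clamp reflects that surfaces facing away from the light receive no direct diffuse contribution.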
In the most common implementation of normal maps, used by Valve's Source engine and implemented in hardware on Nvidia cards, the red channel encodes the relief of the material when lit from the right, the green channel the relief when lit from below, and the blue channel the relief when lit from the front (practically full, except on the "slopes"); or, to put it another way, the X, Y and Z coordinates of the surface normals are placed in the red, green and blue values of the normal map. If a material is classified as reflective, the albedo is usually encoded in the alpha channel, if one exists.
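The mapping from normal components to color values can be made concrete. A common convention (sketched here under the assumption of 8-bit channels; the helper names are illustrative) stores each component in [-1, 1] as a byte in [0, 255]:

```python
def decode_normal(r, g, b):
    """Map byte channels back to normal components: X from red,
    Y from green, Z from blue, each rescaled from [0, 255] to [-1, 1]."""
    return tuple(c / 127.5 - 1.0 for c in (r, g, b))

def encode_normal(x, y, z):
    """Inverse mapping: component in [-1, 1] -> byte in [0, 255]."""
    return tuple(round((c + 1.0) * 127.5) for c in (x, y, z))

# The typical "flat" normal-map color (128, 128, 255) decodes to a vector
# pointing almost straight out of the surface along +Z:
print(decode_normal(128, 128, 255))   # ~ (0.004, 0.004, 1.0)
print(encode_normal(0.0, 0.0, 1.0))  # (128, 128, 255)
```

Because undisturbed normals point along +Z, and +Z maps to a full blue channel, tangent-space normal maps have their characteristic overall bluish tint.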
Normal mapping in computer entertainment
Interactive normal map rendering was originally only possible on PixelFlow, a parallel graphics machine built at the University of North Carolina at Chapel Hill. It later became possible to perform normal mapping on high-end SGI workstations using multi-pass rendering and framebuffer operations, or on low-end PC hardware with some tricks using paletted textures. With the increasing processing power and sophistication of home PCs and gaming consoles, however, normal mapping has spread to the public consciousness through its use in several high-profile games, including Far Cry (Crytek), Deus Ex: Invisible War (Eidos Interactive), Thief: Deadly Shadows (Eidos Interactive), The Chronicles of Riddick: Escape from Butcher Bay (Vivendi Universal), Halo 2 (Microsoft), Doom 3 (id Software), Half-Life 2 (Valve Software), Call of Duty 2 (Activision), Tom Clancy's Splinter Cell: Chaos Theory (Ubisoft), and Dewy's Adventure (Konami). It is also used extensively in the third version of the Unreal engine (Epic Games). Normal mapping's popularity among video-game designers is due to its combination of high graphical quality and low processing requirements relative to other methods of producing similar effects. The low processing cost is aided by distance-indexed detail scaling, a technique which decreases the detail of a texture's normal map with distance (cf. mipmapping): more distant surfaces require less complex lighting simulation, which cuts the processing burden while maintaining virtually the same apparent level of detail.
Normal mapping has now been used successfully and extensively on both PCs and gaming consoles. Initially, Microsoft's Xbox was the only home game console to fully support the effect in hardware; other consoles relied on software-only implementations. Developers on next-generation consoles such as the Xbox 360 and the PlayStation 3 rely heavily on normal mapping, and are beginning to implement parallax mapping as well.
Games that support normal mapping
- F.E.A.R.
- Half-Life 2
- Halo
- Halo 2
- Lair
- Supreme Commander
- Sauerbraten
- DOOM 3
- Dewy's Adventure
- The Elder Scrolls IV: Oblivion
- Battlefield 2
- Battlefield 2142
- PREY
- Quake 4
- S.T.A.L.K.E.R.: Shadow of Chernobyl
- Crysis
- Silent Hunter 4: Wolves of the Pacific
External links
- Blender Normal Mapping
- GIMP normalmap plugin
- Photoshop normalmap plugin
- Normal Mapping tutorials for artists, Ben Cloward
- Free xNormal normal mapper tool, Santiago Orgaz
- Maya normal mapping plugin, Olivier Renouard
- MICROWAVE projective rendering and normal mapping plugin, evasion3D
- Normal Mapping with paletted textures using old OpenGL extensions.
- Normal Mapping without hardware assistance, Lux aeterna luceat eis Amiga demo from Ephidrena
- ZMapper, Pixologic
- Normal Map Photography: creating normal maps manually by layering digital photographs
Bibliography
- Fitting Smooth Surfaces to Dense Polygon Meshes, Krishnamurthy and Levoy, SIGGRAPH 1996
- Appearance-Preserving Simplification, Cohen et al., SIGGRAPH 1998
- A general method for recovering attribute values on simplified meshes, Cignoni et al., IEEE Visualization 1998
- Realistic, Hardware-accelerated Shading and Lighting, Heidrich and Seidel, SIGGRAPH 1999