Shader

From Wikipedia, the free encyclopedia

A shader, in the field of computer graphics, is a set of software instructions used primarily by graphics hardware to perform rendering effects. Shaders allow a 3D application designer to program the GPU's (Graphics Processing Unit's) "programmable pipeline", which has largely superseded the older "fixed-function pipeline" and allows more flexible use of advanced GPU features.

Introduction

From a technical point of view, a "shader" is the part of the renderer responsible for calculating the color of an object.

As Graphics Processing Units evolved, major graphics software libraries such as OpenGL and DirectX began to expose the programmability of these new GPUs by defining special shading functions in their APIs. Shader capabilities were introduced in version 1.5 of the platform-independent graphics library OpenGL, and in version 8 of the proprietary DirectX.

Types of shader

The DirectX and OpenGL graphics libraries use three types of shaders.

  • Vertex shaders are run once for each vertex given to the graphics processor. Their purpose is to transform each vertex's 3D position in virtual space into the 2D coordinate at which it appears on the screen (as well as a depth value for the Z-buffer). Vertex shaders can manipulate properties such as position, color, and texture coordinates, but cannot create new vertices. The output of the vertex shader goes to the next stage in the pipeline: a geometry shader, if present, or otherwise the rasterizer.
  • Geometry shaders can add and remove vertices from a mesh. Geometry shaders can be used to generate geometry procedurally or to add volumetric detail to existing meshes that would be too costly to process on the CPU. DirectX 10 added geometry shader support to the Direct3D API. OpenGL only supports geometry shaders through the use of an extension, though it will likely be incorporated into the standard itself with version 3.0 or 3.1. If geometry shaders are being used, the output is then sent to the rasterizer.
  • Pixel shaders, also known as fragment shaders, calculate the color of individual pixels. The input to this stage comes from the rasterizer, which fills in the polygons being sent through the graphics pipeline. Pixel shaders are typically used for scene lighting and related effects such as bump mapping and color toning. (DirectX uses the term "pixel shader," while OpenGL uses the term "fragment shader." The latter is arguably more correct, as there is not a one-to-one relationship between calls to the pixel shader and pixels on the screen. The most common reason for this is that pixel shaders are often called many times per pixel for every object that is in the corresponding space, even if it is occluded; the Z-buffer sorts this out later.)
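The vertex shader's core job described above — mapping a 3D position to a 2D screen coordinate plus a depth value — can be sketched in Python. This is a CPU-side illustration of the math, not real shader code; the identity matrix and 640 × 480 viewport are made-up example values:

```python
def mat_vec4(m, v):
    """Multiply a 4x4 matrix (row-major list of rows) by a 4-vector."""
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

def vertex_shader(position, mvp, width, height):
    """Transform a 3D position into 2D screen coordinates plus a depth
    value, as a vertex shader would (illustrative sketch only)."""
    x, y, z, w = mat_vec4(mvp, [position[0], position[1], position[2], 1.0])
    # Perspective divide: clip space -> normalized device coordinates (NDC)
    ndc = (x / w, y / w, z / w)
    # Viewport transform: NDC range [-1, 1] -> pixel coordinates
    sx = (ndc[0] + 1.0) * 0.5 * width
    sy = (1.0 - ndc[1]) * 0.5 * height  # flip y: screen origin is top-left
    depth = (ndc[2] + 1.0) * 0.5        # depth in [0, 1] for the Z-buffer
    return sx, sy, depth

# Identity "MVP" for simplicity: a vertex at the NDC origin maps to the
# center of the screen.
identity = [[1.0 if r == c else 0.0 for c in range(4)] for r in range(4)]
print(vertex_shader((0.0, 0.0, 0.0), identity, 640, 480))  # (320.0, 240.0, 0.5)
```

A real vertex shader would receive the model-view-projection matrix as a uniform and run once per vertex on the GPU; the perspective divide and viewport transform are then performed by fixed-function hardware.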

The unified shader model unifies the three aforementioned shader types in OpenGL and DirectX 10.

Since these shader types are processed within the GPU pipeline, the following outlines how they are embedded in it:

Simplified graphics processing unit pipeline

For more details on this topic, see Graphics pipeline.
  • The CPU sends instructions (compiled shading language programs) and geometry data to the graphics processing unit, located on the graphics card.
  • Within the vertex shader, the geometry is transformed and lighting calculations are performed.
  • If a geometry shader is present in the graphics processing unit, it can modify the geometry in the scene.
  • The calculated geometry is triangulated (subdivided into triangles).
  • Triangles are transformed into pixel quads (one pixel quad is a 2 × 2 pixel primitive).
  • The pixel shader is run for each fragment to compute its color, which is written to the framebuffer subject to depth testing against the Z-buffer.
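The rasterization step — deciding which pixels (grouped into 2 × 2 quads) a triangle covers — can be sketched with the classic edge-function test. This is a toy software rasterizer for illustration, not how real hardware is implemented; the triangle and grid size are made-up examples, and edge cases such as the top-left fill rule are simplified:

```python
def edge(a, b, p):
    """Signed area test: > 0 if p lies to the left of the edge a -> b
    (for a counter-clockwise triangle)."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def covered_quads(tri, width, height):
    """Return the set of 2x2 pixel quads (identified by their top-left
    pixel) containing at least one pixel center inside the triangle."""
    a, b, c = tri
    quads = set()
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            if edge(a, b, p) >= 0 and edge(b, c, p) >= 0 and edge(c, a, p) >= 0:
                quads.add((x // 2 * 2, y // 2 * 2))  # snap to the 2x2 quad grid
    return quads

# A small counter-clockwise triangle covering the upper-left of an 8x8 grid.
print(sorted(covered_quads(((0, 0), (4, 0), (0, 4)), 8, 8)))
```

Hardware rasterizes in 2 × 2 quads so that the pixel shader can compute screen-space derivatives (used, for example, to select mipmap levels) by differencing neighboring invocations within the quad.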

Parallel processing

Shaders are written to apply transformations to a large set of elements at a time, for example, to each pixel in an area of the screen, or for every vertex of a model. This is well suited to parallel processing, and most modern GPUs have a multi-core design to facilitate this, vastly improving efficiency of processing.
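The data-parallel model described above can be illustrated in Python: the same small per-element program is applied independently to every pixel, so the invocations can run concurrently. This is only a sketch of the programming model; a real GPU runs thousands of such invocations in hardware rather than in a thread pool, and the gradient function here is a made-up example:

```python
from concurrent.futures import ThreadPoolExecutor

def pixel_shader(coord):
    """A tiny per-pixel program: compute a grayscale gradient value.
    Each invocation depends only on its own input, which is what makes
    shaders so amenable to parallel execution."""
    x, y = coord
    return (x + y) % 256

width, height = 4, 4
coords = [(x, y) for y in range(height) for x in range(width)]

# Every pixel is computed independently - the essence of shader parallelism.
with ThreadPoolExecutor() as pool:
    image = list(pool.map(pixel_shader, coords))
```

Because no invocation reads another's output, the result is identical regardless of execution order — the property GPUs exploit with their many-core designs.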

Programming shaders

Since version 1.5, OpenGL has provided a C-like shading language called the OpenGL Shading Language, or GLSL. There are also interfaces for the Cg shading language, developed by Nvidia, which is syntactically similar to GLSL.

In DirectX, shaders are programmed with the High Level Shader Language (HLSL), but the types and complexity of shader programs allowed differ depending on the DirectX version used.

The following table shows which shader model versions each DirectX version supports:

DirectX Version | Pixel Shader  | Vertex Shader
8.0             | 1.0, 1.1      | 1.0, 1.1
8.1             | 1.2, 1.3, 1.4 | 1.0, 1.1
9.0             | 2.0           | 2.0
9.0a            | 2_A, 2_B      | 2.x
9.0c            | 3.0           | 3.0
10.0            | 4.0           | 4.0
10.1            | 4.1           | 4.1
