Pixel pipeline
From Wikipedia, the free encyclopedia
The pixel pipeline was a component of 3D accelerators, most prominent in designs predating DirectX 9. The term refers to one of a number of parallel processing pipelines within a graphics processing unit (GPU), each of which processes pixel, texture, and often geometric data. GPUs shipped with differing numbers of pixel pipelines, and more pipelines raised the number of pixels and texels the accelerator could process per clock, a figure measured as pixel and texture fill rate. Real-time 3D rendering performance scales well with this added parallelism because most 3D graphics operations can be carried out on many pixels independently.
History
The earliest consumer-level 3D cards, such as the 3dfx Voodoo Graphics and NVIDIA RIVA 128, had an easily identifiable pipeline because of the relative simplicity of 3D architectures at the time. These processors offered little programmability and almost no hardware geometry processing, so the bulk of each chip was devoted to pixel and texture work. Both can be described as single-pipeline designs. Later, the 3dfx Voodoo2 paired a single pixel pipeline with two texture mapping units (TMUs), a 1×2 layout, while the NVIDIA RIVA TNT used two pixel pipelines with one texture unit each (2×1).
During the DirectX 7 era, graphics processors with hardware transform and lighting arrived. These processors offered limited pixel pipeline programmability, though far less than processors designed for DirectX 8 or later. Pixel pipelines were still readily distinguishable and served as a simple quantitative measure of a GPU's processing capability. For example, the NVIDIA GeForce 256 was a 4 (pixel pipeline) × 1 (TMU) design, while the ATI Radeon was a 2 × 3 design.
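The n×m notation above maps directly onto fill-rate arithmetic: n pipelines emit up to n pixels per clock, and n×m TMUs sample up to n×m texels per clock. A minimal sketch of that calculation follows; the 150 MHz clock and all class and method names here are illustrative, not taken from any real part.

```python
from dataclasses import dataclass

@dataclass
class PipelineConfig:
    """A classic fixed-function GPU layout: pixel pipelines x TMUs per pipeline."""
    pixel_pipelines: int
    tmus_per_pipeline: int
    core_clock_mhz: float  # hypothetical clock speed, for illustration only

    def pixel_fill_rate_mpix(self) -> float:
        # Each pipeline can emit one pixel per clock.
        return self.pixel_pipelines * self.core_clock_mhz

    def texel_fill_rate_mtex(self) -> float:
        # Each TMU can sample one texel per clock.
        return self.pixel_pipelines * self.tmus_per_pipeline * self.core_clock_mhz

# A 4x1 design and a 2x3 design at the same hypothetical 150 MHz clock:
four_by_one = PipelineConfig(4, 1, 150.0)
two_by_three = PipelineConfig(2, 3, 150.0)

print(four_by_one.pixel_fill_rate_mpix())   # 600.0 Mpixels/s
print(four_by_one.texel_fill_rate_mtex())   # 600.0 Mtexels/s
print(two_by_three.pixel_fill_rate_mpix())  # 300.0 Mpixels/s
print(two_by_three.texel_fill_rate_mtex())  # 900.0 Mtexels/s
```

The comparison shows why neither number alone characterized a chip: the 4×1 layout wins on single-textured pixel throughput, while the 2×3 layout wins when three textures are applied per pixel.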
The definition of a pixel pipeline grew increasingly ambiguous as 3D accelerators evolved. With the arrival of extensively programmable pipelines in the DirectX 8 era, in the form of dedicated pixel and vertex processors, identifying separate "pixel pipelines" became questionable. These graphics processors carried varying numbers of vertex shaders, for example, which were quite separate from their pixel processing resources. Still, in terms of pure pixel and texture processing, the NVIDIA GeForce 3 used a 4 × 2 pixel pipeline design, as did the ATI Radeon 8500.
With the arrival of DirectX 9, however, the pixel pipeline definition became all but useless, because the amount of processing entirely separate from pixel/texture computation grew considerably. The GeForce FX, for example, operated in ways that defied a simple "pixel pipeline" designation. The ratio of texture sampling to pixel shader computational power also changed dramatically on some processors, such as the Radeon X1900. Some of the multi-texturing workload was handled by the pixel shaders through a process called "loopback", rather than purely within the TMUs, and the number of render output units could differ greatly from the number of texture units. In general, the programmable arithmetic capability of a GPU became as important as its ability to draw "simple" pixels and textures. The basic designs from each manufacturer also diverged wildly, making it impossible to compare GPUs accurately by a quantifiable "pixel pipeline" count.
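Loopback illustrates why a raw pipeline count stopped predicting throughput. When a shader applies more textures than the TMUs can sample in one pass, the pixel cycles through the pipeline again, dividing effective pixel output. A simplified sketch, assuming this pass-count model (the function name is hypothetical):

```python
import math

def loopback_passes(textures_per_pixel: int, tmus_per_pipeline: int) -> int:
    """Passes a pixel needs through the pipeline when the shader applies
    more textures than the TMUs can sample in a single pass."""
    return math.ceil(textures_per_pixel / tmus_per_pipeline)

# A pipeline with 2 TMUs applying 4 textures must loop back once (2 passes),
# halving effective pixel throughput for that workload:
print(loopback_passes(4, 2))  # 2
# A single-textured pixel needs no loopback:
print(loopback_passes(1, 2))  # 1
```

Under this model, two chips with identical "pipeline counts" can deliver very different fill rates depending on how many loopback passes a given shader forces, which is one reason the metric lost its meaning.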
See also
- Graphics pipeline
- Comparison of ATI Graphics Processing Units
- Comparison of NVIDIA Graphics Processing Units
External links
- "TechEncyclopedia (Graphics Pipeline)"
- "ExtremeTech 3D Pipeline Tutorial" by Dave Salvator, ExtremeTech.com, July 13, 2001