3Dc

3Dc is a lossy data compression algorithm for normal maps invented and first implemented by ATI. It builds upon the earlier DXT5 algorithm and is an open standard. 3Dc is now implemented by both ATI and NVIDIA.

Target Application

The target application, normal mapping, is an extension of bump mapping that simulates lighting on geometric surfaces by reading surface normals from a rectilinear grid analogous to a texture map, giving simple models the impression of increased complexity. This additional channel, however, increases the load on the graphics system's memory bandwidth. Pre-existing lossy compression algorithms implemented on consumer 3D hardware lacked the precision necessary for reproducing normal maps without excessive visible artefacts, justifying the development of 3Dc.

Although 3Dc was formally introduced with the ATI X800 series cards, an S3TC-compatible version is also planned for the older R3xx series and for cards from other companies. The quality and compression will not be as good, but the visual errors will still be significantly smaller than those produced by standard S3TC.

Algorithm

Surface normals are three-dimensional vectors of unit length. Because of the length constraint, only two elements of any normal need to be stored; the third can be recomputed. The input is therefore an array of two-dimensional values.
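For illustration, the dropped component can be recomputed from the two stored ones, since x² + y² + z² = 1 for a unit normal. The following is a minimal C sketch; the function name and the clamp against rounding error are illustrative and not part of the 3Dc specification:

#include <math.h>

/* Reconstruct the z component of a unit-length normal from its stored
   x and y components, e.g. after sampling a two-channel normal map. */
float reconstruct_z(float x, float y)
{
    float zsq = 1.0f - x * x - y * y;
    return zsq > 0.0f ? sqrtf(zsq) : 0.0f;  /* clamp against rounding error */
}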

Compression is performed in 4x4 blocks. In each block the two components of each value are compressed separately. This produces two sets of 16 numbers for compression.

The compression is achieved by finding the lowest and highest of the 16 values to be compressed and storing each of those as an 8-bit quantity. Individual elements within the 4x4 block are then stored with 3 bits each, representing their position on an 8-step linear scale from the lowest value to the highest.
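A minimal C sketch of this per-component step, following the description above (a real encoder may search for better endpoints, and the exact 3Dc bit layout is not shown here):

#include <stdint.h>

/* Compress one component of a 4x4 block: find the min/max endpoints and
   quantise each of the 16 values to a 3-bit index on an 8-step linear
   scale between them. Illustrative sketch only. */
void encode_component(const uint8_t values[16],
                      uint8_t *lo, uint8_t *hi, uint8_t indices[16])
{
    *lo = *hi = values[0];
    for (int i = 1; i < 16; i++) {
        if (values[i] < *lo) *lo = values[i];
        if (values[i] > *hi) *hi = values[i];
    }
    int range = *hi - *lo;
    for (int i = 0; i < 16; i++) {
        /* map each value onto the nearest of 8 evenly spaced steps */
        indices[i] = range ? (uint8_t)(((values[i] - *lo) * 7 + range / 2) / range)
                           : 0;
    }
}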

Each component therefore requires 2 × 8 = 16 bits for the endpoints plus 16 × 3 = 48 bits for the indices, or 64 bits, giving a total of 128 bits per 4x4 block once both source components are factored in. In an uncompressed scheme with similar 8-bit precision, the source data is 32 8-bit values for the same area, occupying 256 bits. The algorithm therefore produces a 2:1 compression ratio.
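Decompression reverses the mapping by interpolating between the two stored endpoints. Continuing the sketch above (again illustrative, not the exact 3Dc bit layout):

/* Rebuild one component of a 4x4 block from its two 8-bit endpoints and
   sixteen 3-bit indices by linear interpolation on the 8-step scale. */
void decode_component(uint8_t lo, uint8_t hi, const uint8_t indices[16],
                      uint8_t values[16])
{
    for (int i = 0; i < 16; i++)
        values[i] = (uint8_t)(lo + ((hi - lo) * indices[i] + 3) / 7);
}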

The compression ratio is sometimes stated as being "up to 4:1" because it is common to use 16-bit precision for the input data rather than 8-bit. This produces compressed output that is one quarter the size of the input, but the result is not of comparable precision.
