Computer graphics

From Wikipedia, the free encyclopedia

This article is about computer graphics in general. For the ACM SIGGRAPH journal, see Computer Graphics (Publication).

Computer graphics (CG) is the field of visual computing, in which computers are used both to generate visual images synthetically and to integrate or alter visual and spatial information sampled from the real world.

William Fetter is credited with coining the term "computer graphics" in 1960 to describe his work at Boeing. The first major advance in computer graphics was the development of Sketchpad in 1962 by Ivan Sutherland.

This field can be divided into several areas: real-time 3D rendering (often used in video games), computer animation, video capture and video creation, special effects editing (often used for movies and television), image editing, and modeling (often used for engineering and medical purposes). Development in computer graphics was first fueled by academic interests and government sponsorship. However, as real-world applications of computer graphics in broadcast television and movies proved a viable alternative to more traditional special effects and animation techniques, commercial parties have increasingly funded advances in the field.

It is often thought that the first feature film to use computer graphics was 2001: A Space Odyssey (1968), which attempted to show how computers would be much more graphical in the future. However, all the "computer graphic" effects in that film were hand-drawn animation, and the special effects sequences were produced entirely with conventional optical and model effects.

Perhaps the first use of computer graphics specifically to illustrate computer graphics was in Futureworld (1976), which included an animation of a human face and hand — produced by Ed Catmull and Fred Parke at the University of Utah.


3D

Main article: 3D computer graphics

With the birth of workstation computers (like LISP machines, paintbox computers and Silicon Graphics workstations) came 3D computer graphics, based on vector graphics. Instead of the computer storing information about points, lines, and curves on a 2-dimensional plane, the computer stores the location of points, lines, and, typically, faces (to construct a polygon) in 3-dimensional space.

3-dimensional polygons are the lifeblood of virtually all 3D computer graphics. As a result, most 3D graphics engines are based around storing points (single 3-dimensional coordinates), lines (edges) that connect those points, faces defined by those lines, and groups of faces that together form a complete 3D object.
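
As a minimal sketch (illustrative names, not any particular engine's format), such a structure can be stored in Python as shared vertex positions plus faces that index into them, with an unnormalized face normal recovered from a cross product:

    # Minimal polygonal-mesh storage: shared vertex positions plus faces
    # that reference them by index (illustrative, not a real engine's API).

    def sub(a, b):
        return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

    def cross(a, b):
        """Cross product of two 3D vectors."""
        return (a[1] * b[2] - a[2] * b[1],
                a[2] * b[0] - a[0] * b[2],
                a[0] * b[1] - a[1] * b[0])

    # Vertex positions in 3D space.
    vertices = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.0)]

    # Faces are ordered tuples of vertex indices; two triangles forming a quad.
    faces = [(0, 1, 2), (0, 2, 3)]

    for face in faces:
        p0, p1, p2 = (vertices[i] for i in face)
        normal = cross(sub(p1, p0), sub(p2, p0))  # unnormalized face normal
        print(face, normal)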

Modern-day computer graphics software goes far beyond the simple storage of polygons in computer memory. Today's graphics are the product not only of massive collections of polygons arranged into recognizable shapes, but also of techniques in shading, texturing, and rasterization.

Shading

Shading in hand-drawn graphics can be done in several ways; for example, taking a pencil, flipping it to the side, and stroking it over the paper while applying light pressure.

In the context of 3D computer graphics, the process of shading involves the computer simulating (or, more accurately, calculating) how the faces of a polygon will look when illuminated by a virtual light source. The exact calculation varies depending not only on what data is available about the face being shaded, but also on the shading technique.
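
As a minimal sketch of such a calculation, assuming a single directional light and simple Lambertian (diffuse) reflection, the brightness of a face can be computed from the angle between its normal and the direction toward the light:

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def lambert_shade(face_normal, light_dir, light_intensity=1.0):
        """Diffuse (Lambertian) shading: brightness falls off with the cosine
        of the angle between the surface normal and the light direction."""
        n = normalize(face_normal)
        l = normalize(light_dir)
        return light_intensity * max(0.0, dot(n, l))

    # A face pointing straight up is fully lit from directly above,
    # and receives no diffuse light when lit edge-on from the side.
    print(lambert_shade((0, 0, 1), (0, 0, 1)))  # 1.0
    print(lambert_shade((0, 0, 1), (1, 0, 0)))  # 0.0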

Common shading techniques and related rendering methods include:

  • Flat shading: A technique that shades each polygon of an object based on the polygon's "normal" and the position and intensity of a light source.
  • Gouraud shading: Invented by Henri Gouraud in 1971, a fast and resource-conscious technique used to simulate smoothly shaded surfaces by interpolating vertex colors across a polygon's surface.
  • Texture mapping: A technique for simulating surface detail by mapping images (textures) onto polygons.
  • Phong shading: Invented by Bui Tuong Phong, a smooth shading technique that approximates curved-surface lighting by interpolating the vertex normals of a polygon across the surface; the lighting model includes glossy reflection with a controllable level of gloss.
  • Bump mapping: Invented by Jim Blinn, a normal-perturbation technique used to simulate bumpy or wrinkled surfaces.
  • Normal mapping: Related to bump mapping, a more detailed way of simulating bumps, wrinkles, and other fine surface features on low-polygon models.
  • Ray tracing: A rendering method based on the physical principles of geometric optics that can simulate multiple reflections and transparency.
  • Radiosity: A global illumination technique that uses radiative transfer theory to simulate indirect (reflected) illumination in scenes with diffuse surfaces.
  • Blobs: A technique for representing surfaces without specifying a hard boundary, usually implemented as a procedural surface such as a Van der Waals equipotential (as in chemistry).

Image-Based Rendering

Rendering is conventionally concerned with obtaining 2D images from 3D models; to produce highly accurate, photo-realistic images, the input 3D models must be very accurate in both geometry and color. Faithfully simulating a real-world scene this way is difficult, because acquiring accurate 3D geometry of the world is difficult. Instead of building 3D models, image-based rendering (IBR) starts from images taken from particular viewpoints and tries to synthesize new images from other viewpoints. Though the term "image-based rendering" was coined relatively recently, the practice has been around since the inception of research in computer vision. In 1996, two image-based rendering techniques were presented at SIGGRAPH: light field rendering and Lumigraph rendering. These techniques received special attention in the research community, and many representations for IBR have since been proposed. One popular method is view-dependent texture mapping, an IBR technique from the University of Southern California. Andrew Zisserman et al. at Oxford University have applied machine learning concepts to IBR.
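
As an illustrative sketch of the view-dependent idea (a simplified stand-in, not the published algorithm), the colors a set of reference cameras observe for a surface point can be blended, weighting each camera by how closely its viewing direction matches that of the novel view:

    import math

    def normalize(v):
        length = math.sqrt(sum(c * c for c in v))
        return tuple(c / length for c in v)

    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    def blend_views(novel_dir, reference_views):
        """reference_views: list of (view_dir, observed_color) pairs for one
        surface point; colors are (r, g, b) tuples in [0, 1]."""
        n = normalize(novel_dir)
        weights = [max(0.0, dot(n, normalize(d))) for d, _ in reference_views]
        total = sum(weights) or 1.0
        return tuple(sum(w * color[i] for w, (_, color) in zip(weights, reference_views)) / total
                     for i in range(3))

    # Two reference cameras looking roughly along +z and +x; the novel view
    # is close to the first, so its observed (red) color dominates the blend.
    print(blend_views((0.1, 0.0, 1.0),
                      [((0.0, 0.0, 1.0), (1.0, 0.0, 0.0)),
                       ((1.0, 0.0, 0.0), (0.0, 0.0, 1.0))]))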

Texturing

Polygon surfaces (sequences of faces) can carry more data than a single color: in more advanced software, a surface can act as a virtual canvas for a picture or other rasterized image. Such an image, called a texture, is placed onto a face, a series of faces, or a NURBS "patch" using texture-space coordinates (UVs).

Textures add a further degree of control over how faces and polygons ultimately look once shaded, depending on the shading method and on how the image is interpreted during shading.
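
As a minimal sketch, assuming a texture stored as a row-major grid of color tuples and UV coordinates in the range [0, 1], a nearest-neighbor texture lookup might look like this:

    def sample_texture(texture, u, v):
        """Nearest-neighbor lookup: map UV coordinates in [0, 1] to the closest
        texel in a texture stored as a row-major 2D list of (r, g, b) tuples."""
        height = len(texture)
        width = len(texture[0])
        # Clamp so that u == 1.0 or v == 1.0 stays in bounds.
        x = min(int(u * width), width - 1)
        y = min(int(v * height), height - 1)
        return texture[y][x]

    # A 2x2 checkerboard texture of black and white texels.
    checker = [[(0, 0, 0), (255, 255, 255)],
               [(255, 255, 255), (0, 0, 0)]]

    print(sample_texture(checker, 0.25, 0.25))  # (0, 0, 0), the top-left texel
    print(sample_texture(checker, 0.75, 0.25))  # (255, 255, 255)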

One method of combining textures is called Texture Splatting.
