Talk:3D computer graphics/Temp


A rewrite of 3D computer graphics is taking place here. Please feel free to contribute; keep in mind some of the existing problems that have been brought up on the talk page.

Article revision follows (by 0waldo); hack it up as needed...


== 3D theory ==

The articles about 3D rendered computer graphics should go into more depth on 3D theory, even if this is included in another article. This includes information such as how a software rendering engine renders primitives onto a 2D surface, and the basics of depth perception.


3D computer graphics is a field of study within computer graphics in which numerical models of objects are stored and manipulated in three dimensions by a computer. It is most commonly used in tandem with computer animation to generate visual images for film and television productions, computer and video games, commercial illustration, engineering, and scientific visualization. The term may also refer to images produced using such a model.



== Theory ==

This section is for the mathematical concepts behind 3D graphics – flamurai (t)

At its core, 3D computer graphics is the science, study, and method of projecting a mathematical representation of 3D objects onto a 2D image, using visual tricks such as perspective and shading to simulate the eye's perception of those objects. Every 3D system must provide two components: a way to describe the 3D world or scene, which is composed of mathematical representations of 3D objects called models, and a mechanism for producing a 2D image from that scene, called a renderer.
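
To make the projection idea concrete, here is a minimal sketch of perspective projection under a pinhole camera model; the function name and the focal distance parameter are illustrative choices, not part of any standard.

```python
# Minimal sketch of perspective projection: a 3D point in camera space
# is mapped onto a 2D image plane a fixed distance d from the eye.
# This is an illustrative pinhole model, not any particular renderer's API.

def project(point3d, d=1.0):
    """Project a camera-space point (x, y, z) onto the image plane z = d."""
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point is behind the eye")
    # Similar triangles: objects twice as far away appear half as large.
    return (d * x / z, d * y / z)

# A point receding into the distance appears closer to the image center:
print(project((1.0, 1.0, 2.0)))   # (0.5, 0.5)
print(project((1.0, 1.0, 4.0)))   # (0.25, 0.25)
```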

=== Surface Geometry ===

The data describing the shapes and surfaces to be rendered is commonly referred to as "surface geometry," or "geometry," and is stored in a manner that the renderer can understand. The vast majority of surface geometry currently used in 3D computer graphics is described as NURBS patches, subdivision surfaces, or polygons.

NURBS patches and subdivision surfaces are both descriptions of perfectly smooth surfaces given a set of control points and other related data (edge sharpness, etc.). NURBS patches are the extension of splines into three dimensions, and subdivision surfaces are an extension of patches that supports arbitrary mesh topology (as opposed to NURBS patches, which are essentially curved quadrilaterals), as well as creased edges in some applications. These methods usually take longer to process and render, and are therefore typically used when real-time rendering speed is not an issue, such as for still images and animations for film, television, or prerendered cutscenes in video games.

Polygonal models describe an object as a collection of flat polygons (usually triangles). The resulting surface is faceted rather than curved, with no rounded corners, although a very large number of polygons can approximate a curved surface, and rendering techniques have been developed to compensate for this loss of data. Polygonal models are faster to render and are supported by 3D accelerator cards, so they are primarily used for real-time applications, such as video games, or for the interactive viewports of 3D modeling software itself.
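
As a hedged illustration of how such geometry is commonly stored, the following sketch uses a shared vertex list plus index triples; the names are ours, but the layout mirrors the indexed triangle meshes most renderers and file formats use.

```python
# Illustrative only: a polygonal model stored as shared vertices plus
# triangles that index into the vertex list.

vertices = [
    (0.0, 0.0, 0.0),  # 0
    (1.0, 0.0, 0.0),  # 1
    (1.0, 1.0, 0.0),  # 2
    (0.0, 1.0, 0.0),  # 3
]

# Two triangles sharing an edge approximate one flat quadrilateral face.
triangles = [
    (0, 1, 2),
    (0, 2, 3),
]

def face_normal(tri):
    """Unnormalized face normal via the cross product of two edges."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = (vertices[i] for i in tri)
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    return (uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx)

print(face_normal(triangles[0]))  # (0.0, 0.0, 1.0): the face points along +z
```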

An older method is called constructive solid geometry (CSG), which is the process of combining basic geometric primitives, such as spheres, cones, cylinders, planes, and other easily mathematically-defined shapes, using boolean operations (union, difference, intersection). For example, a tube can be created by subtracting a thinner cylinder from a larger one. Some applications, such as computer-aided design programs that work in conjunction with a CNC milling machine, may still describe objects roughly in this manner.
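
For implicit descriptions of the primitives, the boolean operations have a particularly compact form. The sketch below expresses the tube example with signed distance functions; this is one illustrative realization of CSG, not how CAD packages (which typically use boundary representations) implement it.

```python
# Sketch of CSG boolean operations on implicit primitives, here expressed
# as signed distance functions (negative inside, positive outside).
import math

def cylinder(radius):
    # Infinite cylinder along the z axis, for simplicity.
    return lambda x, y, z: math.sqrt(x * x + y * y) - radius

def union(a, b):
    return lambda x, y, z: min(a(x, y, z), b(x, y, z))

def intersection(a, b):
    return lambda x, y, z: max(a(x, y, z), b(x, y, z))

def difference(a, b):
    return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# The tube example: a thinner cylinder subtracted from a larger one.
tube = difference(cylinder(1.0), cylinder(0.8))
print(tube(0.9, 0.0, 5.0) < 0)  # True: this point is inside the tube wall
```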

Other, more esoteric methods for describing objects are also used, depending on the application. For example, implicit surfaces may be used to describe smooth surfaces for which only voxel information is available (e.g. crude descriptions of simulated water), and other methods may be used to describe things without a tangible surface, such as smoke, fire, or clouds.

=== Scene creation ===

A scene is a collection of primitives – 3D models that cannot be decomposed any further. The simplest way to arrange a scene is to use an array of primitives. However, this method retains no higher-level description of the scene; it simply tells the renderer how to draw it. A more advanced technique is to arrange the objects in a tree data structure called a scene graph. This allows objects to be grouped together logically. For example, one could model a complex object out of several NURBS patches, group them together, and then reuse the group at multiple places in the scene.

How about this: "For example, a car can be modeled out of a NURBS surface for the body and four flat cylinders for wheels, all grouped into a car object. This object can then be used in multiple instances in the scene to fill a parking lot." Arru 10:58, 18 November 2005 (UTC)

In general, primitives are specified in their own local coordinate space. To move them to the proper position in the scene, a series of transformations is applied. Affine transformations, such as scaling, rotation, and translation, can be specified using a 4 × 4 matrix. A transformation is applied by multiplying the matrix with a 4D vector; the fourth coordinate is called the homogeneous coordinate.
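
A brief sketch of this mechanism, using NumPy for the matrix arithmetic; the helper names are our own. Note how the homogeneous coordinate of 1 is what allows translation to be folded into a matrix multiplication at all.

```python
# Sketch: affine transformations as 4x4 matrices acting on a point in
# homogeneous coordinates (x, y, z, 1).
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

def scale(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

point = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous coordinate = 1

# Transformations compose by matrix multiplication; the rightmost
# factor is applied first.
m = translation(0.0, 2.0, 0.0) @ scale(2.0, 2.0, 2.0)
print(m @ point)  # [2. 2. 0. 1.]: scaled to x = 2, then moved up by 2
```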

Each node in a scene graph has an associated transformation. The parent node's transformation applies to the child node. This models the physical connection of objects, for example, the connection between a human and clothing. Even in modeling and rendering systems that do not support scene graphs, there is typically some concept of transformations applied step-by-step, such as a transformation stack.
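
A minimal scene-graph sketch, with class and method names of our own invention: each node stores a local transformation, and a node's world transformation is obtained by composing up the parent chain.

```python
# Sketch of parent-to-child transformation in a scene graph.
import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

class SceneNode:
    def __init__(self, local_transform=None, parent=None):
        self.local = np.eye(4) if local_transform is None else local_transform
        self.parent = parent

    def world_transform(self):
        if self.parent is None:
            return self.local
        # Moving the parent (e.g. the car) automatically moves the child
        # (e.g. a wheel), because the child's world transformation is the
        # parent's world transformation times its own local one.
        return self.parent.world_transform() @ self.local

car = SceneNode(translation(10.0, 0.0, 0.0))         # place the car in the lot
wheel = SceneNode(translation(1.0, -0.5, 0.0), car)  # wheel relative to the car
print(wheel.world_transform()[:3, 3])                # [11.  -0.5  0. ]
```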

=== Rendering ===

Main article: Rendering (computer graphics)

Rendering is the process of taking the mathematical model of the world – the scene – and producing the output image. There are a variety of rendering algorithms available, but at their core, they involve projecting the 3D models onto a 2D image plane.

There are two classes of rendering algorithms: scanline renderers and ray tracers. Scanline renderers operate on an object-by-object basis, directly drawing each polygon or micropolygon to the screen. Thus, they require all objects – including those modeled with continuous curvature – to be tessellated into polygons. Ray tracers operate on a pixel-by-pixel basis, casting a theoretical ray from the eye point into the scene and determining the pixel color from intersections with objects.
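
A toy ray caster illustrating the pixel-by-pixel approach; the sphere-intersection math is the standard quadratic formula, but all names and the ASCII output are illustrative only.

```python
# Toy ray caster: one ray per pixel from the eye through the image plane,
# shaded by whether it hits a single sphere.
import math

def hit_sphere(origin, direction, center, radius):
    """Return the nearest positive ray parameter t, or None if no hit."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

width, height = 16, 16
for j in range(height):
    row = ""
    for i in range(width):
        # Map the pixel to a point on the image plane at z = 1.
        x = (i + 0.5) / width * 2 - 1
        y = 1 - (j + 0.5) / height * 2
        t = hit_sphere((0, 0, 0), (x, y, 1.0), (0, 0, 3.0), 1.0)
        row += "#" if t is not None else "."
    print(row)  # a crude ASCII image of the sphere
```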

One of the primary responsibilities of a renderer is hidden surface determination. Ray tracing performs this task implicitly by computing the pixel color from the first intersection with an object. Since scanline renderers render on a polygon-by-polygon basis, more sophisticated techniques must be used to determine which object is closest to the theoretical eye. The simplest approach is to draw the objects farthest from the eye first, so that closer objects are drawn on top of them. However, this technique, called the painter's algorithm, does not yield the proper result when two polygons cross over each other. Z-buffering was developed to overcome this inadequacy. This technique employs an extra buffer that stores the depth of the geometry already rendered at each pixel. If the depth of the polygon currently being rendered at a pixel is less than the z-buffer value there, the output pixel is overwritten and the z-buffer updated; if not, nothing is drawn.
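
A sketch of the z-buffer test described above, with illustrative buffer names and sizes:

```python
# Sketch of z-buffering: each pixel write is guarded by a depth test.

WIDTH, HEIGHT = 4, 4
FAR = float("inf")

color_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]
z_buffer = [[FAR] * WIDTH for _ in range(HEIGHT)]

def write_pixel(x, y, depth, color):
    """Overwrite the pixel only if this fragment is closer than what is there."""
    if depth < z_buffer[y][x]:
        z_buffer[y][x] = depth
        color_buffer[y][x] = color
    # Otherwise the existing, nearer surface keeps the pixel.

# Two overlapping fragments at one pixel, drawn in the "wrong" order:
write_pixel(1, 1, depth=2.0, color=(255, 0, 0))  # near, red
write_pixel(1, 1, depth=5.0, color=(0, 0, 255))  # far, blue: rejected
print(color_buffer[1][1])  # (255, 0, 0): the near surface wins
```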

A sharp, perfect image rendered with infinite depth of field is not actually photorealistic. The human eye is accustomed to imperfections such as lens flare, depth-of-field limitations, and motion blur manifesting themselves in photographs and films. Reyes rendering, a scanline algorithm, was developed by Pixar specifically to address these needs, along with the need to render smooth curved surfaces.

=== Lighting and shading ===

Shading is the process of determining the color of a given pixel in the image. It typically involves lighting, which models the interaction between objects and light sources. The key components needed for a lighting model are the properties of the light, the reflective properties of the surface, and the surface normal at the point where the lighting equation is to be evaluated.

To produce an adequate visual representation of an object, the physics of light must be simulated. The purest mathematical model of the flow of light is the rendering equation, which is based on the law of conservation of energy. It is an integral equation that expresses the light leaving a certain position as the light emitted at that position plus the integral of the light arriving from everywhere in the scene and reflected toward the viewer. This equation cannot be evaluated exactly by finite algorithms, so approximations must be made.
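
For reference, the rendering equation is commonly written as

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\,
    L_i(x, \omega_i)\, (\omega_i \cdot n)\, \mathrm{d}\omega_i
```

where L_o is the outgoing light at point x in direction ω_o, L_e is the emitted light, the integral runs over the hemisphere Ω of incoming directions ω_i, f_r is the BRDF discussed below, and the factor (ω_i · n) accounts for the foreshortening of light arriving at a grazing angle.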

The simplest illumination models consider only light that travels from a source to an object in a straight line. This is called direct illumination. The way light reflects off the object can be modeled by a mathematical function called the bidirectional reflectance distribution function (BRDF). The BRDF models how light reflects off a given material. Most rendering systems simplify this even further and calculate direct illumination as the sum of two components: diffuse and specular. The diffuse, or Lambertian, component models light that is scattered in all directions when it hits the object, while the specular component models light that reflects cleanly off the surface like a mirror. The Phong reflection model adds a third component, ambient, which provides a very basic simulation of indirect light.
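
A sketch of this diffuse-plus-specular split in the spirit of the Phong reflection model; the coefficients and function names are illustrative, and all vectors are assumed to be unit length.

```python
# Sketch of direct illumination as ambient + diffuse + specular terms.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def phong_intensity(normal, to_light, to_eye, kd=0.7, ks=0.3,
                    shininess=32, ambient=0.05):
    # Diffuse (Lambertian): proportional to the cosine of the angle
    # between the surface normal and the light direction.
    diffuse = kd * max(0.0, dot(normal, to_light))
    # Specular: reflect the light direction about the normal and compare
    # with the view direction; the exponent controls highlight tightness.
    d = dot(normal, to_light)
    reflected = tuple(2 * d * n - l for n, l in zip(normal, to_light))
    specular = ks * max(0.0, dot(reflected, to_eye)) ** shininess
    return ambient + diffuse + specular

# Light and viewer both straight above a horizontal surface:
print(phong_intensity((0, 1, 0), (0, 1, 0), (0, 1, 0)))  # 0.05 + 0.7 + 0.3
```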

In reality, objects are lit by a number of indirect sources: light bounces around the scene many times until it loses its energy. Global illumination attempts to model this light. Like direct illumination, global illumination can be divided into specular and diffuse components. Diffuse interreflection refers to light that has already bounced off one object hitting another object. Since the first object absorbs some wavelengths, the light reaching the second object is a different color than the light that hit the first. Specular interreflection typically manifests itself as caustics, which occur when light is focused by a specular surface onto a specific point, for example, the condensed dot of light from the sun shining through a magnifying glass.

Since full global illumination algorithms, such as radiosity and photon mapping, are computationally expensive, techniques have been developed to approximate global lighting. Ambient occlusion does so by determining how much ambient light a point would likely receive.

Polygon models used in real-time systems generally must have a low level of detail. The simplest and cheapest way to light them is to calculate one light intensity value for each polygon based on its normal. This is called flat shading because each polygon retains its flat appearance. To eliminate the obvious faceting, the light values at the vertices must somehow be interpolated. In Gouraud shading, light intensity is calculated at each vertex of the mesh based on per-vertex normals, and the resulting values are linearly interpolated across the surface of each polygon. The major drawback of Gouraud interpolation is that it can miss specular highlights that fall in the interior of a polygon. To solve this, Phong interpolation interpolates the normals themselves across the surface of the polygon instead of precalculating illumination at each vertex; the lighting equation is then evaluated at each pixel using the interpolated normal.
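
The difference between the two interpolation schemes can be shown in a few lines. In this sketch (toy lighting function, names our own), Gouraud interpolation averages the two vertex intensities and misses the bright peak between them, which Phong interpolation recovers.

```python
# Sketch contrasting Gouraud and Phong interpolation along one span
# between two vertices A and B.
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return tuple(x / n for x in v)

def light(normal):
    # Toy lighting: diffuse term from a light along +z.
    return max(0.0, normal[2])

def lerp(a, b, t):
    return tuple(x + (y - x) * t for x, y in zip(a, b))

normal_a, normal_b = normalize((1, 0, 1)), normalize((-1, 0, 1))

t = 0.5  # midpoint of the span
# Gouraud: light the vertices, then interpolate the resulting intensities.
gouraud = light(normal_a) + (light(normal_b) - light(normal_a)) * t
# Phong: interpolate the normals, then light the interpolated normal.
phong = light(normalize(lerp(normal_a, normal_b, t)))

print(gouraud, phong)  # ~0.707 vs 1.0: Phong captures the peak between vertices
```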

These lighting equations only yield solid-colored objects, however. Modeling every surface detail as a different object would be time consuming and expensive. Texture mapping is a technique that can be used to add surface color detail without increasing the complexity of the scene. An image is mapped to the surface of a model. Color values are then looked up in the texture when the lighting equation is evaluated. In essence, texture mapping is like wrapping a decal around an object.
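
A hedged sketch of the lookup itself, using nearest-neighbour sampling on a tiny hand-written image; real systems filter and wrap the lookup in various ways.

```python
# Sketch of texture mapping: a (u, v) coordinate in [0, 1] x [0, 1] is
# stored with each surface point and used to look up a color in an image.

texture = [
    [(255, 0, 0), (0, 255, 0)],
    [(0, 0, 255), (255, 255, 255)],
]  # a tiny 2x2 image

def sample(u, v):
    height, width = len(texture), len(texture[0])
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

# During shading, the surface color comes from the texture rather than
# from a single per-object color:
print(sample(0.9, 0.1))  # (0, 255, 0)
```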

Texture maps can only do so much, however, as they carry no information about the lights in the scene. Bump maps are texture maps that are used to modify the surface normal rather than the color value. A bump map is a single-channel (grayscale) image; the value at each point is used to perturb the surface normal there, and the perturbed normal is then used in the lighting calculation. Bump maps add detail, such as wrinkles, to models without increasing geometric complexity.

Normal mapping is an application of bump maps that replaces rather than perturbs the surface normal. A normal map is a 3-channel image where each pixel represents a 3D normal vector.
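
Decoding such an image is a simple remapping; this sketch assumes the common convention of storing each component of a unit normal in an 8-bit channel.

```python
# Sketch of decoding a normal map texel: each 8-bit channel in [0, 255]
# maps to a component in [-1, 1], so (128, 128, 255) is roughly the
# unperturbed normal (0, 0, 1).

def decode_normal(r, g, b):
    return tuple(c / 255.0 * 2.0 - 1.0 for c in (r, g, b))

print(decode_normal(128, 128, 255))  # approximately (0.0, 0.0, 1.0)
```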

The goal of any shading algorithm is to determine the output color at a specific point on an object's surface. Programmable shaders allow complete flexibility in this process. Such shaders are written in a special-purpose programming language called a shading language. These languages are developed for computer graphics applications, and thus include linear algebra and lighting features. Shaders can involve any and all of lighting, texture mapping, and geometry manipulation. A procedural shader is a shader that determines the output color completely algorithmically. Such shaders are extremely robust and can produce convincing visual effects without the need for large textures; they can also be driven with data from image maps.
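
A classic procedural shader is the 3D checkerboard, which computes color purely from the shading position; this sketch is illustrative and not drawn from any particular shading language.

```python
# Sketch of a procedural shader: the surface color is computed purely
# from the shading position, with no stored texture image.
import math

def checker(x, y, z, scale=1.0, color_a=(1.0, 1.0, 1.0),
            color_b=(0.1, 0.1, 0.1)):
    parity = (math.floor(x / scale) + math.floor(y / scale)
              + math.floor(z / scale)) % 2
    return color_a if parity == 0 else color_b

print(checker(0.5, 0.5, 0.5))  # white cell
print(checker(1.5, 0.5, 0.5))  # dark cell
```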

A special class of shaders is vertex and pixel shaders. These shaders are specifically designed to work with a scanline renderer of polygon models and run on a special graphics processor called the GPU. In the past, graphics hardware implemented a specific fixed-function pipeline, so anyone using it was forced to use whatever lighting model was programmed into the hardware. With vertex and pixel shaders, each step of the rendering process can be controlled.

== Creating 3D graphics ==

This section is about PRACTICAL concerns only... the above section should be about the math – flamurai (t)

3D computer graphics are created with the aid of specialized computer software. Several basic steps are required:

  • Modeling: Objects, represented by three-dimensional data, are created.
  • Rendering: The computer produces a visual representation of that data.

In addition, depending on the application, other steps may be necessary as well:

  • Texturing and shading: Material properties describing how surfaces react to light are defined.
  • Scene layout: The camera, objects, and lights are placed in relation to each other.
  • Animation: Objects and characters are animated.
  • Simulation: Some things which are impractical to model or animate by hand may be simulated, such as smoke, water, or sparks.
  • Postprocessing/compositing: 2D effects such as color correction, noise reduction, blooming, or layering may be applied to the images that rendering produces.

=== Modeling ===

The internal representation of the data in the computer is commonly referred to as a model; it may be a model of real-world objects or phenomena, such as a weather system, building, or automobile, or an imagined object such as a flying saucer or dinosaur.

Wha..? Did some creationist sneak in that last bit?203.184.25.142 06:00, 16 June 2006 (UTC)

Though it is possible to specify 3D models by writing the equations that describe them, this is time consuming and unintuitive. In practice, 3D modeling programs are used.

=== Rendering ===

Generally, rendering systems can be divided into two categories: those used to render production images, and real-time rendering systems. In the former, image quality is far more important than rendering time; in the latter, quality is sacrificed to maintain a consistently high frame rate.

== Uses ==

=== Television, film, commercials, and illustrations ===

=== Interactive media ===

=== Scientific visualization ===

=== Product design and engineering ===

3D graphics software is used by designers and engineers throughout the design, development, and manufacture of consumer and industrial products. The main 3D tool used is computer-aided design (CAD), but other product lifecycle management (PLM) technologies, such as computer-aided manufacturing (CAM), computer-aided engineering (CAE), collaborative product development (CPD), manufacturing process management (MPM), and product visualization, also make extensive use of 3D graphics technology.



Some older discussion of the proposed article structure

Planned Structure

  • What is 3D Computer Graphics ?
    1. What is 3D? show picture of a 2D surface and, in contrast, a 3D object
    2. Now, what is graphics? some digital painting and a digital rendering
    3. So, 3D computer graphics? 2 thumbnails; a pic from a 3D gallery and a still from a scene in a movie
  • Where is it used ?
    1. Scientific Visualizations 3D medical imaging, weather
    2. Movies/Television say, LOTR 1, Shrek and one inconspicuous use; ads
    3. Games any of the 1000+ Doom-style from past 5 years
    4. Training Military and commercial simulation for training
    5. Architecture/Plant Design CAD Models for Presentations, Construction Guidance, and Operation Scenarios
  • Concepts in creation software-independent, specific technique independent description
  • Process in creation
    1. Modelling 2-3 screenshots
    2. Layout and camera first, some random arrangement and then a setup with 2 different angles and perhaps different "lenses"
    3. Shading and Lighting wireframe, flat, gouraud and final render side-by-side
    4. Animating first, two wireframes at two keyframes. Then, 6 pics including initial 2 + 4 interpolations
    5. Rendering wireframe and rendered side by side
    6. Compositing something like Gollum or pics of chroma-keying
  • Tools employed major and minor software packages + basic description of hardware used
  • Resources on the net
    1. Free demo program downloads XSI Experience, gmax ..etc
    2. Tutorials
    3. CG news sites/blogs CGChannel, Renderosity ...etc
  • Participation on the net
    1. Popular webrings/art galleries and communities RAPH.com ..etc
    2. Discussion forums CGTalk.com, Digital Sculpting ..etc

Google Search: 3d quickbasic
Google Search: "blue moon rendering toolkit"
Google Search: "3d engines"


Discussion of shadow algorithms including Shadow Volumes and Shadow Maps.


Perhaps this page should be just a short summary, with the details moved to separate pages such as Solid modelling, Computer animation, 3D computer animation, 3D graphics editors, 3D rendering, etc. Each of these topics can fill several books.. Jorge Stolfi 23:58, 7 Mar 2004 (UTC)

And so can almost all topics at Wikipedia Gyan
And so is done quite often. Mikkalai 16:11, 21 Mar 2004 (UTC)
But I would leave the finer details of those topics to their individual pages. In the outline above, the subtopics aren't meant to be comprehensively covered, just given an intro along with illustrations. Leaving them out altogether would be a disservice. -- Gyan 17:39, 21 Mar 2004 (UTC)
I agree. An overview article here, with sections dealing with each sub-topic at a superficial/introductory level, each of which then points to a detailed article on that sub-topic, is the usual way of solving these problems. -- Anon.


I second Jorge's proposal and also suggest a history section. Mikkalai 16:11, 21 Mar 2004 (UTC)

I'm gonna start on some of this tomorrow using Blender_(program). enigmasoldier 12:29, 27 Mar 2004 (UTC)

== Planned Structure ==

I suggest merging the categories "Games" and "Training" (under "Where is it used?") into a category named "Interactive Media". James C. 03:00, 2004 Jul 24 (UTC)



Some specific addressing of common software -- if only a link to a List of 3D Modeling and Rendering Software sort of thing, should exist. Reason being, it's absurd to go to an article on modelling and not have some way to get to the articles on software that employ it.

-- Dodger

---

There should be some discussion of the history of 3D computer graphics, and the impact of 3D graphics upon computer games, especially the boom of 3D that came with consoles such as the PlayStation, Dreamcast, and N64, and the invention of PC 3D graphics cards. I believe this would be of more interest to the average user than the highly technical material in the article at present.

Molloi

---

Just a visitor, but some history would be welcome. Modern applications of 3DCG are worthy topics, but knowing how it got to this point, cross linking with IrisGL, OpenGL, SGI, Evans and Sutherland, etc would be good.

== Tone Adjustment ==

Could we get more of an encyclopedic tone for this article? Right now it sounds more like something for Wikibooks, especially with the liberal use of second person voice. ✈ James C. 13:35, 2005 Jun 24 (UTC)

== Excellent work ==

To flamurai and all others who have been working on this temporary article while I've been neglecting it, thank you! This is exactly the kind of structured, coherent article I always dreamed we might have one day. I felt a bit guilty for starting this and never finishing it, but you've done a much better job than I would have.

Perhaps when the "Uses" and "Creating 3D graphics" sections become more fleshed out, they could be moved to just after the introduction. I expect most readers will be more interested in the applications of 3D graphics ("Where have I seen them?" or "What good are they?") than in the mathematical gobbledygook :-) I also agree that much of this should be split into separate articles, since there's so much detailed information here. I think a WikiBooks project would be a better place to further expand on the intricacies of 3D graphics, before this article gets too unwieldy. -- Wapcaplet 29 June 2005 23:19 (UTC)

Thanks. A note to people editing this article: This is an encyclopedia! This is not a tutorial. The tone is supposed to be formal. It should not be written in the first person. It should not include any self-references. I admit what is written by operativem is more practical than what I wrote, but it is not encyclopedic. Once again, this is an encyclopedia article, not an "introduction to computer graphics".
Also, I never intended for the theory section to be first. It was just what I felt I could write the easiest. We can shuffle things around. I think it is important, however, to separate the theory from the practical for the reasons you mentioned (the theory may be too dense for casual readers). Really, my main intention in writing the theory section was to hit on the major techniques in computer graphics so we can link to their articles.
– flamurai (t) 04:41, July 25, 2005 (UTC)

I have been writing here before. As I am neither an encyclopedia writer nor a native English writer, but feel captivated by the subject, my next contribution is a very visual one. I recently made a page that gives some direct visual information: http://members.home.nl/rouweler/about3dsimple.html Feel free to use it, refer to it, or link to it. Keep up the good work; progress from the touchstone is always gradual, but the general idea is getting much clearer now. Marcel

== Actual Software? ==

Shouldn't we mention the existing, available software, such as Autodesk Maya, 3D Studio Max, Cinema 4D, etc.? Also, we could describe the role of proprietary in-house software at companies in the computer graphics industry. Ben

You can find a list of common 3D computer graphics software on the 3D computer graphics software Wikipedia page. -Rob