Ray casting

Ray casting is the use of ray-surface intersection tests to solve a variety of problems in computer graphics and computational geometry. The term was first used in computer graphics in a 1982 paper by Scott Roth to describe a method for rendering constructive solid geometry models.[1]

Ray casting can refer to a variety of related problems and techniques in rendering and geometric computation.

Although "ray casting" and "ray tracing" were often used interchangeably in early computer graphics literature,[4] more recent usage tries to distinguish the two.[5] The distinction is that ray casting is a rendering algorithm that never recursively traces secondary rays, whereas other ray tracing-based rendering algorithms may do so.

Concept

Ray casting is the most basic of many computer graphics rendering algorithms that use the geometric algorithm of ray tracing. Ray tracing-based rendering algorithms operate in image order to render three-dimensional scenes to two-dimensional images. Geometric rays are traced from the eye of the observer to sample the light (radiance) travelling toward the observer from the ray direction. The speed and simplicity of ray casting come from computing the color of the light without recursively tracing additional rays that sample the radiance incident on the point that the ray hit. This eliminates the possibility of accurately rendering reflections, refractions, or the natural falloff of shadows; however, all of these elements can be faked to a degree by creative use of texture maps or other methods. The high speed of calculation made ray casting a handy rendering method in early real-time 3D video games.
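As an illustration of this image-order approach, the sketch below generates one primary ray per pixel from a pinhole camera at the eye. The field of view, the camera placement at the origin looking down the negative z-axis, and the function name are illustrative assumptions rather than part of any particular renderer.

    import math

    def primary_ray(x, y, width, height, fov_deg=60.0):
        """Return (origin, direction) for the ray through pixel (x, y).

        The eye sits at the origin looking down -z; fov_deg is the vertical
        field of view. All names and values here are illustrative."""
        aspect = width / height
        scale = math.tan(math.radians(fov_deg) / 2)
        # Map the pixel centre to normalized device coordinates in [-1, 1].
        ndc_x = (2 * (x + 0.5) / width - 1) * aspect * scale
        ndc_y = (1 - 2 * (y + 0.5) / height) * scale
        dx, dy, dz = ndc_x, ndc_y, -1.0
        length = math.sqrt(dx * dx + dy * dy + dz * dz)
        return (0.0, 0.0, 0.0), (dx / length, dy / length, dz / length)

Each returned direction is the direction along which the renderer samples radiance; because no secondary rays are spawned at the hit point, the cost per pixel stays constant.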

In nature, a light source emits a ray of light that travels, eventually, to a surface that interrupts its progress. One can think of this "ray" as a stream of photons travelling along the same path. At this point, any combination of three things might happen with this light ray: absorption, reflection, and refraction. The surface may reflect all or part of the light ray, in one or more directions. It might also absorb part of the light ray, resulting in a loss of intensity of the reflected and/or refracted light. If the surface has any transparent or translucent properties, it refracts a portion of the light beam into itself in a different direction while absorbing some (or all) of the spectrum (and possibly altering the color). Between absorption, reflection, and refraction, all of the incoming light must be accounted for, and no more. A surface cannot, for instance, reflect 66% of an incoming light ray and refract 50%, since the two would add up to 116%. From here, the reflected and/or refracted rays may strike other surfaces, where their absorptive, refractive, and reflective properties are again calculated based on the incoming rays. Some of these rays travel in such a way that they hit our eye, causing us to see the scene and thus contributing to the final rendered image. Attempting to simulate this real-world process of tracing light rays with a computer would be extremely wasteful, as only a minuscule fraction of the rays in a scene would ever reach the eye.

The first ray casting algorithm used for rendering was presented by Arthur Appel in 1968.[6] The idea behind ray casting is to trace rays from the eye, one per pixel, and find the closest object blocking the path of that ray; think of the image as a screen door, with each square in the screen being a pixel. This is then the object the eye sees through that pixel. Using the material properties and the effect of the lights in the scene, this algorithm can determine the shading of this object. The simplifying assumption is made that if a surface faces a light, the light will reach that surface and not be blocked or in shadow. The shading of the surface is computed using traditional 3D computer graphics shading models. One important advantage ray casting offered over older scanline algorithms was its ability to deal easily with non-planar surfaces and solids, such as cones and spheres: if a mathematical surface can be intersected by a ray, it can be rendered using ray casting. Elaborate objects can be created with solid modelling techniques and easily rendered.
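The core steps described above, intersecting a per-pixel ray with the scene and shading the nearest hit without a shadow test, can be sketched as follows. The sphere primitive, the Lambertian shading term, and all names are assumptions made for illustration, not Appel's original formulation.

    import math

    def intersect_sphere(origin, direction, center, radius):
        """Nearest positive hit distance along a unit-length ray, or None."""
        oc = [origin[i] - center[i] for i in range(3)]
        b = sum(oc[i] * direction[i] for i in range(3))
        c = sum(v * v for v in oc) - radius * radius
        disc = b * b - c
        if disc < 0:
            return None                    # the ray misses the sphere
        t = -b - math.sqrt(disc)
        return t if t > 0 else None

    def shade(point, normal, light_pos):
        """Lambertian term only; the light is assumed unblocked, matching the
        simplifying assumption described above (no shadow ray is traced)."""
        to_light = [light_pos[i] - point[i] for i in range(3)]
        norm = math.sqrt(sum(v * v for v in to_light))
        to_light = [v / norm for v in to_light]
        return max(0.0, sum(normal[i] * to_light[i] for i in range(3)))

    # Example: a ray cast straight down -z hits a unit sphere centred at z = -3
    # at t = 2, i.e. two units in front of the eye.
    t = intersect_sphere((0.0, 0.0, 0.0), (0.0, 0.0, -1.0), (0.0, 0.0, -3.0), 1.0)

Combined with a primary-ray generator like the one sketched earlier, looping over every pixel, keeping the smallest t over all objects, and shading that hit yields a complete, if minimal, ray caster.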

An early use of Appel's ray casting rendering algorithm was by Mathematical Applications Group, Inc., (MAGI) of Elmsford, New York.[7]

Ray casting in computer games

Wolfenstein 3-D

The world in Wolfenstein 3-D is built from a square-based grid of uniform-height walls meeting solid-coloured floors and ceilings. To draw the world, a single ray is traced for every column of screen pixels, and a vertical slice of wall texture is selected and scaled according to where in the world the ray hits a wall and how far it travels before doing so.[8]
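A compact sketch of that column-by-column scheme is given below. The grid map, screen size, and field of view are made-up values, and the cell stepping follows the commonly published digital differential analyzer (DDA) formulation rather than id Software's actual source.

    import math

    MAP = ["111111",
           "1....1",
           "1..1.1",
           "1....1",
           "111111"]                       # '1' = wall cell, '.' = empty cell
    SCREEN_W, SCREEN_H, FOV = 320, 200, math.radians(60)

    def cast_ray(px, py, angle):
        """Step cell by cell through the square grid (DDA) and return the
        distance at which the ray first enters a wall cell."""
        dx, dy = math.cos(angle), math.sin(angle)
        map_x, map_y = int(px), int(py)
        delta_x = abs(1 / dx) if dx else float("inf")   # ray length per x cell
        delta_y = abs(1 / dy) if dy else float("inf")   # ray length per y cell
        step_x, step_y = (1 if dx > 0 else -1), (1 if dy > 0 else -1)
        side_x = ((map_x + 1 - px) if dx > 0 else (px - map_x)) * delta_x
        side_y = ((map_y + 1 - py) if dy > 0 else (py - map_y)) * delta_y
        while True:
            if side_x < side_y:            # next boundary crossed is vertical
                side_x += delta_x
                map_x += step_x
                dist = side_x - delta_x
            else:                          # next boundary crossed is horizontal
                side_y += delta_y
                map_y += step_y
                dist = side_y - delta_y
            if MAP[map_y][map_x] == "1":
                return dist

    def render_walls(px, py, facing):
        """One ray per screen column; nearer walls give taller slices."""
        for col in range(SCREEN_W):
            angle = facing - FOV / 2 + FOV * col / SCREEN_W
            dist = cast_ray(px, py, angle) * math.cos(angle - facing)  # de-fish-eye
            slice_height = min(SCREEN_H, int(SCREEN_H / max(dist, 1e-6)))
            # ...pick a texture column at the hit point and draw slice_height pixels...

Because hits can only occur at cell boundaries, each ray visits at most a handful of cells, which is what makes the grid restriction pay off.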

The purpose of the grid-based levels is twofold: ray-wall collisions can be found more quickly, since the potential hits become more predictable, and memory overhead is reduced. However, encoding wide-open areas takes extra space.

Comanche series

The Voxel Space engine developed by NovaLogic for the Comanche games traces a ray through each column of screen pixels and tests each ray against points in a heightmap. Then it transforms each element of the heightmap into a column of pixels, determines which are visible (that is, have not been occluded by pixels that have been drawn in front), and draws them with the corresponding color from the texture map.[9]
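Published descriptions of this approach can be condensed into a per-column sketch like the one below; the parameter names, the wrap-around heightmap sampling, and the fixed one-unit step are illustrative choices, not NovaLogic's actual implementation.

    import math

    def render_column(col_angle, cam_x, cam_y, cam_height, heightmap, colormap,
                      screen_h=200, horizon=100, scale=120.0, max_dist=300):
        """March front to back along one ray over the heightmap, building one
        column of output colours; 'ybuffer' remembers the highest pixel drawn
        so far, so nearer terrain occludes farther terrain."""
        sin_a, cos_a = math.sin(col_angle), math.cos(col_angle)
        column = [None] * screen_h         # colour per screen row, top to bottom
        ybuffer = screen_h                 # lowest screen row not yet covered
        for dist in range(1, max_dist):
            # Sample the heightmap where the ray crosses distance 'dist'.
            mx = int(cam_x + cos_a * dist) % len(heightmap[0])
            my = int(cam_y + sin_a * dist) % len(heightmap)
            terrain = heightmap[my][mx]
            # Perspective projection: nearer and taller terrain lands higher up.
            screen_y = max(0, int((cam_height - terrain) / dist * scale + horizon))
            if screen_y < ybuffer:         # visible: not hidden by nearer terrain
                for y in range(screen_y, ybuffer):
                    column[y] = colormap[my][mx]
                ybuffer = screen_y
            if ybuffer == 0:               # the column is full; stop marching
                break
        return column

Calling this once per screen column, with the angle swept across the field of view, reproduces the characteristic terrain rendering of the engine.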

Computational geometry setting

In computational geometry, the ray casting problem is also known as the ray shooting problem and may be stated as the following query problem: given a set of objects in d-dimensional space, preprocess them into a data structure so that, for each query ray, the first object hit by the ray can be found quickly. The problem has been investigated for various settings: space dimension, types of objects, restrictions on query rays, etc.[10] One technique is to use a sparse voxel octree.
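As a sketch of the query-problem interface (not of any particular published data structure), the code below preprocesses a set of axis-aligned boxes and answers first-hit queries by brute force; a real index such as a BVH, kd-tree, or sparse voxel octree would organise the objects so a query avoids testing every one of them. The class and function names are hypothetical.

    from dataclasses import dataclass
    from typing import List, Optional, Tuple

    @dataclass
    class Box:
        """Axis-aligned box in d dimensions, given by its min and max corners."""
        lo: Tuple[float, ...]
        hi: Tuple[float, ...]

    def ray_box_entry(origin, direction, box):
        """Slab test: parameter t >= 0 where the ray first enters the box, or None."""
        t_near, t_far = 0.0, float("inf")
        for o, d, lo, hi in zip(origin, direction, box.lo, box.hi):
            if d == 0.0:
                if o < lo or o > hi:
                    return None            # parallel to this slab and outside it
            else:
                t1, t2 = (lo - o) / d, (hi - o) / d
                t_near = max(t_near, min(t1, t2))
                t_far = min(t_far, max(t1, t2))
                if t_near > t_far:
                    return None
        return t_near

    class RayShootingIndex:
        """Brute-force stand-in for a ray shooting structure: 'preprocessing'
        just stores the boxes; a query scans them all."""
        def __init__(self, boxes: List[Box]):
            self.boxes = boxes

        def first_hit(self, origin, direction) -> Optional[Tuple[int, float]]:
            best = None
            for i, box in enumerate(self.boxes):
                t = ray_box_entry(origin, direction, box)
                if t is not None and (best is None or t < best[1]):
                    best = (i, t)          # index of the box and hit distance
            return best

The point of the computational-geometry results cited above is to replace the linear scan in first_hit with sublinear query time after suitable preprocessing.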

References

  1. Roth, Scott D. (February 1982), "Ray Casting for Modeling Solids", Computer Graphics and Image Processing 18 (2): 109–144, doi:10.1016/0146-664X(82)90169-1
  2. Woop, Sven; Schmittler, Jörg; Slusallek, Philipp (2005), "RPU: A Programmable Ray Processing Unit for Realtime Ray Tracing", Siggraph 2005 24 (3): 434, doi:10.1145/1073204.1073211
  3. Daniel Weiskopf (2006). GPU-Based Interactive Visualization Techniques. Springer Science & Business Media. p. 21. ISBN 978-3-540-33263-3.
  4. Foley, James D.; van Dam, Andries; Feiner, Steven K.; Hughes, John F. (1995), Computer Graphics: Principles and Practice, Addison-Wesley, p. 701, ISBN 0-201-84840-6
  5. Boulos, Solomon (2005), ACM SIGGRAPH 2005 Courses (SIGGRAPH '05): 10, doi:10.1145/1198555.1198749
  6. "Ray-tracing and other Rendering Approaches" (PDF), lecture notes, MSc Computer Animation and Visual Effects, Jon Macey, University of Bournemouth
  7. Goldstein, R. A., and R. Nagel. 3-D visual simulation. Simulation 16(1), pp. 25–31, 1971.
  8. Wolfenstein-style ray casting tutorial by F. Permadi
  9. Andre LaMothe. Black Art of 3D Game Programming. 1995, ISBN 1-57169-004-2, pp. 14, 398, 935-936, 941-943.
  10. "Ray shooting, depth orders and hidden surface removal", by Mark de Berg, Springer-Verlag, 1993, ISBN 3-540-57020-9, 201 pp.
