Interactive visualization
From Wikipedia, the free encyclopedia
Interactive visualization is a branch of graphic visualization in computer science that studies how humans interact with computers to create graphic illustrations of information and how this process can be made more efficient. For a visualization to be considered interactive, it must satisfy two criteria:
- Human Input: control of some aspect of the visual representation of information, or of the information being represented, must be available to a human, and
- Response Time: changes made by the human must be incorporated into the visualization in a timely manner. In general, interactive visualization is considered a soft real-time task.
One particular type of interactive visualization is virtual reality (VR), where the visual representation of information is presented using an immersive display device such as a stereo projector (see stereoscopy). VR is also characterized by the use of a spatial metaphor, where some aspect of the information is represented in three dimensions so that humans can explore the information as if it were physically present (when in fact it is remote), sized appropriately (when in fact it is at a much smaller or larger scale than humans can sense directly), or had a tangible shape (when in fact it might be completely abstract).
Another type of interactive visualization is collaborative visualization, in which multiple people interact with the same computer visualization to communicate their ideas to each other or to explore information cooperatively. Frequently, collaborative visualization is used when people are physically separated. Using several networked computers, the same visualization can be presented to each person simultaneously. The people then make annotations to the visualization as well as communicate via audio (e.g., telephone), video (e.g., a videoconference), or text (e.g., IRC) messages.
For example, the University of California, Irvine, has a Creative Interactive Visualization Laboratory focused on large-scale visualization, interactive rendering, and virtual reality.
One product of the lab is the InVis system, suitable for viewing medical data sets such as scans of the body:
"Interactive behavior is an essential feature of the system, because it enables the user to manipulate and adjust the visualization straight on demand."
Human control of visualization
The Programmer's Hierarchical Interactive Graphics System (PHIGS) was one of the first programmatic efforts at interactive visualization and provided an enumeration of the types of input humans provide (in the list below; a minimal sketch of these input classes follows the list). People can:
- Pick some part of an existing visual representation,
- Locate a point of interest (which may not have an existing representation),
- Stroke a path,
- Choose an option from a list of options,
- Valuate by inputting a number, and
- Write by inputting text.
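The following is a minimal sketch of how these six logical input classes might be modelled in an application's event-handling code. The enum names follow the PHIGS terminology, but the InputEvent record and handle_event function are hypothetical illustrations, not part of the PHIGS API.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Any

class InputClass(Enum):
    """The six logical input classes enumerated by PHIGS."""
    PICK = auto()      # select part of an existing visual representation
    LOCATOR = auto()   # locate a point of interest
    STROKE = auto()    # stroke a path (a sequence of points)
    CHOICE = auto()    # choose an option from a list
    VALUATOR = auto()  # valuate by inputting a number
    STRING = auto()    # write by inputting text

@dataclass
class InputEvent:
    """Hypothetical event record pairing an input class with its payload."""
    kind: InputClass
    payload: Any  # e.g. an object id for PICK, (x, y) for LOCATOR, a float for VALUATOR

def handle_event(event: InputEvent) -> None:
    # A real system would dispatch to the visualization here; this sketch just reports.
    print(f"{event.kind.name}: {event.payload!r}")

# Example: a mouse click interpreted as a LOCATOR event at screen coordinates.
handle_event(InputEvent(InputClass.LOCATOR, (120, 340)))
```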
All of these actions that people use to interact with a computer visualization require a physical device of some sort, and input devices may allow people to perform more than one of these actions. Input devices range from the common – keyboards, mice, graphics tablets, trackballs, and touchpads – to the esoteric – wired gloves, boom arms, and even omnidirectional treadmills.
These input actions can be used to control either the information being represented or the way in which the information is presented. When the information being presented is altered, the visualization is usually part of a feedback loop. For example, consider an aircraft avionics system where the pilot inputs roll, pitch, and yaw and the visualization system provides a rendering of the aircraft's new attitude. Another example would be a scientist who changes a simulation while it is running in response to a visualization of its current progress. This is called computational steering.
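A computational steering loop can be sketched as follows, under simplified assumptions: the simulation, renderer, and input-polling functions are stand-ins for whatever a real system provides, and the "viscosity" parameter being steered is purely illustrative.

```python
import random
from typing import Optional

def step_simulation(state: dict) -> dict:
    """Advance a toy simulation by one step; 'viscosity' is an illustrative parameter."""
    state["value"] += state["viscosity"] * 0.1
    return state

def render(state: dict) -> None:
    """Stand-in for the visualization: print the current progress instead of drawing it."""
    print(f"t={state['t']:3d}  value={state['value']:7.3f}  viscosity={state['viscosity']:.2f}")

def poll_user_input() -> Optional[float]:
    """Stand-in for a GUI control; now and then 'the user' adjusts the parameter."""
    return round(random.uniform(0.1, 1.0), 2) if random.random() < 0.2 else None

state = {"t": 0, "value": 0.0, "viscosity": 0.5}
for t in range(20):
    state["t"] = t
    state = step_simulation(state)   # the computation keeps running...
    render(state)                    # ...its progress is visualized after every step...
    new_value = poll_user_input()
    if new_value is not None:        # ...and the user steers it without restarting.
        state["viscosity"] = new_value
```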
More frequently, the representation of the information is changed rather than the information itself. See the entries for scientific visualization and information visualization for more details on how information is represented visually.
Rapid response to human input
Experiments have shown that a delay of more than 20 ms between when input is provided and when the visual representation is updated is noticeable to most people. Thus it is desirable for an interactive visualization to provide a rendering based on human input within this time frame. However, when large amounts of data must be processed to create a visualization, this becomes hard or even impossible with current technology, so the term "interactive visualization" is usually applied to systems that provide feedback to users within several seconds of input. The term interactive framerate is often used to measure how interactive a visualization is. Framerates measure the frequency with which an image (a frame) can be generated by a visualization system: a framerate of 50 frames per second (fps) is considered good, while 0.1 fps would be considered poor. The use of framerates to characterize interactivity is slightly misleading, however, since framerate is a measure of throughput while humans are more sensitive to latency. Specifically, it is possible to achieve a framerate of 50 fps, but if the images being generated reflect changes that a person made more than a second earlier, the visualization will not feel interactive.
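The distinction between framerate (throughput) and latency can be illustrated with simple arithmetic: a deeply pipelined renderer can sustain a high framerate while still showing the user stale results. The pipeline depths used below are arbitrary illustrative figures, assuming each pipeline stage holds one frame.

```python
def display_latency(framerate_fps: float, pipeline_depth_frames: int) -> float:
    """Seconds between a user's input and the first displayed frame that reflects it,
    assuming each stage of the rendering pipeline holds one frame."""
    return pipeline_depth_frames / framerate_fps

# A renderer producing 50 fps with a 2-frame pipeline feels responsive (~40 ms of latency)...
print(display_latency(50.0, 2))   # 0.04 s
# ...while the same 50 fps with a 50-frame pipeline shows changes a full second late.
print(display_latency(50.0, 50))  # 1.0 s
```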
The rapid response time required for interactive visualization is a difficult constraint to meet, and several approaches have been explored to provide people with rapid visual feedback based on their input. Some of these include:
- Parallel rendering – where more than one computer or video card is used simultaneously to render an image. Multiple frames can be rendered at the same time by different computers and the results transferred over the network for display on a single monitor. This requires each computer to hold a copy of all the information to be rendered and increases throughput, but it also increases latency. Alternatively, each computer can render a different region of a single frame and send the results over a network for display. This again requires each computer to hold all of the data and can lead to a load imbalance when one computer is responsible for rendering a region of the screen with more information than the other computers. Finally, each computer can render an entire frame containing a subset of the information. The resulting images, plus the associated depth buffers, can then be sent across the network and merged with the images from the other computers. The result is a single frame containing all the information to be rendered, even though no single computer's memory held all of it. This is called parallel depth compositing and is used when large amounts of information must be rendered interactively (a sketch of the compositing step follows this list).
- Progressive rendering – where a framerate is guaranteed by rendering some subset of the information to be presented and providing incremental (progressive) improvements to the rendering once the visualization is no longer changing.
- Level-of-detail (LOD) rendering – where simplified representations of the information are rendered to achieve a desired framerate while a person is providing input, and the full representation is then used to generate a still image once the person has finished manipulating the visualization. One common variant of LOD rendering is subsampling. When the information being represented is stored in a topologically rectangular array (as is common with digital photos, MRI scans, and finite difference simulations), a lower-resolution version can easily be generated by skipping n points for each point rendered (see the sketch following this list). Subsampling can also be used to accelerate rendering techniques such as volume visualization, which require more than twice the computation for an image twice the size. By rendering a smaller image and then scaling it to fill the requested screen space, much less time is required to render the same data.
- Frameless rendering – where the visualization is no longer presented as a time series of images, but as a single image where different regions are updated over time.
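Under simplified assumptions, the following sketch illustrates two of the techniques above: subsampling a regular grid, and merging the full frames produced by different computers via parallel depth compositing. The array shapes, the use of NumPy, and the function names are illustrative choices, not a prescribed implementation.

```python
import numpy as np

def subsample(grid: np.ndarray, n: int) -> np.ndarray:
    """Keep every n-th point along each axis of a regular 2-D grid,
    giving a lower-resolution version that is cheaper to render."""
    return grid[::n, ::n]

def depth_composite(colors: list, depths: list) -> np.ndarray:
    """Merge full frames rendered by different computers: for each pixel, keep the
    color whose depth value is smallest (i.e., closest to the viewer)."""
    depth_stack = np.stack(depths)              # shape (num_computers, H, W)
    color_stack = np.stack(colors)              # shape (num_computers, H, W, 3)
    nearest = np.argmin(depth_stack, axis=0)    # which computer wins each pixel
    return np.take_along_axis(color_stack, nearest[None, ..., None], axis=0)[0]

# Toy example: two computers each rendered a 2x2 frame of a different subset of the data.
c1 = np.array([[[255, 0, 0]] * 2] * 2, dtype=np.uint8)   # red frame from computer 1
c2 = np.array([[[0, 0, 255]] * 2] * 2, dtype=np.uint8)   # blue frame from computer 2
d1 = np.array([[1.0, 5.0], [1.0, 5.0]])                  # depth buffer from computer 1
d2 = np.array([[3.0, 2.0], [3.0, 2.0]])                  # depth buffer from computer 2
print(depth_composite([c1, c2], [d1, d2]))  # red where d1 < d2, blue elsewhere
```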
External links
Interactive visualization programs
These links point to interactive visualization programs. Some are open source and freely available. Some are commercial.
- Bunkspeed
- DataTank
- EnSight
- InfoScope
- Lattice 3D
- MayaVi
- Opticore
- OsiriX
- ParaView
- RTT
- SpinFire
- Tecplot
- UGS Teamcenter Visualization
- VisIt
Interactive visualization libraries
These links point to software libraries that can be used by developers to create interactive visualization programs.
- Insight Segmentation and Registration Toolkit (ITK)
- OpenRM Scene Graph
- Visualization Tool Kit (VTK)
- UGS JT Open Toolkit
Interactive visualization research
These links point to academic research groups that publish papers in the area of interactive visualization as well as some conferences where these papers are presented.
- Delft University of Technology
- Ohio State University
- Stanford University
- Stony Brook University
- University of California, Davis
- University of California, Irvine
- University of North Carolina
- University of Texas at Austin
- University of Utah
- Zuse Institute Berlin
- ACM SIGCHI
- ACM SIGGRAPH
- ACM VRST
- Eurographics
- IEEE Visualization
- ACM Transactions on Graphics
- IEEE Transactions on Visualization and Computer Graphics