3D Interaction
3D interaction occurs when users move and interact in 3D space. Human-machine interaction requires that both the human and the machine receive and process information, and then present the results of that processing to each other. The human performs an action or issues a command to the machine in order to achieve a goal; the machine takes the information provided by the user, performs some processing, and presents the results back to the user.
Background
The early beginnings of 3D interaction can be traced back to 1962, when Morton Heilig invented the Sensorama simulator, which provided 3D video as well as motion, audio, and haptic feedback to produce a virtual environment. The next stage of development came in 1968, when Ivan Sutherland completed his pioneering head-mounted display, which produced a 3D virtual environment by presenting separate left- and right-eye still images of that environment.
Limited technology and prohibitive costs held back the development and application of virtual environments until the 1980s, when applications were largely confined to military ventures in the United States. Since then, further research and technological advances have opened doors to applications in other areas such as education, entertainment, and manufacturing.
3D interaction
In 3D interaction, users carry out their tasks and perform functions by exchanging information with computer systems in 3D space. It is an intuitive type of interaction because humans interact in three dimensions in the real world. The tasks that users perform have been classified as selection and manipulation of objects in virtual space, navigation, and system control. Tasks are performed in virtual space through interaction techniques and interaction devices. 3D interaction techniques are classified according to the task group they support: techniques that support navigation tasks are navigation techniques; techniques that support object selection and manipulation are selection and manipulation techniques; and system control techniques support tasks that control the application itself. A consistent and efficient mapping between techniques and interaction devices must be made for the system to be usable and effective.

Interfaces associated with 3D interaction are called 3D interfaces. Like other types of user interfaces, they involve two-way communication between user and system, but allow users to perform actions in 3D space. Input devices permit users to give directions and commands to the system, while output devices allow the machine to present information back to them.
3D interfaces have been used in applications featuring virtual environments and augmented and mixed realities. In virtual environments, users may interact with the environment directly or through tools with specific functionalities. 3D interaction occurs when physical tools are manipulated in a 3D spatial context to control a corresponding virtual tool.
Users experience a sense of presence when engaged in an immersive virtual world. Enabling users to interact with this world in 3D lets them draw on natural, intrinsic knowledge of how information is exchanged with physical objects in the real world. Texture, sound, and speech can all be used to augment 3D interaction. Users currently still have difficulty interpreting 3D visuals and understanding how interaction occurs. Although moving around a three-dimensional world is natural for humans, the difficulty exists because many of the cues present in real environments are missing from virtual environments. Perspective and occlusion are among the primary depth cues humans use. Also, even though scenes in virtual space appear three-dimensional, they are still displayed on a 2D surface, so some inconsistencies in depth perception remain.
3D user interfaces
User interfaces are the means of communication between users and systems. 3D interfaces include media for 3D representation of system state and media for 3D user input or manipulation. Using 3D representations alone is not enough to create 3D interaction; users must also have a way of performing actions in 3D. To that end, special input and output devices have been developed to support this type of interaction. Some, such as the 3D mouse, were developed from existing devices for 2D interaction.
Input devices
Input devices are instruments used to manipulate objects and send control instructions to the computer system. They vary in the degrees of freedom they offer and can be classified into standard input devices, trackers, control devices, navigation equipment, and gesture interfaces.
Standard input devices include keyboards, tablets and styli, joysticks, mice, touch screens, knobs, and trackballs.
Trackers detect or monitor head, hand, or body movements and send that information to the computer, which translates it and ensures that position and orientation are reflected accurately in the virtual world. Tracking is important for presenting the correct viewpoint and for coordinating the spatial and sound information presented to users, as well as the tasks or functions they can perform. 3D trackers may be mechanical, magnetic, ultrasonic, optical, or hybrid inertial. Examples of trackers include motion trackers, eye trackers, and data gloves.
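As an illustration of how tracker data drives the rendered viewpoint, the following minimal sketch (in Python with NumPy; the pose format, a position plus a unit quaternion, is an illustrative assumption rather than a standard from the sources below) converts a tracked head pose into a view matrix:

```python
# Minimal sketch: turn a tracked head pose into a view matrix.
# Assumption: the tracker reports position (x, y, z) in metres and
# orientation as a unit quaternion (w, x, y, z); these are illustrative choices.
import numpy as np

def quat_to_matrix(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def view_matrix(position, orientation):
    """Build the 4x4 world-to-eye matrix that makes the camera follow the head."""
    rot = quat_to_matrix(orientation)
    view = np.eye(4)
    view[:3, :3] = rot.T                          # inverse of the head rotation
    view[:3, 3] = -rot.T @ np.asarray(position)   # inverse of the head translation
    return view

# Identity orientation, head 1.7 m up and 2 m back: the world shifts the other way.
print(view_matrix((0.0, 1.7, 2.0), (1.0, 0.0, 0.0, 0.0)))
```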
A simple 2D mouse may be considered a navigation device if it allows the user to move to a different location in 3D space. Navigation devices such as treadmills and bicycles take advantage of the natural ways humans travel in the real world: treadmills simulate walking or running, and bicycles or similar equipment simulate vehicular travel. In the case of navigation devices, the information passed on to the machine is the user's location and movements in virtual space.
Wired gloves and bodysuits allow gestural interaction. They send hand or body position and movement information to the computer using sensors.
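A glove-based gesture can be as simple as a threshold on sensor data. The sketch below (Python; the fingertip-position data layout and the 2 cm threshold are assumptions made for illustration) detects a pinch gesture from reported thumb and index fingertip positions:

```python
# Minimal sketch: detect a pinch gesture from glove sensor data.
# Assumption: the glove reports fingertip positions as (x, y, z) points in metres.
import math

def is_pinch(thumb_tip, index_tip, threshold=0.02):
    """A pinch is reported when thumb and index tips are closer than ~2 cm."""
    return math.dist(thumb_tip, index_tip) < threshold

frame = {"thumb": (0.10, 0.02, -0.30), "index": (0.11, 0.02, -0.30)}
print(is_pinch(frame["thumb"], frame["index"]))  # True: the tips are 1 cm apart
```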
Output devices
Output devices allow the machine to provide information or feedback to the user. They include visual displays, auditory displays, and haptic displays. Visual displays provide feedback to users in 3D visual form. Head-mounted displays and CAVEs (Cave Automatic Virtual Environments) are examples of fully immersive visual displays, where the user can see only the virtual world and not the real world. Semi-immersive displays, such as monitors and workbenches, allow users to see both. Auditory displays provide information in auditory form, which is especially useful when supplying location and spatial information to users. Adding a background audio component to a display adds to the sense of realism. Haptic displays send tactile feedback, or the sense of touch, back to the user.
3D interaction techniques
Interaction techniques are methods used to execute different types of tasks in 3D space. They are classified according to the tasks they support.
Selection and manipulation
Users need to be able to manipulate virtual objects. Manipulation tasks involve selecting and moving an object, and sometimes rotating it as well. Direct-hand manipulation is the most natural technique because manipulating physical objects with the hand is intuitive for humans. However, this is not always possible; a virtual hand that can select and relocate virtual objects works as well. 3D widgets can be used to put controls on objects, which users can employ to relocate or reorient an object. Other techniques include the Go-Go technique and ray casting, where a virtual ray is used to point to and select an object.
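Ray casting can be illustrated with a short sketch. The version below (Python with NumPy; the bounding-sphere scene representation is an assumption made for brevity, not the technique's only form) fires a ray from the user's pointer and selects the nearest intersected object:

```python
# Minimal sketch: ray-casting selection against bounding spheres.
import numpy as np

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss."""
    oc = np.asarray(center, float) - np.asarray(origin, float)
    t = np.dot(oc, direction)            # closest approach along the ray
    d2 = np.dot(oc, oc) - t * t          # squared distance from center to ray
    if t < 0 or d2 > radius * radius:
        return None
    return t - np.sqrt(radius * radius - d2)

def pick(origin, direction, objects):
    """Select the nearest object whose bounding sphere the ray intersects."""
    direction = np.asarray(direction, float)
    direction = direction / np.linalg.norm(direction)
    hits = [(d, name) for name, (center, radius) in objects.items()
            if (d := ray_sphere_hit(origin, direction, center, radius)) is not None]
    return min(hits)[1] if hits else None

scene = {"cube": ((0, 0, -5), 1.0), "sphere": ((2, 0, -8), 0.5)}
print(pick((0, 0, 0), (0, 0, -1), scene))  # -> "cube"
```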
Navigation
The computer needs to provide the user with information regarding location and movement. Navigation tasks have two components: travel, which involves moving from the current location to the desired point, and wayfinding, which refers to finding and setting routes to reach a travel goal within the virtual environment.
- Wayfinding: Wayfinding in virtual space is different from, and more difficult than, wayfinding in the real world because synthetic environments often lack perceptual cues and movement constraints. It can be supported using user-centred techniques, such as a larger field of view and motion cues, or environment-centred techniques, such as structural organization and wayfinding principles.
- Travel: Good travel techniques allow the user to move through the environment easily. There are three types of travel tasks: exploration, search, and maneuvering. Travel techniques fall into the following five categories (a minimal steering sketch follows the list):
- Physical movement – user moves through the virtual world
- Manual viewpoint manipulation – use hand motions to achieve movement
- Steering – direction specification
- Target-based travel – destination specification
- Route planning – path specification
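As referenced above, a steering technique can be sketched in a few lines. In the version below (Python; the yaw/pitch gaze input and the -z forward convention are illustrative assumptions), the viewpoint advances along the gaze direction each frame:

```python
# Minimal sketch: gaze-directed steering travel.
import math

def steer(position, yaw, pitch, speed, dt):
    """Advance the viewpoint along the gaze direction (forward is -z)."""
    direction = (
        math.cos(pitch) * math.sin(yaw),    # x
        math.sin(pitch),                    # y
        -math.cos(pitch) * math.cos(yaw),   # z
    )
    return tuple(p + speed * dt * d for p, d in zip(position, direction))

pos = (0.0, 1.7, 0.0)                       # standing eye height, in metres
pos = steer(pos, yaw=0.0, pitch=0.0, speed=1.5, dt=0.016)
print(pos)                                  # moved ~2.4 cm straight ahead
```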
System control
Tasks that involve issuing commands to the application in order to change the system mode or activate some functionality fall under the category of system control. Techniques that support system control tasks in three dimensions are classified as follows (a minimal command-dispatch sketch appears after the list):
- Graphical menus
- Voice commands
- Gestural interaction
- Virtual tools with specific functions
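Whatever the modality, these techniques ultimately map a recognized command to an application function. The sketch below (Python; the command names and actions are hypothetical) shows such a dispatch table, which could be fed by a menu selection, a voice recognizer, or a gesture classifier alike:

```python
# Minimal sketch: system control as command dispatch (hypothetical commands).
def toggle_wireframe():
    print("wireframe mode toggled")

def save_scene():
    print("scene saved")

COMMANDS = {
    "toggle wireframe": toggle_wireframe,   # e.g. a menu item or spoken phrase
    "save scene": save_scene,
}

def dispatch(command: str):
    """Activate the application function bound to a recognized command."""
    action = COMMANDS.get(command.strip().lower())
    if action is None:
        print(f"unrecognized command: {command!r}")
    else:
        action()

dispatch("Save scene")   # -> scene saved
```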