Hands-on computing
Hands-on computing is a branch of human-computer interaction research that focuses on computer interfaces responding to human touch or expression, allowing the machine and the user to interact physically. Hands-on computing can make complicated computer tasks feel more natural to users by responding to motions and interactions that come naturally to human behavior. Hands-on computing is therefore a component of user-centered design, focusing on how users physically respond to virtual environments.
Implementations
- Keyboards
- Stylus Pens and Tablets
- Touchscreens[1]
- Human Signaling
Keyboards
Keyboards and typewriters are among the earliest hands-on computing devices. They are effective because users receive kinesthetic, tactile, auditory, and visual feedback. The QWERTY layout is one of the first keyboard designs, dating to 1878.[2] Newer designs such as the split keyboard increase typing comfort. Keyboards send input to the computer via keys; however, they do not allow the user direct interaction with the computer through touch or expression.
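Part of what makes a physical key reliable as an input device is that its electrical signal must be filtered before it becomes a keystroke: a closing switch "bounces" briefly, producing spurious readings. The sketch below is a minimal, illustrative debouncing routine (the function name and sample format are assumptions, not from any particular keyboard driver); it confirms a state change only after several consecutive identical samples.

```python
def debounce(samples, stable_count=3):
    """Turn a stream of raw switch readings into confirmed key events.

    A press or release is registered only after `stable_count`
    consecutive identical readings, filtering out the electrical
    bounce a physical switch produces as its contacts settle.
    Returns the list of confirmed state changes (True = pressed).
    """
    state = False       # last confirmed (debounced) state
    candidate = False   # state currently being confirmed
    run = 0             # consecutive samples matching `candidate`
    events = []
    for raw in samples:
        if raw == candidate:
            run += 1
        else:
            candidate = raw
            run = 1
        if run >= stable_count and candidate != state:
            state = candidate
            events.append(state)
    return events

# A bouncy press followed by a clean release yields one press
# event and one release event, not several.
print(debounce([False, True, False, True, True, True,
                False, False, False]))  # [True, False]
```

A real keyboard controller does this scanning in firmware across a key matrix, but the filtering principle is the same.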
Stylus Pens and Tablets
Tablets are touch-sensitive surfaces that detect the pressure applied by a stylus pen. Magnetic tablets sense changes in magnetic fields, while resistive tablets work by pressing together two resistive sheets. Tablets let users interact with computers through a stylus, yet they do not respond directly to a user's touch.
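In a resistive design, each sheet acts as a voltage divider when the two are pressed together: the voltage read on one sheet is proportional to where along the other sheet the contact occurred. A minimal sketch of that coordinate conversion, assuming hypothetical raw analog-to-digital (ADC) readings and panel dimensions:

```python
def resistive_position(adc_x, adc_y, adc_max=1023,
                       width=800, height=480):
    """Convert raw ADC readings from a pressed resistive panel
    into screen coordinates.

    Each reading is a fraction of the reference voltage, so the
    contact point is that same fraction of the panel dimension.
    (adc_max, width, and height are illustrative values.)
    """
    x = adc_x / adc_max * width
    y = adc_y / adc_max * height
    return x, y

print(resistive_position(0, 0))        # (0.0, 0.0) -- top-left corner
print(resistive_position(1023, 1023))  # (800.0, 480.0) -- bottom-right
```

Real controllers additionally switch which sheet is driven and which is sensed between the X and Y measurements, but the proportional mapping above is the core of the technique.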
Touchscreens
Touchscreens allow users to interact directly with computers by touching the screen with a finger. Pointing at objects to indicate a preference or selection is natural for humans, and touchscreens let users apply this natural action to computer interaction. Problems arise from inaccuracy: a user attempts to make a selection, but due to incorrect calibration the computer does not accurately register the touch.
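Calibration typically means fitting a mapping from the raw sensor coordinates to true screen coordinates, using points the user is asked to touch. As an illustrative sketch (the function and data are assumptions, not a specific driver's routine), a per-axis linear fit can be computed with ordinary least squares:

```python
def fit_axis_calibration(raw, actual):
    """Fit actual ~= gain * raw + offset for one touchscreen axis.

    `raw` holds sensor readings taken while the user touched known
    targets; `actual` holds those targets' true coordinates on the
    same axis. Ordinary least squares gives the gain and offset
    that best correct the sensor's scale and shift error.
    """
    n = len(raw)
    mean_r = sum(raw) / n
    mean_a = sum(actual) / n
    gain = (sum((r - mean_r) * (a - mean_a)
                for r, a in zip(raw, actual))
            / sum((r - mean_r) ** 2 for r in raw))
    offset = mean_a - gain * mean_r
    return gain, offset

# A sensor reading 100..900 for a screen axis spanning 0..800 has
# the right scale but is shifted by 100 counts:
gain, offset = fit_axis_calibration([100, 900], [0, 800])
print(gain, offset)  # 1.0 -100.0
```

When the panel is also rotated or skewed relative to the display, production drivers fit a full affine transform over both axes at once, but the idea is the same: more calibration touches, better correction.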
Human Signaling
New developments in hands-on computing have led to interfaces that respond to gestures and facial signaling. Often a haptic device such as a glove must be worn to translate the gesture into a recognizable command. The natural actions of pointing, grabbing, and tapping are common ways to interact with such interfaces. Recent studies use eye signals to indicate selection or to control the cursor; blinking and gaze direction are used to communicate selections. Computers can also respond to speech input: developments in this technology let users dictate phrases to the computer instead of typing them. Using human signal inputs allows more people to interact with computers, and to do so in a way that is natural to humans.
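One common way to turn gaze into a selection is dwell time: the system registers a "click" when the gaze stays fixed on a target long enough. A minimal sketch, assuming hypothetical gaze samples already converted to screen coordinates (the names and thresholds are illustrative):

```python
def dwell_select(gaze_points, target, radius=30.0, dwell_frames=60):
    """Return True if the gaze stays within `radius` pixels of
    `target` for `dwell_frames` consecutive samples.

    This is the dwell-time technique: fixation substitutes for a
    physical button press. Leaving the target region resets the
    count, so a passing glance does not trigger a selection.
    """
    tx, ty = target
    streak = 0
    for x, y in gaze_points:
        if (x - tx) ** 2 + (y - ty) ** 2 <= radius ** 2:
            streak += 1
            if streak >= dwell_frames:
                return True
        else:
            streak = 0
    return False

# A steady fixation selects; a glance that wanders away does not.
print(dwell_select([(100, 100)] * 60, target=(100, 100)))  # True
print(dwell_select([(100, 100)] * 30 + [(500, 500)] * 30,
                   target=(100, 100)))                     # False
```

The `radius` and `dwell_frames` values trade off speed against accidental selections (the "Midas touch" problem), and real systems tune them per user.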
Current Problems
Many problems with hands-on computing interfaces are still being addressed through continuing research and development. The main complication is creating a simple, user-friendly interface that can also be produced inexpensively and at scale. Because some interactions between human and machine are ambiguous, the mechanical response is not always what the user intended: different hand gestures and facial expressions can lead the computer to interpret one command when the user meant to convey another entirely. Resolving this ambiguity is currently one of the main focuses of research and development.
Researchers are also working to find the best way to design hands-on computing devices so that consumers can use them easily. Focusing on user-centered design while creating hands-on computing products helps developers build products that are effective and easy to use.
Research and Development
This new field has substantial room for contributions in research and product development. Hands-on computing requires scientists and engineers to adopt a different problem-solving strategy, one that considers devices for interaction rather than just input; interaction devices in terms of tool use; how interaction will mediate user performance; and the context in which the devices will be used.[2]
For a machine to be used successfully, people need to be able to transfer some of their existing skills to operating it. This can be done directly, by likening the interface to something known and familiar, or by helping the user draw new inferences through feedback. Users must be able to understand how to use and manipulate the interface in order to exploit its full capability. By applying their existing skills, users can operate the machine without learning new concepts and approaches.[3]
References
- "ThinSight". Microsoft Research. 19 November 2008. http://research.microsoft.com/cml/handsOn.aspx#ThinSight
- "Office XP Speaks Out". Microsoft PressPass. 18 April 2001. Microsoft. Retrieved 5 December 2008.
- http://research.microsoft.com/sendev/
- Baber, Christopher. Beyond the Desktop. Academic Press, 1997.
- Waern, Y. "Human Learning of Human-Computer Interaction: An Introduction." Cognitive Ergonomics: Understanding, Learning and Designing Human-Computer Interaction (1990): 69–84.