Gesture recognition
From Wikipedia, the free encyclopedia
Gesture recognition is a topic in computer science with the goal of interpreting human gestures via mathematical algorithms. Gestures can originate from any bodily motion or state but commonly originate from the face or hand. Current problems in the field include emotion recognition from the face and hand gesture recognition. Many initial approaches have used cameras and computer vision algorithms to interpret sign language.
Gesture recognition can be seen as a way for computers to begin to understand human body language, thus building a far richer bridge between machines and humans than primitive text interfaces or even GUIs (graphical user interfaces), which still limit the majority of input to keyboard and mouse.
Using two stereo video cameras together with a positioning reference such as a lexian-stripe or infrared emitters enables human–machine interaction (HMI) without any mechanical devices. It is possible to point a finger at the computer screen (or any other screen) and have the cursor move accordingly. This could potentially make conventional input devices such as mice, keyboards, and even touch-screens redundant. Such motion interactive media (MIM) is beginning to pave its way into commercial applications.
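As a rough illustration of the pointing idea above, the sketch below maps a tracked fingertip position to a cursor position on the screen. It assumes the fingertip has already been extracted from the camera images and expressed in normalized [0, 1] image coordinates; the function name, parameters, and screen resolution are illustrative assumptions, not part of any specific system.

```python
def fingertip_to_cursor(x_norm, y_norm, screen_w=1920, screen_h=1080):
    """Map a normalized fingertip position (0..1 on each axis) to
    screen pixel coordinates, clamping values that fall off-screen."""
    x_norm = min(max(x_norm, 0.0), 1.0)
    y_norm = min(max(y_norm, 0.0), 1.0)
    return int(x_norm * (screen_w - 1)), int(y_norm * (screen_h - 1))
```

A real system would additionally smooth the tracked position over time (for example with a moving average or Kalman filter) so that sensor noise does not make the cursor jitter.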
Gesture recognition is conducted with techniques from computer vision and image processing.
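One of the simplest image-processing techniques used as a first step in gesture recognition is frame differencing: comparing consecutive video frames to locate moving regions such as a waving hand. The sketch below is a minimal, self-contained illustration of this idea on grayscale frames represented as lists of lists of integers; the function name and threshold are assumptions for the example, not a standard API.

```python
def motion_centroid(prev_frame, curr_frame, threshold=30):
    """Frame differencing: return the (row, col) centroid of pixels
    whose intensity changed by more than `threshold` between two
    grayscale frames, or None when no motion is detected."""
    row_sum, col_sum, count = 0, 0, 0
    for r, (p_row, c_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(p_row, c_row)):
            if abs(q - p) > threshold:
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None
    return row_sum / count, col_sum / count
```

Tracking this centroid over successive frames yields a trajectory, which a classifier can then match against known gesture shapes.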