W3C MMI
The Multimodal Interaction Activity is a W3C initiative that aims to provide means, chiefly XML markup languages, for supporting multimodal interaction scenarios on the Web.
The activity was launched in 2002. The Multimodal Interaction Working Group has so far produced:
- the Multimodal Interaction Framework, which describes a general framework for multimodal interaction and the kinds of markup languages under consideration;
- a set of use cases;
- a set of Core Requirements, which describes the fundamental requirements that future specifications must address.
The devices considered include mobile phones, automotive telematics systems, and PCs connected to the Web.
Current Work
The following XML specifications, currently at an advanced Working Draft stage, address various parts of the Core Requirements:
- EMMA (Extensible MultiModal Annotation): a data exchange format for the interface between input processors and interaction management systems. It defines the means for recognizers to annotate application-specific data with information such as confidence scores, timestamps, input mode (e.g. keystrokes, speech, or pen), alternative recognition hypotheses, and partial recognition results; an illustrative annotation appears after this list.
- InkML (Ink Markup Language): an XML data exchange format for digital ink traces entered with an electronic pen or stylus as part of a multimodal system; a minimal document appears after this list.
- Multimodal Architecture: a loosely coupled architecture for the Multimodal Interaction Framework that focuses on providing a general means for components to communicate with each other, plus a basic infrastructure for application control and platform services; a sketch of a life-cycle event appears after this list.
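To illustrate EMMA, the sketch below shows what an annotated result for a spoken flight query might look like. The emma:* elements and attributes follow the EMMA Working Draft; the application payload (the origin/destination elements and the example.com namespace) is invented for illustration:

    <emma:emma version="1.0"
               xmlns:emma="http://www.w3.org/2003/04/emma"
               xmlns="http://example.com/flight-app">
      <!-- Two competing hypotheses for one spoken input; emma:confidence,
           emma:start/emma:end (timestamps in ms) and emma:mode annotate them -->
      <emma:one-of id="r1" emma:medium="acoustic" emma:mode="voice"
                   emma:start="1087995961542" emma:end="1087995963542">
        <emma:interpretation id="int1" emma:confidence="0.75"
                             emma:tokens="flights from boston to denver">
          <origin>Boston</origin>
          <destination>Denver</destination>
        </emma:interpretation>
        <emma:interpretation id="int2" emma:confidence="0.68"
                             emma:tokens="flights from austin to denver">
          <origin>Austin</origin>
          <destination>Denver</destination>
        </emma:interpretation>
      </emma:one-of>
    </emma:emma>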
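Similarly, a minimal InkML document records each pen-down-to-pen-up stroke as a trace of x y points; the coordinates below are made up for illustration:

    <ink xmlns="http://www.w3.org/2003/InkML">
      <!-- One stroke: successive "x y" points, comma-separated -->
      <trace>
        10 0, 9 14, 8 28, 7 42, 6 56, 6 70, 8 84, 8 98, 8 112
      </trace>
    </ink>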
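For the architecture, components are intended to exchange standardized life-cycle events. The sketch below is modeled on the event format of the later MMI Architecture specification; the exact element names, casing, the 2008 namespace, and all URIs are assumptions relative to the draft described here. It shows an interaction manager asking a modality component to start processing:

    <mmi:mmi xmlns:mmi="http://www.w3.org/2008/04/mmi-arch" version="1.0">
      <!-- Interaction manager -> modality component: begin processing
           the referenced content within context "ctx-1" -->
      <mmi:startRequest source="someIMURI" target="someMCURI"
                        context="ctx-1" requestID="req-1">
        <mmi:contentURL href="http://example.com/dialog.vxml"/>
      </mmi:startRequest>
    </mmi:mmi>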
See also
- Multimodal Interaction
- VoiceXML - the W3C's standard XML format for specifying interactive voice dialogues between a human and a computer.
- SSML - Speech Synthesis Markup Language
- CCXML - Call Control eXtensible Markup Language
- SCXML - State Chart XML, an XML language that provides a generic state-machine-based execution environment