Sensor fusion

Sensor fusion is the combining of sensory data, or data derived from sensory data, from disparate sources such that the resulting information is in some sense better than would be possible if these sources were used individually. "Better" in this case can mean more accurate, more complete, or more dependable, or can refer to the result of an emergent view, such as stereoscopic vision (the calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints).
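For the stereoscopic example, depth follows from standard pinhole stereo geometry for a rectified camera pair, Z = fB/d. The sketch below is a minimal illustration of that arithmetic; the function name and all numeric values (focal length, baseline, disparity) are illustrative assumptions, not values from the article.

```python
# Minimal sketch of depth-from-disparity for a rectified stereo pair.
# Standard pinhole stereo geometry: Z = f * B / d, where
#   f = focal length in pixels, B = baseline between the cameras (metres),
#   d = disparity in pixels (horizontal shift of a feature between images).
# All numeric values below are illustrative assumptions.

def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Return the depth (metres) of a point matched in both camera images."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a valid match")
    return focal_px * baseline_m / disparity_px

if __name__ == "__main__":
    # A feature shifted 35 px between the images of a 700 px focal-length rig
    # with a 12 cm baseline lies 2.4 m away.
    print(depth_from_disparity(focal_px=700.0, baseline_m=0.12, disparity_px=35.0))
```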

The data sources for a fusion process are not required to originate from identical sensors. One can distinguish direct fusion, indirect fusion, and fusion of the outputs of the former two. Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and historical values of sensor data, while indirect fusion uses information sources such as a priori knowledge about the environment and human input.
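As a minimal sketch of direct fusion of homogeneous sensors, consider two unbiased sensors measuring the same quantity with known, independent Gaussian noise: the inverse-variance weighted average is a common fused estimate, and its variance is lower than that of either input. The sensor readings and noise figures below are illustrative assumptions.

```python
# Minimal sketch of direct fusion: two homogeneous sensors measuring the same
# quantity, assumed unbiased with known independent Gaussian noise. The fused
# value is the inverse-variance weighted average; its variance is smaller
# than that of either sensor alone.

def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
    """Fuse two (measurement, variance) pairs into one estimate."""
    w1 = 1.0 / var1
    w2 = 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)  # always <= min(var1, var2)
    return fused, fused_var

if __name__ == "__main__":
    # Illustrative values: two range readings of the same target.
    estimate, variance = fuse(z1=2.10, var1=0.04, z2=2.30, var2=0.09)
    print(estimate, variance)  # ~2.162, ~0.0277
```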

Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.

Transducer Markup Language (TML) is an XML-based markup language which enables sensor fusion.

Examples of sensors

Typical sensors whose outputs are fused include accelerometers and gyroscopes (as in inertial measurement units), magnetometers, cameras, radar, lidar, sonar, and GPS receivers.

Sensor fusion algorithms

Sensor fusion is thus an overarching term for a number of methods and algorithms. To mention a few:

  • Central limit theorem
  • Kalman filter (see the sketch after this list)
  • Bayesian networks
  • Dempster–Shafer theory
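The Kalman filter, one of the most widely used of these, can be illustrated in a few lines. Below is a minimal one-dimensional sketch that fuses a constant-position motion model with noisy measurements; the noise parameters and readings are illustrative assumptions, not parameters from any particular system.

```python
# Minimal sketch of a 1-D Kalman filter: each step fuses a prediction from a
# simple constant-position motion model with an incoming noisy measurement.
# All noise parameters and measurements below are illustrative assumptions.

def kalman_1d(measurements, z_var, x0, p0, process_var):
    """Track a scalar state; return the filtered estimate after each step."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        # Predict: uncertainty grows by the process noise.
        p = p + process_var
        # Update: blend prediction and measurement via the Kalman gain.
        k = p / (p + z_var)          # gain in [0, 1]
        x = x + k * (z - x)          # move the estimate toward the measurement
        p = (1.0 - k) * p            # fused uncertainty shrinks
        estimates.append(x)
    return estimates

if __name__ == "__main__":
    noisy = [5.1, 4.8, 5.3, 4.9, 5.0]        # illustrative sensor readings
    print(kalman_1d(noisy, z_var=0.25, x0=0.0, p0=1.0, process_var=0.01))
```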

Levels

There are several categories or levels of sensor fusion that are commonly used.

  • Level 0 - Signal and Feature Assessment
  • Level 1 - Entity Assessment
  • Level 2 - Situation Assessment
  • Level 3 - Impact Assessment
  • Level 4 - Performance Assessment

These levels follow the JDL model; see Bowman and Steinberg, "Rethinking the JDL Data Fusion Levels".
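For illustration only, such a taxonomy can be carried in code, for example to tag which stage of a fusion pipeline produced a given output. The enum below is a hypothetical sketch, not part of any standard API.

```python
# Hypothetical sketch: the JDL fusion levels as a Python enum so pipeline
# outputs can be tagged with the level that produced them. Not a standard API.
from enum import IntEnum

class FusionLevel(IntEnum):
    SIGNAL_FEATURE = 0   # Level 0 - Signal and Feature Assessment
    ENTITY = 1           # Level 1 - Entity Assessment
    SITUATION = 2        # Level 2 - Situation Assessment
    IMPACT = 3           # Level 3 - Impact Assessment
    PERFORMANCE = 4      # Level 4 - Performance Assessment

if __name__ == "__main__":
    print(FusionLevel.SITUATION.name, FusionLevel.SITUATION.value)  # SITUATION 2
```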
