Multiple-instance learning

Multiple-instance learning is a variation on supervised learning in which the task is to learn a concept from data consisting of a collection of labeled bags, each labeled as positive or negative and each described as a set of feature vectors, called instances. A bag is positive if at least one of its instances lies within the intended concept, and negative if none of them does; the learner must produce an accurate description of the concept from this information.
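
This labeling rule can be sketched in a few lines of Python. The sketch below assumes, purely for illustration, that the target concept is an axis-parallel box in the plane (in the spirit of the axis-parallel rectangles of Dietterich, Lathrop & Lozano-Pérez 1997); the names in_concept and bag_label are hypothetical.

    import numpy as np

    def in_concept(x, low=np.array([0.0, 0.0]), high=np.array([1.0, 1.0])):
        # Hypothetical target concept: an axis-parallel box in the plane.
        return bool(np.all((x >= low) & (x <= high)))

    def bag_label(bag):
        # A bag is positive iff at least one of its instances lies in the concept.
        return bool(any(in_concept(x) for x in bag))

    bag = np.array([[2.0, 3.0], [0.5, 0.5], [-1.0, 4.0]])
    print(bag_label(bag))  # True: the second instance falls inside the box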

Multiple-instance learning was originally proposed under this name by Dietterich, Lathrop & Lozano-Pérez (1997), but earlier examples of similar research exist, for instance in the work on handwritten digit recognition by Keeler, Rumelhart & Leow (1990).

Numerous researchers have worked on adapting classical classification techniques, such as support vector machines or boosting, to work within the context of multiple-instance learning.
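
One simple baseline adaptation, sketched below, propagates each bag's label to all of its instances, trains an ordinary single-instance SVM, and declares a bag positive when its highest-scoring instance is, mirroring the "at least one positive instance" assumption. This is only an illustrative sketch assuming scikit-learn's SVC, not any of the specific algorithms proposed in the literature, and the helper names fit_naive_mil_svm and predict_bag are hypothetical.

    import numpy as np
    from sklearn.svm import SVC

    def fit_naive_mil_svm(bags, bag_labels):
        # Naive baseline: give every instance its bag's label and train
        # an ordinary single-instance SVM on the flattened data.
        X = np.vstack(bags)
        y = np.concatenate([[label] * len(bag) for bag, label in zip(bags, bag_labels)])
        return SVC(kernel="rbf").fit(X, y)

    def predict_bag(clf, bag):
        # Score every instance and call the bag positive if its best
        # instance is, mirroring the "at least one positive" assumption.
        return int(clf.decision_function(bag).max() > 0)

    # Toy data: positive bags hide one instance drawn near (1, 1).
    rng = np.random.default_rng(0)
    neg_bags = [rng.normal(-1.0, 0.3, size=(5, 2)) for _ in range(10)]
    pos_bags = [np.vstack([rng.normal(-1.0, 0.3, size=(4, 2)),
                           rng.normal(1.0, 0.3, size=(1, 2))]) for _ in range(10)]
    bags, labels = neg_bags + pos_bags, [0] * 10 + [1] * 10

    clf = fit_naive_mil_svm(bags, labels)
    print([predict_bag(clf, b) for b in bags])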

References

  • Dietterich, Thomas G.; Lathrop, Richard H. & Lozano-Pérez, Tomás (1997), "Solving the multiple instance problem with axis-parallel rectangles", Artificial Intelligence 89 (1–2): 31–71.
  • Keeler, James D.; Rumelhart, David E. & Leow, Wee-Kheng (1990), "Integrated segmentation and recognition of hand-printed numerals", Advances in Neural Information Processing Systems 3, pp. 557–563.