Multiple-instance learning
Multiple-instance learning is a variation on supervised learning in which the task is to learn a concept from data consisting of a sequence of labeled examples, each described not by a single feature vector but by a set (or "bag") of vectors. An example is labeled positive if at least one of the vectors in its set lies within the intended concept, and negative if none of them does; the task is to learn an accurate description of the concept from this information.
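The labeling rule can be made concrete with a small sketch. The following Python fragment uses an axis-parallel rectangle as the target concept, in the spirit of the formulation studied by Dietterich, Lathrop & Lozano-Pérez; the rectangle bounds and the example bags are chosen purely for illustration.

```python
import numpy as np

# Illustrative concept: an axis-parallel rectangle in 2-D feature space
# (bounds are assumptions made up for this example).
LOWER = np.array([0.2, 0.3])
UPPER = np.array([0.6, 0.8])

def in_concept(vector):
    """True if a single feature vector lies inside the target rectangle."""
    return bool(np.all(vector >= LOWER) and np.all(vector <= UPPER))

def bag_label(bag):
    """A bag is positive if at least one of its vectors lies in the concept."""
    return any(in_concept(v) for v in bag)

# One positive and one negative bag of 2-D feature vectors.
positive_bag = np.array([[0.1, 0.1], [0.4, 0.5]])   # second vector is inside
negative_bag = np.array([[0.9, 0.9], [0.1, 0.95]])  # no vector is inside

print(bag_label(positive_bag))  # True
print(bag_label(negative_bag))  # False
```

Note that the learner never sees which vector inside a positive bag is the "witness" for the concept; it only observes the bag-level labels.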
Multiple-instance learning was originally proposed under this name by Dietterich, Lathrop & Lozano-Pérez (1997), but earlier examples of similar research exist, for instance in the work on handwritten digit recognition by Keeler, Rumelhart & Leow (1990).
Numerous researchers have worked on adapting classical classification techniques, such as support vector machines or boosting, to work within the context of multiple-instance learning.
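As one illustration of how a standard single-instance classifier can be pressed into service, the sketch below trains an ordinary support vector machine on individual vectors labeled with their bag's label and then scores a bag by its highest-scoring vector, mirroring the "at least one" rule. This is only a naive baseline reduction, not any of the specialised multiple-instance SVM or boosting algorithms from the literature; the toy data, the planted concept, and the use of scikit-learn are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Toy bags: each bag is an array of 2-D vectors; a positive bag gets one
# "witness" vector planted in the upper-right region (illustrative concept).
def make_bag(positive):
    bag = rng.uniform(0.0, 0.5, size=(5, 2))
    if positive:
        bag[0] = rng.uniform(0.6, 1.0, size=2)
    return bag

bags = [make_bag(i % 2 == 0) for i in range(40)]
bag_labels = np.array([1 if i % 2 == 0 else 0 for i in range(40)])

# Naive reduction: give every vector its bag's label, train a standard SVM.
X = np.vstack(bags)
y = np.repeat(bag_labels, [len(b) for b in bags])
clf = SVC().fit(X, y)

# Score a bag by its highest-scoring vector (the "at least one" rule).
def predict_bag(bag):
    return int(clf.decision_function(bag).max() > 0)

preds = np.array([predict_bag(b) for b in bags])
print("training-bag accuracy:", (preds == bag_labels).mean())
```

The dedicated multiple-instance methods in the literature refine this idea in various ways, for example by reasoning explicitly about which vector in a positive bag is responsible for its label rather than treating every vector in the bag as positive.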
References
- Dietterich, Thomas G.; Lathrop, Richard H. & Lozano-Pérez, Tomás (1997), "Solving the multiple instance problem with axis-parallel rectangles", Artificial Intelligence 89 (1–2): 31–71, doi:10.1016/S0004-3702(96)00034-3.
- Keeler, James D.; Rumelhart, David E. & Leow, Wee-Kheng (1990), Integrated segmentation and recognition of hand-printed numerals, pp. 557–563.