Multiple-instance learning as a classifier combining problem
Research output: Contribution to journal › Journal article › Research › peer-review
Multiple-instance learning as a classifier combining problem. / Li, Yan; Tax, David M. J.; Duin, Robert P. W.; Loog, Marco.
In: Pattern Recognition, Vol. 46, No. 3, 2013, p. 865-874.
RIS
TY - JOUR
T1 - Multiple-instance learning as a classifier combining problem
AU - Li, Yan
AU - Tax, David M. J.
AU - Duin, Robert P. W.
AU - Loog, Marco
PY - 2013
Y1 - 2013
N2 - In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification posteriors. Given the instance labels, the label of a bag can be obtained as a classifier combining problem. An optimal decision rule is derived that determines the threshold on the fraction of instances in a bag that is assigned to the concept class. We provide estimators for the two parameters in the model. The method is tested on a toy data set and various benchmark data sets, and shown to provide results comparable to state-of-the-art MIL methods. (C) 2012 Elsevier Ltd. All rights reserved.
AB - In multiple-instance learning (MIL), an object is represented as a bag consisting of a set of feature vectors called instances. In the training set, the labels of bags are given, while the uncertainty comes from the unknown labels of instances in the bags. In this paper, we study MIL with the assumption that instances are drawn from a mixture distribution of the concept and the non-concept, which leads to a convenient way to solve MIL as a classifier combining problem. It is shown that instances can be classified with any standard supervised classifier by re-weighting the classification posteriors. Given the instance labels, the label of a bag can be obtained as a classifier combining problem. An optimal decision rule is derived that determines the threshold on the fraction of instances in a bag that is assigned to the concept class. We provide estimators for the two parameters in the model. The method is tested on a toy data set and various benchmark data sets, and shown to provide results comparable to state-of-the-art MIL methods. (C) 2012 Elsevier Ltd. All rights reserved.
KW - Multiple instance learning
KW - Classifier combining
U2 - 10.1016/j.patcog.2012.08.018
DO - 10.1016/j.patcog.2012.08.018
M3 - Journal article
VL - 46
SP - 865
EP - 874
JO - Pattern Recognition
JF - Pattern Recognition
SN - 0031-3203
IS - 3
ER -
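The abstract's bag-labeling scheme can be sketched in a few lines: an instance-level classifier assigns each instance to the concept or non-concept class, and a bag is labeled positive when the fraction of concept instances exceeds a threshold. The sketch below is a minimal illustration of that combining rule only; the stand-in instance classifier (a simple 1-D boundary) and the threshold value `tau` are hypothetical, and the paper's posterior re-weighting and parameter estimators are not reproduced here.

```python
def instance_is_concept(x, boundary=0.5):
    """Stand-in instance classifier: assign to the concept class
    when the (single) feature exceeds a fixed boundary."""
    return x > boundary

def classify_bag(bag, tau=0.3, boundary=0.5):
    """Combining rule from the abstract: label the bag positive when
    the fraction of its instances assigned to the concept class
    exceeds the threshold tau."""
    n_concept = sum(instance_is_concept(x, boundary) for x in bag)
    return n_concept / len(bag) > tau

# Toy bags of 1-D instances:
positive_bag = [0.9, 0.8, 0.1, 0.2]   # half the instances look like concept
negative_bag = [0.1, 0.2, 0.3, 0.4]   # no instance crosses the boundary

print(classify_bag(positive_bag))  # True  (fraction 0.5 > tau)
print(classify_bag(negative_bag))  # False (fraction 0.0 <= tau)
```

In the paper, the instance classifier is any standard supervised classifier with re-weighted posteriors, and the threshold on the fraction is derived as an optimal decision rule rather than fixed by hand as above.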