TY - JOUR
T1 - Localized multiple kernel learning via sample-wise alternating optimization
AU - Han, Yina
AU - Yang, Kunde
AU - Ma, Yuanliang
AU - Liu, Guizhong
PY - 2014/1
Y1 - 2014/1
N2 - Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL) using alternating optimization between the standard SVM solvers, with the local combination of base kernels, and the sample-specific kernel weights. The advantage of alternating optimization, developed from state-of-the-art MKL, is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes updating the kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be obtained independently by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or in closed form (for the lp-norm). At test time, the kernel weights learnt for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality on the test set, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
AB - Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL) using alternating optimization between the standard SVM solvers, with the local combination of base kernels, and the sample-specific kernel weights. The advantage of alternating optimization, developed from state-of-the-art MKL, is the SVM-tied overall complexity and the simultaneous optimization of both the kernel weights and the classifier. Unfortunately, in LMKL, the sample-specific character makes updating the kernel weights a difficult quadratic nonconvex problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be obtained independently by solving their exclusive sample-wise objectives, either by linear programming (for the l1-norm) or in closed form (for the lp-norm). At test time, the kernel weights learnt for the training data are deployed based on the nearest-neighbor rule. Hence, to guarantee their generality on the test set, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
KW - Local learning
KW - multiple kernel learning
KW - support vector machine
UR - http://www.scopus.com/inward/record.url?scp=84891130319&partnerID=8YFLogxK
U2 - 10.1109/TCYB.2013.2248710
DO - 10.1109/TCYB.2013.2248710
M3 - Article
AN - SCOPUS:84891130319
SN - 2168-2267
VL - 44
SP - 137
EP - 148
JO - IEEE Transactions on Cybernetics
JF - IEEE Transactions on Cybernetics
IS - 1
M1 - 6485021
ER -