Localized multiple kernel learning via sample-wise alternating optimization

Yina Han, Kunde Yang, Yuanliang Ma, Guizhong Liu

Research output: Contribution to journal › Article › peer-review

42 Citations (Scopus)

Abstract

Our objective is to train support vector machine (SVM)-based localized multiple kernel learning (LMKL) by alternating optimization between a standard SVM solver, applied to the locally combined base kernels, and updates of the sample-specific kernel weights. Inherited from state-of-the-art MKL, alternating optimization offers an overall complexity tied to that of the SVM solver and optimizes the kernel weights and the classifier simultaneously. Unfortunately, in LMKL, the sample-specific character of the weights makes their update a difficult nonconvex quadratic problem. In this paper, starting from a new primal-dual equivalence, the canonical objective on which state-of-the-art methods are based is first decomposed into an ensemble of objectives corresponding to each sample, namely, sample-wise objectives. Then, the associated sample-wise alternating optimization method is conducted, in which the localized kernel weights can be obtained independently by solving their exclusive sample-wise objectives, either by linear programming (for the $\ell_1$-norm) or in closed form (for the $\ell_p$-norm). At test time, the kernel weights learnt on the training data are deployed via the nearest-neighbor rule. Hence, to guarantee that they generalize to the test data, we introduce neighborhood information and incorporate it into the empirical loss when deriving the sample-wise objectives. Extensive experiments on four benchmark machine learning datasets and two real-world computer vision datasets demonstrate the effectiveness and efficiency of the proposed algorithm.
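The alternating scheme outlined in the abstract lends itself to a compact illustration. Below is a minimal sketch, assuming RBF base kernels, labels in {-1, +1}, and the standard gated combination k(x_i, x_j) = Σ_m β_{im} k_m(x_i, x_j) β_{jm}; a projected-gradient weight update stands in for the paper's linear-programming ($\ell_1$) and closed-form ($\ell_p$) sample-wise solutions, which are not reproduced here, and scikit-learn's precomputed-kernel SVC plays the role of the standard SVM solver.

```python
# Minimal sketch of sample-wise alternating optimization for LMKL.
# Assumptions (not from the paper): RBF base kernels, labels y in {-1, +1},
# and a projected-gradient update for the per-sample weights in place of the
# paper's linear-programming (l1) / closed-form (lp) sample-wise solutions.
import numpy as np
from sklearn.svm import SVC

def base_kernels(X, gammas=(0.1, 1.0, 10.0)):
    """Stack of RBF base kernels K_m, shape (M, n, n); gammas are a guess."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.stack([np.exp(-g * d2) for g in gammas])

def combined_kernel(K, beta):
    """Locally combined kernel: k(i,j) = sum_m beta[i,m] K_m[i,j] beta[j,m]."""
    return np.einsum('im,mij,jm->ij', beta, K, beta)

def lmkl_train(X, y, p=2.0, C=1.0, n_iter=20, lr=0.05):
    K = base_kernels(X)                      # (M, n, n)
    M, n, _ = K.shape
    beta = np.full((n, M), M ** (-1.0 / p))  # uniform start, ||beta_i||_p = 1
    for _ in range(n_iter):
        # Step 1: fix beta, solve a standard SVM on the combined kernel.
        svm = SVC(C=C, kernel='precomputed').fit(combined_kernel(K, beta), y)
        alpha = np.zeros(n)
        alpha[svm.support_] = np.abs(svm.dual_coef_.ravel())
        ay = alpha * y                       # signed dual variables alpha_i y_i
        # Step 2: fix the SVM; each row beta[i] has its own sample-wise
        # objective. Gradient of the SVM dual value w.r.t. beta[i, m]:
        #   g[i, m] = -ay[i] * sum_j ay[j] K_m[i, j] beta[j, m]
        g = -np.einsum('i,mij,j,jm->im', ay, K, ay, beta)
        beta = np.maximum(beta - lr * g, 1e-8)   # descend, stay positive
        beta /= np.linalg.norm(beta, ord=p, axis=1, keepdims=True)  # lp ball
    return svm, beta, K

# Toy usage on synthetic two-class data.
if __name__ == '__main__':
    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 2))
    y = np.where(X[:, 0] * X[:, 1] > 0, 1, -1)
    svm, beta, K = lmkl_train(X, y)
    print(f'training accuracy: {svm.score(combined_kernel(K, beta), y):.2f}')
```

At test time, per the abstract's nearest-neighbor rule, each test point would inherit the weights of its nearest training sample, so the test-train kernel becomes k(x, x_i) = Σ_m β_{nn(x),m} k_m(x, x_i) β_{i,m}; this prediction step is omitted from the sketch for brevity.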

Original language: English
Article number: 6485021
Pages (from-to): 137-148
Number of pages: 12
Journal: IEEE Transactions on Cybernetics
Volume: 44
Issue number: 1
Publication status: Published - Jan. 2014
