Matrix-Regularized Multiple Kernel Learning via (r, p) Norms

Yina Han, Yixin Yang, Xuelong Li, Qingyu Liu, Yuanliang Ma

Research output: Contribution to journal › Article › peer-review

23 Scopus citations

Abstract

This paper examines a matrix-regularized multiple kernel learning (MKL) technique based on a notion of (r, p) norms. For the problem of learning a linear combination in the support vector machine-based framework, model complexity is typically controlled using various regularization strategies on the combined kernel weights. Recent research has developed a generalized ℓp-norm MKL framework with a tunable variable p (p ≥ 1) to support controlled intrinsic sparsity. Unfortunately, this "1-D" vector ℓp-norm hardly exploits potentially useful information on how the base kernels "interact." To allow for higher order kernel-pair relationships, we extend the "1-D" vector ℓp-MKL to the "2-D" matrix (r, p) norms (1 ≤ r, p < ∞). We develop a new formulation and an efficient optimization strategy for (r, p)-MKL with guaranteed convergence. A theoretical analysis and experiments on seven UCI data sets shed light on the superiority of (r, p)-MKL over ℓp-MKL in various scenarios.
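
In the standard mixed-norm convention, the (r, p) norm of a matrix is obtained by taking the ℓr norm of each row and then the ℓp norm of the resulting vector of row norms; the (2, 2) case recovers the Frobenius norm. Below is a minimal NumPy sketch of this definition. The function name and the example matrix are illustrative assumptions, not the paper's code, and the exact arrangement of kernel-pair weights into a matrix follows the paper's formulation rather than this sketch.

```python
import numpy as np

def mixed_rp_norm(W, r, p):
    """Mixed (r, p) norm of a matrix W, for 1 <= r, p < inf:
    the l_r norm of each row, followed by the l_p norm of the
    vector of row norms (standard mixed-norm convention)."""
    row_norms = np.sum(np.abs(W) ** r, axis=1) ** (1.0 / r)
    return np.sum(row_norms ** p) ** (1.0 / p)

# Sanity check (illustrative matrix): (r, p) = (2, 2) coincides
# with the Frobenius norm.
W = np.array([[1.0, -2.0], [3.0, 4.0]])
print(np.isclose(mixed_rp_norm(W, 2, 2), np.linalg.norm(W)))  # True
```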

Original language: English
Article number: 8259375
Pages (from-to): 4997-5007
Number of pages: 11
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 29
Issue number: 10
DOIs
State: Published - Oct 2018
Externally published: Yes

Keywords

  • Generalization bound
  • matrix regularization
  • multiple kernel learning (MKL)
  • support vector machine (SVM)
