Abstract
This paper examines a matrix-regularized multiple kernel learning (MKL) technique based on a notion of (r, p) norms. For the problem of learning a linear combination of base kernels within the support vector machine (SVM)-based framework, model complexity is typically controlled by applying a regularization strategy to the combined kernel weights. Recent research has developed a generalized ℓp-norm MKL framework with a tunable parameter p (p ≥ 1) to support controlled intrinsic sparsity. Unfortunately, this "1-D" vector ℓp-norm hardly exploits potentially useful information on how the base kernels "interact." To allow for higher-order kernel-pair relationships, we extend the "1-D" vector ℓp-MKL to the "2-D" matrix (r, p) norms (1 ≤ r, p < ∞). We develop a new formulation and an efficient optimization strategy for (r, p)-MKL with guaranteed convergence. A theoretical analysis and experiments on seven UCI data sets shed light on the superiority of (r, p)-MKL over ℓp-MKL in various scenarios.
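The abstract does not reproduce the formulation itself; as a minimal illustrative sketch, one may assume the standard mixed (r, p) matrix norm, i.e. an ℓr norm taken over each row followed by an ℓp norm over the resulting row norms, applied to a matrix of pairwise kernel-weight interactions. The function names `mixed_rp_norm` and `combined_kernel`, the toy Gram matrices, and the choice of an outer-product interaction matrix below are hypothetical and for illustration only, not the paper's exact (r, p)-MKL formulation or optimization procedure.

```python
import numpy as np

def mixed_rp_norm(theta, r=2.0, p=1.0):
    """Mixed (r, p) norm of a matrix theta (assumed definition):
    (sum_i (sum_j |theta_ij|^r)^(p/r))^(1/p),
    i.e. an ell_r norm per row, then an ell_p norm across rows."""
    row_norms = np.sum(np.abs(theta) ** r, axis=1) ** (1.0 / r)
    return np.sum(row_norms ** p) ** (1.0 / p)

def combined_kernel(base_kernels, weights):
    """Weighted sum of base Gram matrices: K = sum_m weights[m] * K_m."""
    return sum(w * K for w, K in zip(weights, base_kernels))

# Toy example: three base kernels on five points, nonnegative kernel weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))
linear_K = X @ X.T
rbf_K = np.exp(-0.5 * np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1))
const_K = np.ones((5, 5))
weights = np.array([0.5, 0.3, 0.2])

K = combined_kernel([linear_K, rbf_K, const_K], weights)

# A hypothetical pairwise-interaction matrix (outer product of the weight
# vector) penalized with the (r, p) norm; with r = p this collapses to a
# vector-style ell_p penalty on the weights.
Theta = np.outer(weights, weights)
print(mixed_rp_norm(Theta, r=2.0, p=1.0))
```

Varying r and p changes how strongly within-row versus across-row sparsity of the interaction matrix is encouraged, which is the kind of kernel-pair structure the vector ℓp penalty cannot express.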
| Original language | English |
| --- | --- |
| Article number | 8259375 |
| Pages (from-to) | 4997-5007 |
| Number of pages | 11 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 29 |
| Issue number | 10 |
| DOIs | |
| State | Published - Oct 2018 |
| Externally published | Yes |
Keywords
- Generalization bound
- matrix regularization
- multiple kernel learning (MKL)
- support vector machine (SVM)