
Multitask Learning for Classification Problem via New Tight Relaxation of Rank Minimization

  • Northwestern Polytechnical University, Xi'an
  • Zhejiang University

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Multitask learning (MTL) is a joint learning paradigm that fuses multiple related tasks together to achieve better performance than single-task learning methods. Many researchers have observed that different tasks with certain similarities share a low-dimensional common yet latent subspace. To obtain the low-rank structure shared across tasks, the trace norm has been used as a convex relaxation of the rank minimization problem. However, the trace norm is not a tight approximation of the rank function. To address this important issue, we propose two novel regularization-based models that approximate the rank minimization problem by minimizing the k minimal singular values. In our new models, if the minimal singular values are suppressed to zeros, the rank is also reduced. Compared with the standard trace norm, our new regularization-based models are tighter approximations, which helps them capture the low-dimensional subspace shared among multiple tasks. Moreover, directly solving the exact rank minimization problem for our models is NP-hard. In this article, we propose two simple but effective strategies to optimize our models, which tactically solve the exact rank minimization problem by setting a large penalizing parameter. Experimental results on synthetic and real-world benchmark datasets demonstrate that the proposed models can learn the low-rank structure shared across tasks and achieve better performance than other classical MTL methods.
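As a rough illustration of the regularizer described in the abstract (a minimal sketch, not the authors' implementation), the Python snippet below contrasts the trace norm, which sums all singular values, with the sum of the k smallest singular values of a task-weight matrix W with d features and T tasks; the function names, the shapes, and the choice of k are assumptions made only for this example.

import numpy as np

def trace_norm(W):
    # Standard convex relaxation of rank: sum of all singular values.
    return np.linalg.svd(W, compute_uv=False).sum()

def k_minimal_singular_values(W, k):
    # Tighter surrogate described in the abstract: sum of the k smallest
    # singular values. Driving these to zero reduces rank(W) to at most
    # min(d, T) - k without penalizing the large singular values.
    s = np.linalg.svd(W, compute_uv=False)  # returned in descending order
    return s[-k:].sum()

# Hypothetical task-weight matrix (d = 50 features, T = 10 tasks),
# built to be nearly rank-2 for illustration.
rng = np.random.default_rng(0)
W = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 10)) \
    + 1e-3 * rng.standard_normal((50, 10))
print(trace_norm(W))                      # large: dominated by the top singular values
print(k_minimal_singular_values(W, k=8))  # near zero for a nearly rank-2 matrix

On such a nearly rank-2 matrix the trace norm stays large because the two dominant singular values are included, whereas the k-minimal-singular-value surrogate is close to zero, which is the sense in which it tracks the rank more tightly.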

Original language: English
Pages (from-to): 6055-6068
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 9
DOI
Publication status: Published - 1 Sept 2023
