TY - JOUR
T1 - Multi-task learning via sharing inexact low-rank subspace
AU - Wang, Xiaoqian
AU - Nie, Feiping
N1 - Publisher Copyright:
© 2021 IEEE
PY - 2021
Y1 - 2021
N2 - Multi-task learning algorithms enhance learning performance by exploiting the relations among multiple tasks. By pooling data from different yet related tasks, tasks can benefit from each other in this joint learning mechanism. In this paper, we study the relations among multiple tasks by learning their shared common subspace. Previous works usually constrain the shared subspace to be low-rank, since the tasks are assumed to be intrinsically related. However, this constraint is too strict for real applications when noise exists. Instead, we propose to detect an inexact low-rank subspace, which approximates the low-rank subspace. This makes the learned multi-task parameter matrix more robust in the presence of noise. We use an alternating optimization algorithm to optimize our new objective, yielding an algorithm with the same time complexity as single-task learning. We provide extensive empirical results on both synthetic and benchmark datasets to illustrate the superiority of our method over related multi-task learning methods. Our method shows clear robustness under a high proportion of noise. Moreover, it offers a major advantage when few training data are available, which is important in practical use, especially when acquiring more data involves arduous work.
AB - Multi-task learning algorithms enhance learning performance by exploiting the relations among multiple tasks. By pooling data from different yet related tasks, tasks can benefit from each other in this joint learning mechanism. In this paper, we study the relations among multiple tasks by learning their shared common subspace. Previous works usually constrain the shared subspace to be low-rank, since the tasks are assumed to be intrinsically related. However, this constraint is too strict for real applications when noise exists. Instead, we propose to detect an inexact low-rank subspace, which approximates the low-rank subspace. This makes the learned multi-task parameter matrix more robust in the presence of noise. We use an alternating optimization algorithm to optimize our new objective, yielding an algorithm with the same time complexity as single-task learning. We provide extensive empirical results on both synthetic and benchmark datasets to illustrate the superiority of our method over related multi-task learning methods. Our method shows clear robustness under a high proportion of noise. Moreover, it offers a major advantage when few training data are available, which is important in practical use, especially when acquiring more data involves arduous work.
KW - Low-rank subspace
KW - Multi-task learning
KW - Robust model
UR - http://www.scopus.com/inward/record.url?scp=85115065074&partnerID=8YFLogxK
U2 - 10.1109/ICASSP39728.2021.9414782
DO - 10.1109/ICASSP39728.2021.9414782
M3 - Conference article
AN - SCOPUS:85115065074
SN - 1520-6149
VL - 2021-June
SP - 3690
EP - 3694
JO - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
JF - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
T2 - 2021 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2021
Y2 - 6 June 2021 through 11 June 2021
ER -