TY - GEN
T1 - Calibrated multi-task learning
AU - Nie, Feiping
AU - Hu, Zhanxuan
AU - Li, Xuelong
N1 - Publisher Copyright:
© 2018 Association for Computing Machinery.
PY - 2018/7/19
Y1 - 2018/7/19
N2 - This paper proposes a novel algorithm, named Non-Convex Calibrated Multi-Task Learning (NC-CMTL), for learning multiple related regression tasks jointly. Instead of utilizing the nuclear norm, NC-CMTL adopts a non-convex low-rank regularizer to explore the shared information among different tasks. In addition, considering that the regularization parameter for each regression task depends on its noise level, we replace the least-squares loss function with a square-root loss function. Computationally, as the proposed model has a non-smooth loss function and a non-convex regularization term, we construct an efficient re-weighted method to optimize it. Theoretically, we first present the convergence analysis of the constructed method and then prove that the derived solution is a stationary point of the original problem. In particular, the regularizer and optimization method used in this paper are also suitable for other rank minimization problems. Numerical experiments on both synthetic and real data illustrate the advantages of NC-CMTL over several state-of-the-art methods.
AB - This paper proposes a novel algorithm, named Non-Convex Calibrated Multi-Task Learning (NC-CMTL), for learning multiple related regression tasks jointly. Instead of utilizing the nuclear norm, NC-CMTL adopts a non-convex low-rank regularizer to explore the shared information among different tasks. In addition, considering that the regularization parameter for each regression task depends on its noise level, we replace the least-squares loss function with a square-root loss function. Computationally, as the proposed model has a non-smooth loss function and a non-convex regularization term, we construct an efficient re-weighted method to optimize it. Theoretically, we first present the convergence analysis of the constructed method and then prove that the derived solution is a stationary point of the original problem. In particular, the regularizer and optimization method used in this paper are also suitable for other rank minimization problems. Numerical experiments on both synthetic and real data illustrate the advantages of NC-CMTL over several state-of-the-art methods.
KW - Calibration
KW - Loss function
KW - Multi-Task Learning
KW - Regression
UR - http://www.scopus.com/inward/record.url?scp=85051551901&partnerID=8YFLogxK
U2 - 10.1145/3219819.3219951
DO - 10.1145/3219819.3219951
M3 - Conference contribution
AN - SCOPUS:85051551901
SN - 9781450355520
T3 - Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
SP - 2012
EP - 2021
BT - KDD 2018 - Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
PB - Association for Computing Machinery
T2 - 24th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD 2018
Y2 - 19 August 2018 through 23 August 2018
ER -