TY - GEN
T1 - Self-Paced and Discrete Multiple Kernel k-Means
AU - Lu, Yihang
AU - Zheng, Xuan
AU - Lu, Jitao
AU - Wang, Rong
AU - Nie, Feiping
AU - Li, Xuelong
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/10/17
Y1 - 2022/10/17
N2 - Multiple Kernel K-means (MKKM) uses various kernels from different sources to improve clustering performance. However, most existing models are non-convex and prone to getting stuck in bad local optima, especially in the presence of noise and outliers. To address this issue, we propose a novel Self-Paced and Discrete Multiple Kernel K-Means (SPD-MKKM). It learns the MKKM model in a meaningful order by progressing both samples and kernels from easy to complex, which helps avoid bad local optima. In addition, whereas existing methods optimize in two stages, first learning a relaxed matrix and then recovering the discrete one via an extra discretization step, our method directly yields the discrete cluster indicator matrix without any extra process. Moreover, a well-designed alternating optimization based on the coordinate descent technique reduces the overall computational complexity. Finally, thorough experiments on real-world datasets demonstrate the effectiveness and efficiency of our method.
AB - Multiple Kernel K-means (MKKM) uses various kernels from different sources to improve clustering performance. However, most existing models are non-convex and prone to getting stuck in bad local optima, especially in the presence of noise and outliers. To address this issue, we propose a novel Self-Paced and Discrete Multiple Kernel K-Means (SPD-MKKM). It learns the MKKM model in a meaningful order by progressing both samples and kernels from easy to complex, which helps avoid bad local optima. In addition, whereas existing methods optimize in two stages, first learning a relaxed matrix and then recovering the discrete one via an extra discretization step, our method directly yields the discrete cluster indicator matrix without any extra process. Moreover, a well-designed alternating optimization based on the coordinate descent technique reduces the overall computational complexity. Finally, thorough experiments on real-world datasets demonstrate the effectiveness and efficiency of our method.
KW - clustering
KW - multiple kernel k-means
KW - self-paced learning
UR - http://www.scopus.com/inward/record.url?scp=85140831396&partnerID=8YFLogxK
U2 - 10.1145/3511808.3557696
DO - 10.1145/3511808.3557696
M3 - Conference contribution
AN - SCOPUS:85140831396
T3 - International Conference on Information and Knowledge Management, Proceedings
SP - 4284
EP - 4288
BT - CIKM 2022 - Proceedings of the 31st ACM International Conference on Information and Knowledge Management
PB - Association for Computing Machinery
T2 - 31st ACM International Conference on Information and Knowledge Management, CIKM 2022
Y2 - 17 October 2022 through 21 October 2022
ER -