TY - GEN
T1 - Discrete Multi-Kernel K-Means with Diverse and Optimal Kernel Learning
AU - Lu, Yihang
AU - Lu, Jitao
AU - Wang, Rong
AU - Nie, Feiping
N1 - Publisher Copyright:
© 2022 IEEE
PY - 2022
Y1 - 2022
AB - Multiple Kernel k-means and its variants integrate a group of kernels to improve clustering performance, but they still have some drawbacks: 1) linearly combining base kernels to obtain the optimal one limits kernel representability and cuts off the negotiation between kernel learning and clustering; 2) ignoring the correlation among kernels leads to kernel redundancy; 3) solving the NP-hard cluster assignment problem with a two-stage strategy leads to information loss. In this paper, we propose the Discrete Multi-kernel k-means with Diverse and Optimal Kernel Learning (DMK-DOK) model, which adaptively seeks a better kernel residing in the neighborhood of the base kernels and negotiates between kernel learning and clustering. Moreover, it implicitly penalizes highly correlated kernels to enhance kernel fusion with less redundancy and more diversity. Furthermore, it jointly learns discrete and relaxed labels in the same optimization objective, which avoids information loss. Lastly, extensive experiments conducted on real-world datasets illustrate the superiority of our model.
KW - Kernel method
KW - Multiple Kernel Clustering
KW - Multiple Kernel k-means
UR - http://www.scopus.com/inward/record.url?scp=85134013503&partnerID=8YFLogxK
U2 - 10.1109/ICASSP43922.2022.9747734
DO - 10.1109/ICASSP43922.2022.9747734
M3 - Conference contribution
AN - SCOPUS:85134013503
T3 - ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
SP - 1186
EP - 1190
BT - 2022 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 47th IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2022
Y2 - 23 May 2022 through 27 May 2022
ER -