TY - JOUR
T1 - Multilevel Contrastive Multiview Clustering With Dual Self-Supervised Learning
AU - Bian, Jintang
AU - Lin, Yixiang
AU - Xie, Xiaohua
AU - Wang, Chang-Dong
AU - Yang, Lingxiao
AU - Lai, Jian-Huang
AU - Nie, Feiping
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Multiview clustering (MVC) aims to integrate multiple related but distinct views of data to achieve more accurate clustering. Contrastive learning has found many applications in MVC owing to its success in unsupervised visual representation learning. However, existing contrastive-learning-based MVC methods overlook the potential of high-similarity nearest neighbors as positive pairs. Moreover, they do not capture the multilevel (i.e., cluster-, instance-, and prototype-level) representational structure that naturally exists in multiview datasets. These limitations can hinder the structural compactness of the learned multiview representations. To address these issues, we propose a novel end-to-end deep MVC method called multilevel contrastive MVC (MCMC) with dual self-supervised learning (DSL). Specifically, we first treat the nearest neighbors of an object in the latent subspace as positive pairs for the multiview contrastive loss, which improves the compactness of the representation at the instance level. Second, we perform multilevel contrastive learning (MCL) over clusters, instances, and prototypes to capture the multilevel representational structure underlying the multiview data in the latent space. In addition, we learn consistent cluster assignments for MVC by adopting a DSL method that associates the structural representations at different levels. Evaluation experiments show that MCMC achieves intracluster compactness, intercluster separability, and higher clustering accuracy (ACC). Our code is available at https://github.com/bianjt-morning/MCMC.
KW - Contrastive learning
KW - multiview clustering (MVC)
KW - representation learning
KW - self-supervised learning
UR - http://www.scopus.com/inward/record.url?scp=105002745374&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2025.3552969
DO - 10.1109/TNNLS.2025.3552969
M3 - Article
AN - SCOPUS:105002745374
SN - 2162-237X
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
ER -