Simplifying Scalable Subspace Clustering and Its Multi-View Extension by Anchor-to-Sample Kernel

Research output: Contribution to journal › Article › peer-review


Abstract

As is well known, sparse subspace learning can provide good input for spectral clustering, thereby producing high-quality cluster partitions. However, it employs the complete set of samples as the dictionary for representation learning, resulting in non-negligible computational costs. Replacing the complete samples with a small set of representative ones (anchors) as the dictionary has therefore become a popular choice, giving rise to a series of related works. Unfortunately, although these works are linear in the number of samples, they are often quadratic or even cubic in the number of anchors. In this paper, by exploiting the properties of the original scalable subspace clustering problem, we derive a simpler problem to replace it. This new problem is linear in both the number of samples and the number of anchors, further enhancing scalability and enabling more efficient operations. Furthermore, thanks to the new problem formulation, we can adopt a separate fusion strategy for multi-view extensions. This strategy better measures inter-view differences and avoids alternating optimization, thereby achieving more robust and efficient multi-view clustering. Finally, comprehensive experiments demonstrate that our methods not only significantly reduce time overhead but also exhibit superior performance.
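To illustrate the general idea behind anchor-based scalable clustering described in the abstract, the following is a minimal, hypothetical sketch of a generic anchor-graph spectral clustering pipeline, not the authors' algorithm or their anchor-to-sample kernel. The function name, the Gaussian similarity, and the `sigma` bandwidth are illustrative assumptions; the key point it demonstrates is that working with an n-by-m anchor-to-sample matrix (m anchors, m much smaller than n) keeps the per-sample cost linear in n.

```python
# Illustrative sketch only: generic anchor-based spectral clustering,
# NOT the paper's method. Samples are represented through m << n anchors,
# so similarities form an n x m matrix Z rather than an n x n graph.
import numpy as np

def anchor_graph_labels(X, anchors, n_clusters, sigma=1.0):
    # Z[i, j]: Gaussian similarity between sample i and anchor j,
    # row-normalized so each row is a soft assignment over anchors.
    d2 = ((X[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)
    Z = np.exp(-d2 / (2 * sigma ** 2))
    Z /= Z.sum(axis=1, keepdims=True)

    # Column degree normalization; the spectral embedding of the implied
    # bipartite sample-anchor graph comes from the SVD of this n x m matrix.
    Zn = Z / np.sqrt(Z.sum(axis=0, keepdims=True))
    U, _, _ = np.linalg.svd(Zn, full_matrices=False)
    F = U[:, :n_clusters]  # n x k spectral embedding

    # Tiny Lloyd's k-means on the embedding, with deterministic
    # farthest-point initialization.
    C = [F[0]]
    for _ in range(1, n_clusters):
        d = np.min([((F - c) ** 2).sum(-1) for c in C], axis=0)
        C.append(F[int(np.argmax(d))])
    C = np.stack(C)
    for _ in range(50):
        labels = np.argmin(((F[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.stack([F[labels == k].mean(0) if (labels == k).any() else C[k]
                      for k in range(n_clusters)])
    return labels
```

Note that the SVD here is taken on the thin n-by-m matrix, which is what makes anchor formulations scale; the paper's contribution goes further by also removing the quadratic/cubic dependence on the number of anchors.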

Original language: English
Pages (from-to): 5084-5098
Number of pages: 15
Journal: IEEE Transactions on Image Processing
Volume: 34
State: Published - 2025

Keywords

  • Subspace clustering
  • graph learning
  • multi-view extension
  • scalable clustering
  • self-expression learning
