TY - JOUR
T1 - Simplifying Scalable Subspace Clustering and Its Multi-View Extension by Anchor-to-Sample Kernel
AU - Lu, Zhoumin
AU - Nie, Feiping
AU - Ma, Linru
AU - Wang, Rong
AU - Li, Xuelong
N1 - Publisher Copyright:
© 1992-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - As is well known, sparse subspace learning can provide good input for spectral clustering, thereby producing high-quality cluster partitions. However, it uses the complete set of samples as the dictionary for representation learning, incurring non-negligible computational costs. Replacing the complete samples with representative ones (anchors) as the dictionary has therefore become a popular choice, giving rise to a series of related works. Unfortunately, although these works are linear in the number of samples, they are often quadratic or even cubic in the number of anchors. In this paper, we derive a simpler problem to replace the original scalable subspace clustering formulation and exploit its properties. The new problem is linear in both the number of samples and the number of anchors, further enhancing scalability and enabling more efficient optimization. Furthermore, thanks to the new formulation, we can adopt a separate fusion strategy for the multi-view extension. This strategy better measures inter-view differences and avoids alternating optimization, achieving more robust and efficient multi-view clustering. Finally, comprehensive experiments demonstrate that our methods not only significantly reduce time overhead but also deliver superior performance.
AB - As is well known, sparse subspace learning can provide good input for spectral clustering, thereby producing high-quality cluster partitions. However, it uses the complete set of samples as the dictionary for representation learning, incurring non-negligible computational costs. Replacing the complete samples with representative ones (anchors) as the dictionary has therefore become a popular choice, giving rise to a series of related works. Unfortunately, although these works are linear in the number of samples, they are often quadratic or even cubic in the number of anchors. In this paper, we derive a simpler problem to replace the original scalable subspace clustering formulation and exploit its properties. The new problem is linear in both the number of samples and the number of anchors, further enhancing scalability and enabling more efficient optimization. Furthermore, thanks to the new formulation, we can adopt a separate fusion strategy for the multi-view extension. This strategy better measures inter-view differences and avoids alternating optimization, achieving more robust and efficient multi-view clustering. Finally, comprehensive experiments demonstrate that our methods not only significantly reduce time overhead but also deliver superior performance.
KW - Subspace clustering
KW - graph learning
KW - multi-view extension
KW - scalable clustering
KW - self-expression learning
UR - https://www.scopus.com/pages/publications/105012390086
U2 - 10.1109/TIP.2025.3593057
DO - 10.1109/TIP.2025.3593057
M3 - Article
AN - SCOPUS:105012390086
SN - 1057-7149
VL - 34
SP - 5084
EP - 5098
JO - IEEE Transactions on Image Processing
JF - IEEE Transactions on Image Processing
ER -