TY - JOUR
T1 - Structure preserved fast dimensionality reduction
AU - Yi, Jihai
AU - Duan, Huiyu
AU - Wang, Jikui
AU - Yang, Zhengguo
AU - Nie, Feiping
N1 - Publisher Copyright:
© 2024 Elsevier B.V.
PY - 2024/9
Y1 - 2024/9
N2 - Many graph-based unsupervised dimensionality reduction techniques have attracted attention for their high accuracy. However, there is an urgent need to address their enormous time consumption in large-scale data scenarios. Therefore, we present a novel approach named Structure Preserved Fast Dimensionality Reduction (SPFDR). First, a parameter-insensitive, sparse, and scalable bipartite graph is constructed to build the similarity matrix. Then, through alternating iterative optimization, the linear dimensionality reduction matrix and the optimal similarity matrix that preserves the cluster structure are learned. The computational complexity of conventional graph-based dimensionality reduction methods is O(n²d+d³), whereas that of the proposed approach is O(ndm+nm²), where n, m, and d are the numbers of instances, anchors, and features, respectively. Finally, experiments conducted on multiple open datasets provide convincing evidence for the effectiveness and efficiency of the proposed method.
AB - Many graph-based unsupervised dimensionality reduction techniques have attracted attention for their high accuracy. However, there is an urgent need to address their enormous time consumption in large-scale data scenarios. Therefore, we present a novel approach named Structure Preserved Fast Dimensionality Reduction (SPFDR). First, a parameter-insensitive, sparse, and scalable bipartite graph is constructed to build the similarity matrix. Then, through alternating iterative optimization, the linear dimensionality reduction matrix and the optimal similarity matrix that preserves the cluster structure are learned. The computational complexity of conventional graph-based dimensionality reduction methods is O(n²d+d³), whereas that of the proposed approach is O(ndm+nm²), where n, m, and d are the numbers of instances, anchors, and features, respectively. Finally, experiments conducted on multiple open datasets provide convincing evidence for the effectiveness and efficiency of the proposed method.
KW - Bipartite graph
KW - Dimensionality reduction
KW - Large-scale data
KW - Unsupervised learning
UR - http://www.scopus.com/inward/record.url?scp=85196272529&partnerID=8YFLogxK
U2 - 10.1016/j.asoc.2024.111817
DO - 10.1016/j.asoc.2024.111817
M3 - Article
AN - SCOPUS:85196272529
SN - 1568-4946
VL - 162
JO - Applied Soft Computing
JF - Applied Soft Computing
M1 - 111817
ER -