TY - JOUR
T1 - Randomized Rank-Revealing QLP for Low-Rank Matrix Decomposition
AU - Kaloorazi, Maboud F.
AU - Liu, Kai
AU - Chen, Jie
AU - De Lamare, Rodrigo C.
AU - Rahardja, Susanto
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2023
Y1 - 2023
N2 - The pivoted QLP decomposition is computed through two consecutive pivoted QR decompositions. It is an approximation to the computationally prohibitive singular value decomposition (SVD). This work is concerned with a partial QLP decomposition of matrices through the exploitation of random sampling. The method presented is tailored for low-rank matrices and called Randomized Unpivoted QLP (RU-QLP). Like pivoted QLP, RU-QLP is rank-revealing and yet it utilizes randomized column sampling and the unpivoted QR decomposition. The latter modifications allow RU-QLP to be highly scalable and parallelizable on advanced computational platforms. We provide an analysis for RU-QLP, thereby bringing insights into its characteristics and performance behavior. In particular, we derive bounds in terms of both spectral and Frobenius norms on: i) the rank-revealing property; ii) principal angles between approximate subspaces and exact singular subspaces and vectors; and iii) the errors of low-rank approximations. Effectiveness of the bounds is illustrated through numerical tests. We further use a modern, multicore machine equipped with a GPU to demonstrate the efficiency of RU-QLP. Our results show that compared to the randomized SVD, RU-QLP achieves a speedup of up to 7.1 and 8.5 times using the CPU and up to 2.3 and 5.8 times using the GPU for the decomposition of dense and sparse matrices, respectively.
AB - The pivoted QLP decomposition is computed through two consecutive pivoted QR decompositions. It is an approximation to the computationally prohibitive singular value decomposition (SVD). This work is concerned with a partial QLP decomposition of matrices through the exploitation of random sampling. The method presented is tailored for low-rank matrices and called Randomized Unpivoted QLP (RU-QLP). Like pivoted QLP, RU-QLP is rank-revealing and yet it utilizes randomized column sampling and the unpivoted QR decomposition. The latter modifications allow RU-QLP to be highly scalable and parallelizable on advanced computational platforms. We provide an analysis for RU-QLP, thereby bringing insights into its characteristics and performance behavior. In particular, we derive bounds in terms of both spectral and Frobenius norms on: i) the rank-revealing property; ii) principal angles between approximate subspaces and exact singular subspaces and vectors; and iii) the errors of low-rank approximations. Effectiveness of the bounds is illustrated through numerical tests. We further use a modern, multicore machine equipped with a GPU to demonstrate the efficiency of RU-QLP. Our results show that compared to the randomized SVD, RU-QLP achieves a speedup of up to 7.1 and 8.5 times using the CPU and up to 2.3 and 5.8 times using the GPU for the decomposition of dense and sparse matrices, respectively.
KW - Low-rank approximation
KW - matrix decomposition
KW - pivoted QLP
KW - principal angles
KW - randomized methods
KW - scalable methods
UR - http://www.scopus.com/inward/record.url?scp=85163522059&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2023.3288889
DO - 10.1109/ACCESS.2023.3288889
M3 - Article
AN - SCOPUS:85163522059
SN - 2169-3536
VL - 11
SP - 63650
EP - 63666
JO - IEEE Access
JF - IEEE Access
ER -