TY - JOUR
T1 - Enhanced Sparsity Prior Model for Low-Rank Tensor Completion
AU - Xue, Jize
AU - Zhao, Yongqiang
AU - Liao, Wenzhi
AU - Chan, Jonathan Cheung-Wai
AU - Kong, Seong G.
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2020/11
Y1 - 2020/11
N2 - Conventional tensor completion (TC) methods generally assume that the sparsity of tensor-valued data lies in a global subspace, with this so-called global sparsity prior measured by the tensor nuclear norm. Such an assumption is unreliable for recovering low-rank (LR) tensor data, especially when a considerable portion of the data is missing. To mitigate this weakness, this article presents an enhanced sparsity prior model for LR tensor completion (LRTC) that exploits both local and global sparsity information in a latent LR tensor. Specifically, we adopt a doubly weighted strategy for the nuclear norm along each mode to characterize the global sparsity prior of the tensor. Unlike traditional tensor-based local sparsity descriptions, the proposed factor gradient sparsity prior in the Tucker decomposition model describes the underlying subspace local smoothness of real-world tensor objects and simultaneously characterizes local piecewise structure over all dimensions. Moreover, the proposed local sparsity prior does not require minimizing the rank of a tensor. Extensive experiments on synthetic data, real-world hyperspectral images, and face modeling data demonstrate that the proposed model outperforms state-of-the-art techniques in prediction capability and efficiency.
AB - Conventional tensor completion (TC) methods generally assume that the sparsity of tensor-valued data lies in a global subspace, with this so-called global sparsity prior measured by the tensor nuclear norm. Such an assumption is unreliable for recovering low-rank (LR) tensor data, especially when a considerable portion of the data is missing. To mitigate this weakness, this article presents an enhanced sparsity prior model for LR tensor completion (LRTC) that exploits both local and global sparsity information in a latent LR tensor. Specifically, we adopt a doubly weighted strategy for the nuclear norm along each mode to characterize the global sparsity prior of the tensor. Unlike traditional tensor-based local sparsity descriptions, the proposed factor gradient sparsity prior in the Tucker decomposition model describes the underlying subspace local smoothness of real-world tensor objects and simultaneously characterizes local piecewise structure over all dimensions. Moreover, the proposed local sparsity prior does not require minimizing the rank of a tensor. Extensive experiments on synthetic data, real-world hyperspectral images, and face modeling data demonstrate that the proposed model outperforms state-of-the-art techniques in prediction capability and efficiency.
KW - Enhanced sparsity prior
KW - factor gradient sparsity
KW - global and local sparsities
KW - low-rank (LR) tensor completion (TC)
KW - Tucker decomposition
UR - http://www.scopus.com/inward/record.url?scp=85077249763&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2019.2956153
DO - 10.1109/TNNLS.2019.2956153
M3 - Article
C2 - 31880566
AN - SCOPUS:85077249763
SN - 2162-237X
VL - 31
SP - 4567
EP - 4581
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 11
M1 - 8941238
ER -