Robust principal component analysis via optimal mean by joint ℓ2,1 and Schatten p-norms minimization

Xiaoshuang Shi, Feiping Nie, Zhihui Lai, Zhenhua Guo

Research output: Contribution to journal › Article › peer-review

43 Citations (Scopus)

Abstract

Since principal component analysis (PCA) is sensitive to corrupted variables or observations, which limits its performance and applicability in real scenarios, several convex robust PCA methods have been developed to enhance its robustness. However, most of them neglect the optimal mean calculation problem: they center the data with the mean calculated by the ℓ2-norm, which is inconsistent because an ℓ1-norm objective function is used in the subsequent steps. In this paper, we propose a novel robust PCA method that can detect and remove outliers, exactly recover a low-rank matrix, and calculate the optimal mean. Specifically, we formulate an optimization model consisting of an ℓ2,1-norm based loss function and a Schatten p-norm regularization term. The ℓ2,1-norm in the loss function serves to detect and remove outliers, while the Schatten p-norm suppresses the singular values of the reconstructed data more strongly for smaller p (0 < p < 1), making it a better approximation to the rank than the trace norm. Experimental results on benchmark databases demonstrate the effectiveness of the proposed algorithm.
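The abstract does not state the full objective. For concreteness, the two norms it names have the standard definitions below, and a plausible formulation consistent with the description (notation assumed here: data matrix X ∈ R^{d×n}, optimal mean b ∈ R^d, all-ones vector 1 ∈ R^n, low-rank reconstruction Z, trade-off parameter λ) is the following sketch, not the paper's exact model:

```latex
% Standard norm definitions used in the model:
% l_{2,1}-norm: sum of the l_2 norms of the rows of a matrix E
\|E\|_{2,1} = \sum_{i=1}^{d} \Big( \sum_{j=1}^{n} E_{ij}^2 \Big)^{1/2}

% Schatten p-norm: p-norm of the singular values \sigma_i(Z);
% the regularizer typically uses its p-th power, \sum_i \sigma_i(Z)^p
\|Z\|_{S_p} = \Big( \sum_{i} \sigma_i(Z)^p \Big)^{1/p}, \quad 0 < p < 1

% Assumed form of the joint objective (mean b optimized jointly):
\min_{Z,\, b} \; \| X - b\mathbf{1}^{\top} - Z \|_{2,1}
  + \lambda \, \|Z\|_{S_p}^{p}
```

For p = 1 the Schatten p-norm reduces to the trace (nuclear) norm, so 0 < p < 1 interpolates toward the rank function itself, which is the sense in which it is a tighter rank surrogate.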

Original language: English
Pages (from-to): 205-213
Number of pages: 9
Journal: Neurocomputing
Volume: 283
DOI
Publication status: Published - 29 Mar 2018

