Discrete Robust Principal Component Analysis via Binary Weights Self-Learning

Feiping Nie, Sisi Wang, Zheng Wang, Rong Wang, Xuelong Li

Research output: Contribution to journal › Article › peer-review

9 Citations (Scopus)

Abstract

Principal component analysis (PCA) is a classical unsupervised dimensionality reduction algorithm, and one of its major weaknesses is that the squared ℓ2-norm cannot suppress the influence of outliers. Existing robust PCA methods based on norm paradigms have two drawbacks. First, the objective function of PCA based on the ℓ1-norm lacks rotational invariance and offers only limited robustness to outliers, and its solution mostly relies on a greedy search strategy, which is computationally expensive. Second, robust PCA based on the ℓ2,1-norm or the ℓ2,p-norm essentially learns probability weights for the data, which only weakens the influence of outliers on the learned projection matrix rather than eliminating it completely; moreover, its ability to detect anomalies is poor. To solve these problems, we propose a novel discrete robust principal component analysis (DRPCA). By self-learning binary weights, the influence of outliers on the projection matrix and the estimated data center is completely eliminated, and anomaly detection can be performed directly. In addition, an alternating iterative optimization algorithm is designed to solve the proposed problem and automatically update the binary weights. Finally, the proposed model is successfully applied to anomaly detection, and experimental results demonstrate its superiority over state-of-the-art methods.
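The binary-weight idea in the abstract can be illustrated with a minimal alternating scheme: fix the weights and fit the center and projection from the selected points only, then fix the projection and reassign a weight of 1 to the points with the smallest reconstruction error. This is a hedged sketch of the general technique, not the authors' exact DRPCA formulation; the function name `drpca_sketch` and the `n_inliers` parameter are assumptions for illustration.

```python
import numpy as np

def drpca_sketch(X, d, n_inliers, n_iter=20):
    """Alternating robust PCA with binary sample weights (illustrative sketch).

    X: (n, p) data matrix; d: target dimension;
    n_inliers: number of points treated as inliers (binary weight = 1)."""
    n = X.shape[0]
    s = np.ones(n, dtype=bool)  # binary weights: start by keeping all points
    for _ in range(n_iter):
        # Step 1: estimate center and projection from the selected points only,
        # so excluded outliers have no influence at all on either estimate
        mu = X[s].mean(axis=0)
        _, _, Vt = np.linalg.svd(X[s] - mu, full_matrices=False)
        W = Vt[:d].T  # (p, d) orthonormal basis of the principal subspace
        # Step 2: squared reconstruction error of every point in that subspace
        Xc = X - mu
        R = Xc - Xc @ W @ W.T
        err = (R ** 2).sum(axis=1)
        # Reassign binary weights: keep the n_inliers smallest-error points
        new_s = np.zeros(n, dtype=bool)
        new_s[np.argsort(err)[:n_inliers]] = True
        if np.array_equal(new_s, s):  # weights stable -> converged
            break
        s = new_s
    return W, mu, s  # points with s == False are flagged as anomalies

# Usage: 200 inliers on a 2-D subspace of R^10 plus 20 gross outliers
rng = np.random.default_rng(1)
inliers = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
outliers = rng.uniform(-20, 20, size=(20, 10))
X = np.vstack([inliers, outliers])
W, mu, s = drpca_sketch(X, d=2, n_inliers=200)
```

Because the weights are binary rather than probabilistic, excluded points contribute nothing to `mu` or `W`, which mirrors the paper's claim that outlier influence is eliminated rather than merely down-weighted, and the final `~s` mask directly serves as an anomaly detector.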

Original language: English
Pages (from-to): 9064-9077
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 34
Issue number: 11
DOI
Publication status: Published - 1 Nov 2023
