Abstract
Diagonal principal component analysis (DiaPCA) is an important method for dimensionality reduction and feature extraction. It typically relies on the ℓ2-norm criterion for optimization and is therefore sensitive to outliers. In this paper, we present DiaPCA with non-greedy ℓ1-norm maximization (DiaPCA-L1 non-greedy), which is more robust to outliers. Experimental results on two benchmark datasets show the effectiveness and advantages of the proposed method.
Original language | English
---|---
Pages (from-to) | 57-62
Number of pages | 6
Journal | Neurocomputing
Volume | 171
DOI |
Publication status | Published - 1 Jan 2016
Externally published | Yes
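The abstract refers to non-greedy ℓ1-norm maximization. A minimal sketch of that core iteration is given below, assuming the common formulation of maximizing Σᵢ ‖Wᵀxᵢ‖₁ over orthonormal W via alternating sign and polar-decomposition steps; the function name, the toy data, and all parameters are illustrative, and the diagonal-image transform specific to DiaPCA is omitted.

```python
import numpy as np

def l1_pca_nongreedy(X, k, n_iter=50, seed=0):
    """Sketch of non-greedy l1-norm PCA: find an orthonormal
    projection W (d x k) that maximizes sum_i ||W^T x_i||_1."""
    d, n = X.shape
    rng = np.random.default_rng(seed)
    # Random orthonormal initialization via QR decomposition.
    W, _ = np.linalg.qr(rng.standard_normal((d, k)))
    for _ in range(n_iter):
        S = np.sign(W.T @ X)   # k x n matrix of signs
        S[S == 0] = 1          # break ties to avoid zero signs
        M = X @ S.T            # d x k weighted data aggregate
        # The polar factor of M maximizes trace(W^T M) over
        # orthonormal W, updating all k directions jointly
        # (the "non-greedy" step, as opposed to one-by-one deflation).
        U, _, Vt = np.linalg.svd(M, full_matrices=False)
        W_new = U @ Vt
        if np.allclose(W_new, W):
            break
        W = W_new
    return W

# Toy usage on random data (illustrative only).
X = np.random.default_rng(1).standard_normal((10, 100))
W = l1_pca_nongreedy(X, 3)
print(W.shape)  # projection matrix with orthonormal columns
```

Because every column of W is updated simultaneously each iteration, this variant avoids the error accumulation of greedy, one-component-at-a-time ℓ1-PCA schemes.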