Convex sparse PCA for unsupervised feature learning

Xiaojun Chang, Feiping Nie, Yi Yang, Chengqi Zhang, Heng Huang

Research output: Contribution to journal › Article › peer-review

90 Citations (Scopus)

Abstract

Principal component analysis (PCA) has been widely applied to dimensionality reduction and data preprocessing in engineering, biology, social science, and other fields. Classical PCA and its variants seek linear projections of the original variables that yield low-dimensional feature representations with maximal variance. One limitation is that the results of PCA are difficult to interpret. Moreover, classical PCA is vulnerable to noisy data. In this paper, we propose a Convex Sparse Principal Component Analysis (CSPCA) algorithm and apply it to feature learning. First, we show that PCA can be formulated as a low-rank regression optimization problem. Based on this formulation, the l2,1-norm minimization is incorporated into the objective function to make the regression coefficients sparse and thereby robust to outliers. Furthermore, based on the sparse model used in CSPCA, an optimal weight is assigned to each of the original features, which gives the output good interpretability. With the output of our CSPCA, we can effectively analyze the importance of each feature under the PCA criteria. Our new objective function is convex, and we propose an iterative algorithm to optimize it. We apply the CSPCA algorithm to feature selection and conduct extensive experiments on seven benchmark datasets. Experimental results demonstrate that the proposed algorithm outperforms state-of-the-art unsupervised feature selection algorithms.
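The abstract describes two ingredients: reformulating PCA as a low-rank regression problem, and adding an l2,1-norm penalty so that whole rows of the coefficient matrix shrink to zero, giving per-feature importance weights. The sketch below illustrates that idea, not the authors' exact CSPCA algorithm: it regresses the data onto its leading PCA scores under an l2,1 penalty, solved by iteratively re-weighted least squares (a standard technique for l2,1 minimization), and scores each feature by the l2 norm of its coefficient row. All names (`l21_pca_regression`, `gamma`) and the choice of PCA scores as the regression target are illustrative assumptions.

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: the sum of the l2 norms of the rows of W."""
    return np.sum(np.linalg.norm(W, axis=1))

def l21_pca_regression(X, n_components=1, gamma=0.1, n_iter=50, eps=1e-8):
    """Illustrative sketch (not the paper's CSPCA): solve
        min_W ||X W - Y||_F^2 + gamma * ||W||_{2,1},
    where Y holds the leading PCA scores of X, via iteratively
    re-weighted least squares.  The l2,1 penalty drives whole rows
    of W toward zero, so the row norms of W rank the features."""
    X = X - X.mean(axis=0)                       # center, as in PCA
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    Y = X @ Vt[:n_components].T                  # regression target: PCA scores
    d = X.shape[1]
    XtX, XtY = X.T @ X, X.T @ Y
    W = np.linalg.solve(XtX + gamma * np.eye(d), XtY)  # ridge warm start
    history = []
    for _ in range(n_iter):
        # D_ii = 1 / (2 ||w_i||_2); eps guards the division for zero rows
        d_diag = 1.0 / (2.0 * (np.linalg.norm(W, axis=1) + eps))
        W = np.linalg.solve(XtX + gamma * np.diag(d_diag), XtY)
        history.append(np.linalg.norm(X @ W - Y, "fro") ** 2
                       + gamma * l21_norm(W))
    return W, history

# Toy data: features 0-1 carry the variance, features 2-4 are near-constant noise.
rng = np.random.default_rng(0)
X = np.hstack([5.0 * rng.standard_normal((100, 2)),
               0.1 * rng.standard_normal((100, 3))])
W, history = l21_pca_regression(X)
scores = np.linalg.norm(W, axis=1)               # per-feature importance weights
```

On the toy data, the importance weights concentrate on the two high-variance features, mirroring the paper's claim that the learned sparse coefficients reveal which original features matter under the PCA criterion.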

Original language: English
Article number: 3
Journal: ACM Transactions on Knowledge Discovery from Data
Volume: 11
Issue number: 1
DOI
Publication status: Published - Jul 2016
Externally published: Yes
