TY - JOUR
T1 - Generalized KPCA by adaptive rules in feature space
AU - Pang, Yanwei
AU - Wang, Lei
AU - Yuan, Yuan
PY - 2010/4
Y1 - 2010/4
N2 - Principal component analysis (PCA) is well recognized for dimensionality reduction, and kernel PCA (KPCA) has also been proposed for statistical data analysis. However, KPCA fails to detect the nonlinear structure of data well when outliers exist. To address this problem, this paper presents a novel algorithm, named iterative robust KPCA (IRKPCA). IRKPCA deals well with outliers and can be carried out in an iterative manner, which makes it suitable for processing incremental input data. As in traditional robust PCA (RPCA), a binary field is employed to characterize the outlier process, and the optimization problem is formulated as maximizing the marginal distribution of a Gibbs distribution. In this paper, this optimization problem is solved by stochastic gradient descent techniques. In IRKPCA, the outlier process lies in a high-dimensional feature space, so the kernel trick is used. IRKPCA can be regarded as a kernelized version of RPCA and a robust form of the kernel Hebbian algorithm. Experimental results on synthetic data demonstrate the effectiveness of IRKPCA.
KW - Dimension reduction
KW - Feature extraction
KW - Iterative robust KPCA
KW - KPCA
KW - Outliers
KW - PCA
UR - http://www.scopus.com/inward/record.url?scp=77952580506&partnerID=8YFLogxK
U2 - 10.1080/00207160802044118
DO - 10.1080/00207160802044118
M3 - Article
AN - SCOPUS:77952580506
SN - 0020-7160
VL - 87
SP - 956
EP - 968
JO - International Journal of Computer Mathematics
JF - International Journal of Computer Mathematics
IS - 5
ER -