TY - GEN
T1 - Learning robust locality preserving projection via p-order minimization
AU - Wang, Hua
AU - Nie, Feiping
AU - Huang, Heng
N1 - Publisher Copyright:
Copyright © 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
PY - 2015/6/1
Y1 - 2015/6/1
N2 - Locality preserving projection (LPP) is an effective dimensionality reduction method based on manifold learning, which is defined over the graph weighted squared ℓ2-norm distances in the projected subspace. Since the squared ℓ2-norm distance is prone to outliers, it is desirable to develop a robust LPP method. In this paper, motivated by existing studies that improve the robustness of statistical learning models via ℓ1-norm or not-squared ℓ2-norm formulations, we propose a robust LPP (rLPP) formulation to minimize the p-th order of the ℓ2-norm distances, which can better tolerate large outlying data samples because it suppresses the introduced bias more than the ℓ1-norm or not-squared ℓ2-norm minimizations. However, solving the formulated objective is very challenging because it is not only non-smooth but also non-convex. As an important theoretical contribution of this work, we systematically derive an efficient iterative algorithm to solve the general p-th order ℓ2-norm minimization problem, which, to the best of our knowledge, is solved for the first time in the literature. Extensive empirical evaluations of the proposed rLPP method have been performed, in which our new method outperforms the related state-of-the-art methods in a variety of experimental settings and demonstrates its effectiveness in seeking better subspaces on both noiseless and noisy data.
AB - Locality preserving projection (LPP) is an effective dimensionality reduction method based on manifold learning, which is defined over the graph weighted squared ℓ2-norm distances in the projected subspace. Since the squared ℓ2-norm distance is prone to outliers, it is desirable to develop a robust LPP method. In this paper, motivated by existing studies that improve the robustness of statistical learning models via ℓ1-norm or not-squared ℓ2-norm formulations, we propose a robust LPP (rLPP) formulation to minimize the p-th order of the ℓ2-norm distances, which can better tolerate large outlying data samples because it suppresses the introduced bias more than the ℓ1-norm or not-squared ℓ2-norm minimizations. However, solving the formulated objective is very challenging because it is not only non-smooth but also non-convex. As an important theoretical contribution of this work, we systematically derive an efficient iterative algorithm to solve the general p-th order ℓ2-norm minimization problem, which, to the best of our knowledge, is solved for the first time in the literature. Extensive empirical evaluations of the proposed rLPP method have been performed, in which our new method outperforms the related state-of-the-art methods in a variety of experimental settings and demonstrates its effectiveness in seeking better subspaces on both noiseless and noisy data.
UR - http://www.scopus.com/inward/record.url?scp=84960079432&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:84960079432
T3 - Proceedings of the National Conference on Artificial Intelligence
SP - 3059
EP - 3065
BT - Proceedings of the 29th AAAI Conference on Artificial Intelligence, AAAI 2015 and the 27th Innovative Applications of Artificial Intelligence Conference, IAAI 2015
PB - AI Access Foundation
T2 - 29th AAAI Conference on Artificial Intelligence, AAAI 2015 and the 27th Innovative Applications of Artificial Intelligence Conference, IAAI 2015
Y2 - 25 January 2015 through 30 January 2015
ER -