TY - JOUR
T1 - Towards Robust Discriminative Projections Learning via Non-Greedy ℓ2,1-Norm MinMax
AU - Nie, Feiping
AU - Wang, Zheng
AU - Wang, Rong
AU - Wang, Zhen
AU - Li, Xuelong
N1 - Publisher Copyright:
© 1979-2012 IEEE.
PY - 2021/6/1
Y1 - 2021/6/1
N2 - Linear Discriminant Analysis (LDA) is one of the most successful supervised dimensionality reduction methods and has been widely used in many real-world applications. However, the ℓ2-norm is employed as the distance metric in the objective of LDA, which is sensitive to outliers. Many previous works improve the robustness of LDA by using the ℓ1-norm distance. However, their robustness against outliers is limited, and the solvers for ℓ1-norm models are mostly based on a greedy search strategy, which is time-consuming and prone to getting stuck in local optima. In this paper, we propose a novel robust LDA measured by the ℓ2,1-norm to learn robust discriminative projections. The proposed model is challenging to solve since it needs to minimize and maximize (minmax) ℓ2,1-norm terms simultaneously. To this end, we first systematically derive an efficient iterative optimization algorithm to solve a general ratio minimization problem and rigorously prove its convergence. More importantly, an alternately non-greedy iterative re-weighted optimization algorithm is developed based on the preceding approach for solving the proposed ℓ2,1-norm minmax problem. In addition, an optimal weighted mean mechanism is derived according to the designed objective and solver, which can be applied to other approaches to improve robustness. Experimental results on several real-world datasets demonstrate the effectiveness of the proposed method.
AB - Linear Discriminant Analysis (LDA) is one of the most successful supervised dimensionality reduction methods and has been widely used in many real-world applications. However, the ℓ2-norm is employed as the distance metric in the objective of LDA, which is sensitive to outliers. Many previous works improve the robustness of LDA by using the ℓ1-norm distance. However, their robustness against outliers is limited, and the solvers for ℓ1-norm models are mostly based on a greedy search strategy, which is time-consuming and prone to getting stuck in local optima. In this paper, we propose a novel robust LDA measured by the ℓ2,1-norm to learn robust discriminative projections. The proposed model is challenging to solve since it needs to minimize and maximize (minmax) ℓ2,1-norm terms simultaneously. To this end, we first systematically derive an efficient iterative optimization algorithm to solve a general ratio minimization problem and rigorously prove its convergence. More importantly, an alternately non-greedy iterative re-weighted optimization algorithm is developed based on the preceding approach for solving the proposed ℓ2,1-norm minmax problem. In addition, an optimal weighted mean mechanism is derived according to the designed objective and solver, which can be applied to other approaches to improve robustness. Experimental results on several real-world datasets demonstrate the effectiveness of the proposed method.
KW - Robust dimensionality reduction
KW - non-greedy iterative re-weighted solver
KW - optimal weighted mean
KW - outlier
KW - ℓ2,1-norm minmax problem
UR - http://www.scopus.com/inward/record.url?scp=85077284646&partnerID=8YFLogxK
U2 - 10.1109/TPAMI.2019.2961877
DO - 10.1109/TPAMI.2019.2961877
M3 - Article
C2 - 31880539
AN - SCOPUS:85077284646
SN - 0162-8828
VL - 43
SP - 2086
EP - 2100
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 6
M1 - 8941268
ER -