Towards Robust Discriminative Projections Learning via Non-Greedy ℓ2,1-Norm MinMax

Research output: Contribution to journal › Article › Peer-review

82 Citations (Scopus)

Abstract

Linear Discriminant Analysis (LDA) is one of the most successful supervised dimensionality reduction methods and has been widely used in many real-world applications. However, the ℓ2-norm is employed as the distance metric in the objective of LDA, which is sensitive to outliers. Many previous works improve the robustness of LDA by using the ℓ1-norm distance. However, their robustness against outliers is limited, and ℓ1-norm solvers are mostly based on a greedy search strategy, which is time-consuming and easily gets stuck in a local optimum. In this paper, we propose a novel robust LDA measured by the ℓ2,1-norm to learn robust discriminative projections. The proposed model is challenging to solve since it needs to minimize and maximize (minmax) ℓ2,1-norm terms simultaneously. To this end, we first systematically derive an efficient iterative optimization algorithm to solve a general ratio minimization problem, and then rigorously prove its convergence. More importantly, an alternately non-greedy iterative re-weighted optimization algorithm is developed based on the preceding approach for solving the proposed ℓ2,1-norm minmax problem. Besides, an optimal weighted mean mechanism is derived according to the designed objective and solver, which can be applied to other approaches for robustness improvement. Experimental results on several real-world datasets show the effectiveness of the proposed method.
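The abstract describes an alternating, iteratively re-weighted solver for a ratio of ℓ2,1-norm terms. As a rough illustration only, the sketch below shows a generic iteratively re-weighted scheme for a robust LDA-style ℓ2,1-norm ratio objective; the function name robust_lda_l21, the 1/(2‖·‖2) re-weighting surrogate, the SVD-based initialization, and the generalized-eigenvalue update are all assumptions for this sketch and are not the authors' exact non-greedy minmax algorithm.

```python
import numpy as np
from scipy.linalg import eigh

def robust_lda_l21(X, y, k, n_iter=30, eps=1e-8):
    """Hypothetical iteratively re-weighted sketch of an l2,1-norm
    ratio objective for robust LDA (not the paper's exact solver).

    X : (n, d) data matrix, y : (n,) integer class labels,
    k : number of projection directions.
    """
    n, d = X.shape
    classes = np.unique(y)
    mu = X.mean(axis=0)
    mu_c = {c: X[y == c].mean(axis=0) for c in classes}

    # within-class and (scaled) between-class difference vectors
    E_w = np.stack([X[i] - mu_c[y[i]] for i in range(n)])           # (n, d)
    E_b = np.stack([np.sqrt((y == c).sum()) * (mu_c[c] - mu)
                    for c in classes])                               # (C, d)

    # initialize from the right singular vectors of the between-class term
    W = np.linalg.svd(E_b, full_matrices=False)[2][:k].T             # (d, k)

    for _ in range(n_iter):
        # re-weights 1 / (2 ||W^T e||_2): the usual l2,1-norm surrogate
        s_w = 1.0 / (2.0 * np.maximum(np.linalg.norm(E_w @ W, axis=1), eps))
        s_b = 1.0 / (2.0 * np.maximum(np.linalg.norm(E_b @ W, axis=1), eps))

        # weighted within- and between-class scatter matrices
        Sw = (E_w * s_w[:, None]).T @ E_w
        Sb = (E_b * s_b[:, None]).T @ E_b

        # one generalized-eigenvalue step on the weighted ratio
        evals, evecs = eigh(Sb, Sw + eps * np.eye(d))
        W = evecs[:, np.argsort(evals)[::-1][:k]]

    return W
```

Under this (assumed) re-weighting, samples whose projected within-class residual is large receive small weights, which is the usual route by which ℓ2,1-norm objectives suppress the influence of outliers compared with squared ℓ2 distances.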

Original language: English
Article number: 8941268
Pages (from-to): 2086-2100
Number of pages: 15
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 43
Issue number: 6
DOI
Publication status: Published - 1 Jun 2021
