Capped ℓp-norm linear discriminant analysis for robust projections learning

Zheng Wang, Haojie Hu, Rong Wang, Qianrong Zhang, Feiping Nie, Xuelong Li

Research output: Contribution to journal › Article › peer-review

3 Scopus citations

Abstract

Linear Discriminant Analysis (LDA) is one of the most representative supervised methods for robust dimensionality reduction of high-dimensional data. High-dimensional datasets tend to contain outliers and other kinds of noise, yet most existing LDA models take the arithmetic mean of the samples as the optimal mean; the resulting deviation of the estimated data mean reduces the robustness of LDA. In this paper, we propose a novel robust trace ratio objective in which the difference between each sample and its class mean is replaced by differences between pairs of samples. In addition, the within-class scatter and the total scatter are measured by the capped ℓp-norm. This reformulation avoids mean calculation entirely and, at the same time, mitigates the negative effect of outliers on the objective function. Furthermore, an iterative optimization algorithm is derived to solve the model. Extensive experiments on several benchmark datasets demonstrate the superior performance of the proposed method.
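The two key ideas in the abstract — replacing sample-minus-class-mean residuals with pairwise same-class differences, and capping each residual's ℓp-norm contribution — can be sketched as follows. This is a minimal illustrative sketch, not the authors' algorithm: the function names, the cap threshold `eps`, and the plain summation (rather than the paper's trace ratio objective and iterative solver) are all assumptions made for illustration.

```python
import numpy as np

def capped_lp(residuals, p=1.0, eps=2.0):
    # Capped ell_p-norm of each residual vector: min(||r||_p^p, eps).
    # Outlier residuals contribute at most eps, limiting their influence.
    norms = np.sum(np.abs(residuals) ** p, axis=1)
    return np.minimum(norms, eps)

def pairwise_within_class_scatter(X, y, p=1.0, eps=2.0):
    # Within-class scatter measured over pairwise same-class differences,
    # so no class mean is ever computed (a mean-free reformulation).
    total = 0.0
    for c in np.unique(y):
        Xc = X[y == c]
        diffs = (Xc[:, None, :] - Xc[None, :, :]).reshape(-1, X.shape[1])
        total += capped_lp(diffs, p=p, eps=eps).sum()
    return total
```

With the cap in place, a single gross outlier in a class adds at most `eps` per pairwise difference instead of an arbitrarily large amount, which is the mechanism by which the capped ℓp-norm bounds the effect of outliers on the scatter.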

Original language: English
Pages (from-to): 399-409
Number of pages: 11
Journal: Neurocomputing
Volume: 511
DOIs
State: Published - 28 Oct 2022

Keywords

  • Capped ℓp-norm
  • Linear discriminant analysis (LDA)
  • Optimal mean
  • Robust dimensionality reduction
