Fast algorithm about kernel fisher discriminant analysis

Feng Zhao, Jun Ying Zhang, Jun Li Liang

Research output: Contribution to journal › Article › peer-review

1 Scopus citation

Abstract

The standard Kernel Fisher Discriminant Analysis (KFDA) may suffer from high computational complexity and slow feature extraction when the number of training samples is large. To address these problems, a fast KFDA algorithm is presented. First, an optimized procedure based on linear-correlation theory finds a basis of the subspace spanned by the training samples mapped into the feature space, while avoiding matrix inversion. Then, by expressing the optimal projection vectors as linear combinations of this basis and combining that expression with the Fisher criterion in the feature space, a novel criterion for computing the optimal projection vectors is derived; it only requires solving an eigenvalue problem for a matrix whose size equals the number of basis vectors. In addition, feature extraction for a sample only requires evaluating the kernel functions between the basis and that sample. Experimental results on several datasets demonstrate the validity of the presented algorithm.
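The abstract outlines the general idea without giving the exact procedure, so the following is only a minimal sketch of a reduced-basis KFDA in Python. It assumes a two-class problem, an RBF kernel, and a simple rank test (via the Gram matrix) to select linearly independent mapped samples as the basis; the function names (`select_basis`, `fit_fast_kfda`, `transform`) and numerical details are illustrative, not the authors' implementation, and the eigenvalue step shown here uses an ordinary solve rather than the inversion-free criterion described in the paper.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-gamma * sq)

def select_basis(X, kernel, tol=1e-6):
    """Greedily keep samples whose feature-space image is not (numerically)
    a linear combination of the images already kept."""
    basis_idx = []
    for i in range(X.shape[0]):
        cand = basis_idx + [i]
        G = kernel(X[cand], X[cand])
        # the candidate is independent iff the enlarged Gram matrix stays full rank
        if np.linalg.matrix_rank(G, tol=tol) == len(cand):
            basis_idx.append(i)
    return np.array(basis_idx)

def fit_fast_kfda(X, y, kernel):
    """Two-class KFDA restricted to the span of the selected basis.
    Returns the basis samples and the expansion coefficients alpha."""
    B = X[select_basis(X, kernel)]        # r basis samples, r << n
    K = kernel(B, X)                      # r x n kernel matrix
    m = K.mean(axis=1)
    r = B.shape[0]
    M = np.zeros((r, r))                  # between-class scatter in alpha-space
    N = np.zeros((r, r))                  # within-class scatter in alpha-space
    for c in np.unique(y):
        Kc = K[:, y == c]
        mc = Kc.mean(axis=1)
        d = (mc - m)[:, None]
        M += Kc.shape[1] * (d @ d.T)
        Kc_centered = Kc - mc[:, None]
        N += Kc_centered @ Kc_centered.T
    # leading eigenvector of the r x r problem; the size is the basis size, not n
    vals, vecs = np.linalg.eig(np.linalg.solve(N + 1e-8 * np.eye(r), M))
    alpha = np.real(vecs[:, np.argmax(np.real(vals))])
    return B, alpha

def transform(B, alpha, kernel, X_new):
    """Project new samples: only kernels between the basis and each sample are needed."""
    return kernel(X_new, B) @ alpha
```

The sketch reflects the two properties highlighted in the abstract: the eigenvalue problem is of size r (the number of basis vectors) rather than n (the number of training samples), and projecting a new sample costs only r kernel evaluations against the basis.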

Original language: English
Pages (from-to): 1731-1734
Number of pages: 4
Journal: Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology
Volume: 29
Issue number: 7
State: Published - Jul 2007
Externally published: Yes

Keywords

  • Kernel Fisher Discriminant Analysis (KFDA)
  • Kernel function
  • Optimal projection vector
