Simultaneously Learning Neighborship and Projection Matrix for Supervised Dimensionality Reduction

Yanwei Pang, Bo Zhou, Feiping Nie

Research output: Contribution to journal › Article › peer-review

Abstract

Explicitly or implicitly, most dimensionality reduction methods must determine which samples are neighbors and how similar those neighbors are in the original high-dimensional space. The projection matrix is then learned under the assumption that this neighborhood information (e.g., the similarities) is known and fixed before learning. However, it is difficult to measure the intrinsic similarities of samples precisely in high-dimensional space because of the curse of dimensionality. Consequently, the neighbors selected according to such similarities, and the projection matrix obtained from those similarities and neighbors, may not be optimal in the sense of classification and generalization. To overcome this drawback, in this paper we propose to treat the similarities and neighbors as variables and to learn them in the low-dimensional space. Both the optimal similarities and the projection matrix are obtained by minimizing a unified objective function, with nonnegative and sum-to-one constraints imposed on the similarities. Instead of setting the regularization parameter empirically, we treat it as a variable to be optimized; interestingly, its optimal value adapts to the neighbors in the low-dimensional space and has an intuitive meaning. Experimental results on the YALE B, COIL-100, and MNIST data sets demonstrate the effectiveness of the proposed method.
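The abstract does not state the objective itself, but the alternating scheme it describes can be sketched. Below is a minimal NumPy sketch, assuming an objective of the common adaptive-neighbor form min over W and S of sum_ij S_ij ||W^T x_i - W^T x_j||^2 + gamma_i S_ij^2, with each row of S on the probability simplex (nonnegative, sum-to-one) and W^T W = I. The function names, the choice of gamma_i, and the omission of the supervised (label-dependent) term are all simplifying assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def update_similarities(D, k):
    """Row-wise closed-form update of the similarity matrix S under
    nonnegative and sum-to-one (simplex) constraints. The regularization
    parameter gamma_i is chosen adaptively so each sample keeps exactly
    k nonzero neighbors in the low-dimensional space (an adaptive-neighbor
    rule; the paper's exact rule may differ). D holds squared pairwise
    distances between the projected samples."""
    n = D.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        d = D[i].copy()
        d[i] = np.inf                      # exclude self-similarity
        idx = np.argsort(d)[:k + 1]        # k neighbors + the (k+1)-th distance
        d_sel = d[idx]
        denom = k * d_sel[k] - d_sel[:k].sum() + 1e-12
        # S_ij = (d_{i,k+1} - d_ij) / denom is nonnegative and sums to one;
        # it corresponds to the adaptive choice gamma_i = denom / 2.
        S[i, idx[:k]] = np.maximum((d_sel[k] - d_sel[:k]) / denom, 0.0)
    return S

def update_projection(X, S, dim):
    """Fix S and pick the orthonormal W minimizing
    sum_ij S_ij ||W^T x_i - W^T x_j||^2: the eigenvectors of X L X^T
    with smallest eigenvalues, where L is the graph Laplacian of S."""
    S_sym = 0.5 * (S + S.T)
    L = np.diag(S_sym.sum(axis=1)) - S_sym
    vals, vecs = np.linalg.eigh(X @ L @ X.T)
    return vecs[:, :dim]                   # d x dim, satisfies W^T W = I

def learn_neighborship_and_projection(X, dim=2, k=5, n_iter=10):
    """Alternate between re-learning the neighborship S in the current
    low-dimensional space and re-learning the projection W. X is d x n."""
    n = X.shape[1]
    S = np.full((n, n), 1.0 / (n - 1))     # uniform initial similarities
    np.fill_diagonal(S, 0.0)
    for _ in range(n_iter):
        W = update_projection(X, S, dim)
        Y = W.T @ X                        # current dim x n embedding
        D = ((Y[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
        S = update_similarities(D, k)
    return W, S
```

Each pass projects the data, recomputes pairwise distances in the embedding, and re-solves the simplex-constrained similarity problem in closed form; this is why, as the abstract notes, the learned regularization parameter adapts to the neighbors in the low-dimensional space rather than being fixed in advance.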

Original language: English
Pages (from-to): 2779-2793
Number of pages: 15
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 30
Issue number: 9
State: Published - 1 Sep 2019
