Improving visible-thermal ReID with structural common space embedding and part models

Lingyan Ran, Yujun Hong, Shizhou Zhang, Yifei Yang, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

5 Scopus citations

Abstract

With the emergence of large-scale datasets and deep learning systems, person re-identification (Re-ID) has made many significant breakthroughs. Meanwhile, visible-thermal person re-identification (V-T Re-ID) between visible and thermal images has also received ever-increasing attention. However, most typical visible-visible person re-identification (V-V Re-ID) algorithms are difficult to apply directly to V-T Re-ID, due to the large cross-modality intra-class and inter-class variation. In this paper, we build an end-to-end dual-path spatial-structure-preserving common space network that transfers V-V Re-ID methods to the V-T Re-ID domain effectively. The framework mainly consists of two parts: a modality-specific feature embedding network and a common feature space. Benefiting from the common space, our framework can abstract attentive common information by learning local feature representations for V-T Re-ID. We conduct extensive experiments on the publicly available RGB-IR Re-ID benchmark datasets, SYSU-MM01 and RegDB, to demonstrate the effectiveness of bridging the gap between V-V Re-ID and V-T Re-ID. Experimental results achieve state-of-the-art performance.
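The dual-path idea described in the abstract can be sketched roughly as follows: each modality gets its own embedding branch, and both branches project into one shared common space where cross-modality features become directly comparable. This is a minimal illustrative sketch in NumPy, not the paper's actual architecture; all dimensions, weight shapes, and the `embed` helper are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not taken from the paper)
D_IN, D_EMB, D_COMMON = 512, 256, 128

# Modality-specific embedding weights: one path per modality
W_visible = rng.standard_normal((D_IN, D_EMB)) * 0.01
W_thermal = rng.standard_normal((D_IN, D_EMB)) * 0.01

# Shared projection into the common feature space (weights shared across paths)
W_common = rng.standard_normal((D_EMB, D_COMMON)) * 0.01

def embed(x, W_modality):
    """Dual-path forward: modality-specific embedding, then shared common space."""
    h = np.maximum(x @ W_modality, 0.0)  # modality-specific branch with ReLU
    z = h @ W_common                     # shared common-space projection
    # L2-normalize so cosine similarity reduces to a dot product
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

# A visible and a thermal feature vector land in the same common space
x_vis = rng.standard_normal((1, D_IN))
x_thm = rng.standard_normal((1, D_IN))
z_vis = embed(x_vis, W_visible)
z_thm = embed(x_thm, W_thermal)

# Cross-modality matching score: cosine similarity in the common space
similarity = float(z_vis @ z_thm.T)
print(z_vis.shape, z_thm.shape)
```

In a trained model the two branches would be deep CNNs and the common space would be learned with identity and cross-modality losses; the point here is only that matching happens after both modalities are mapped into one shared representation.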

Original language: English
Pages (from-to): 25-31
Number of pages: 7
Journal: Pattern Recognition Letters
Volume: 142
DOIs
State: Published - Feb 2021

Keywords

  • Feature representation learning
  • Spatial-structure-preserving feature embedding
  • Visible-thermal person re-identification

