Attend to the Difference: Cross-Modality Person Re-Identification via Contrastive Correlation

Shizhou Zhang, Yifei Yang, Peng Wang, Guoqiang Liang, Xiuwei Zhang, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

40 Scopus citations

Abstract

The problem of cross-modality person re-identification has been receiving increasing attention recently due to its practical significance. Motivated by the fact that humans usually attend to differences when comparing two similar objects, we propose a dual-path cross-modality feature learning framework which preserves intrinsic spatial structures and attends to the differences between input cross-modality image pairs. Our framework is composed of two main components: a Dual-path Spatial-structure-preserving Common Space Network (DSCSN) and a Contrastive Correlation Network (CCN). The former embeds cross-modality images into a common 3D tensor space without losing spatial structure, while the latter extracts contrastive features by dynamically comparing input image pairs. Note that the representations generated for the input RGB and infrared images are mutually dependent on each other. We conduct extensive experiments on two publicly available RGB-IR ReID datasets, SYSU-MM01 and RegDB, and our proposed method outperforms state-of-the-art algorithms by a large margin under both full and simplified evaluation modes.
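The two components described above can be illustrated with a minimal, hypothetical sketch (all names, shapes, and the sigmoid gating are assumptions for illustration, not the authors' implementation): a shared projection embeds each modality into a common tensor space while keeping the spatial layout, and a contrastive step gates each embedding by the element-wise difference of the pair, so each output representation depends on the other input.

```python
import numpy as np

def embed(x, W):
    # Stand-in for the DSCSN: project channels into a common feature space
    # while preserving the spatial structure (H, W) of the input.
    # x: (C_in, H, W_spatial), W: (C_out, C_in) -> (C_out, H, W_spatial)
    return np.maximum(0.0, np.einsum('oc,chw->ohw', W, x))  # ReLU projection

def contrastive_correlation(f_a, f_b):
    # Stand-in for the CCN: an attention map derived from where the paired
    # embeddings disagree, so each reweighted feature depends on both inputs.
    diff = np.abs(f_a - f_b)            # magnitude of pairwise difference
    attn = 1.0 / (1.0 + np.exp(-diff))  # sigmoid gate in (0.5, 1)
    return f_a * attn, f_b * attn       # difference-attended pair

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))      # shared projection for both modalities
rgb = rng.standard_normal((3, 4, 4)) # toy RGB feature map
ir = rng.standard_normal((3, 4, 4))  # toy infrared feature map
g_rgb, g_ir = contrastive_correlation(embed(rgb, W), embed(ir, W))
print(g_rgb.shape, g_ir.shape)  # (8, 4, 4) (8, 4, 4)
```

The sketch only conveys the dependency structure: the same projection is applied to both modalities, and the gating makes each output a function of the whole pair rather than of one image alone.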

Original language: English
Pages (from-to): 8861-8872
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 30
DOIs
State: Published - 2021

Keywords

  • Cross-modality
  • common space
  • contrastive correlation
  • re-identification
