Attend to the Difference: Cross-Modality Person Re-Identification via Contrastive Correlation

Shizhou Zhang, Yifei Yang, Peng Wang, Guoqiang Liang, Xiuwei Zhang, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

40 Citations (Scopus)

Abstract

The problem of cross-modality person re-identification has received increasing attention recently due to its practical significance. Motivated by the fact that humans usually attend to the difference when comparing two similar objects, we propose a dual-path cross-modality feature learning framework that preserves intrinsic spatial structures and attends to the differences between input cross-modality image pairs. Our framework is composed of two main components: a Dual-path Spatial-structure-preserving Common Space Network (DSCSN) and a Contrastive Correlation Network (CCN). The former embeds cross-modality images into a common 3D tensor space without losing spatial structures, while the latter extracts contrastive features by dynamically comparing input image pairs. Note that the representations generated for the input RGB and infrared images are mutually dependent. We conduct extensive experiments on two publicly available RGB-IR ReID datasets, SYSU-MM01 and RegDB, and our proposed method outperforms state-of-the-art algorithms by a large margin under both the full and simplified evaluation modes.
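The abstract describes the architecture only at a high level. As a rough, hypothetical sketch of the idea (two modality-specific paths that keep spatial structure, plus a correlation module whose output for each image depends on its paired image), the PyTorch snippet below uses invented module names (DualPathEncoder, ContrastiveCorrelation) and arbitrary layer sizes; it is an illustration of the concept, not the paper's implementation.

import torch
import torch.nn as nn

class DualPathEncoder(nn.Module):
    """Toy dual-path encoder: one path per modality (RGB / IR), mapping each
    image to a 3D feature tensor so spatial structure is preserved.
    Layer sizes are illustrative only."""
    def __init__(self, out_channels=64):
        super().__init__()
        def path(in_ch):
            return nn.Sequential(
                nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, out_channels, 3, stride=2, padding=1), nn.ReLU(),
            )
        self.rgb_path = path(3)   # RGB person crops: 3 channels
        self.ir_path = path(1)    # infrared person crops: 1 channel

    def forward(self, rgb, ir):
        return self.rgb_path(rgb), self.ir_path(ir)

class ContrastiveCorrelation(nn.Module):
    """Toy 'attend to the difference' module: weight each spatial location of
    one feature map by its dissimilarity to the paired image's pooled feature,
    so each output depends on both inputs (mutually dependent)."""
    def forward(self, feat_a, feat_b):
        # Global descriptors of the partner images: (B, C, 1, 1)
        gb = feat_b.mean(dim=(2, 3), keepdim=True).expand_as(feat_a)
        ga = feat_a.mean(dim=(2, 3), keepdim=True).expand_as(feat_b)
        # Dissimilarity maps: 1 - cosine similarity at each spatial position
        diff_a = 1.0 - torch.cosine_similarity(feat_a, gb, dim=1).unsqueeze(1)
        diff_b = 1.0 - torch.cosine_similarity(feat_b, ga, dim=1).unsqueeze(1)
        # Re-weight features so locations that differ from the partner stand out
        out_a = (feat_a * diff_a).mean(dim=(2, 3))
        out_b = (feat_b * diff_b).mean(dim=(2, 3))
        return out_a, out_b

if __name__ == "__main__":
    encoder, correlate = DualPathEncoder(), ContrastiveCorrelation()
    rgb = torch.randn(2, 3, 128, 64)   # batch of RGB person crops
    ir = torch.randn(2, 1, 128, 64)    # batch of infrared person crops
    fa, fb = encoder(rgb, ir)
    va, vb = correlate(fa, fb)
    print(va.shape, vb.shape)          # torch.Size([2, 64]) each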

Original language: English
Pages (from-to): 8861-8872
Number of pages: 12
Journal: IEEE Transactions on Image Processing
Volume: 30
DOI
Publication status: Published - 2021
