TY - JOUR
T1 - Improving visible-thermal ReID with structural common space embedding and part models
AU - Ran, Lingyan
AU - Hong, Yujun
AU - Zhang, Shizhou
AU - Yang, Yifei
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 2020
PY - 2021/2
Y1 - 2021/2
N2 - With the emergence of large-scale datasets and deep learning systems, person re-identification (Re-ID) has made many significant breakthroughs. Meanwhile, visible-thermal person re-identification (V-T Re-ID) between visible and thermal images has also received ever-increasing attention. However, most typical visible-visible person re-identification (V-V Re-ID) algorithms are difficult to apply directly to the V-T Re-ID task, due to the large cross-modality intra-class and inter-class variation. In this paper, we build an end-to-end dual-path spatial-structure-preserving common space network to transfer some V-V Re-ID methods to the V-T Re-ID domain effectively. The framework mainly consists of two parts: a modality-specific feature embedding network and a common feature space. Benefiting from the common space, our framework can abstract attentive common information by learning local feature representations for V-T Re-ID. We conduct extensive experiments on the publicly available RGB-IR Re-ID benchmark datasets, SYSU-MM01 and RegDB, to demonstrate the effectiveness of bridging the gap between V-V Re-ID and V-T Re-ID. Experimental results achieve state-of-the-art performance.
AB - With the emergence of large-scale datasets and deep learning systems, person re-identification (Re-ID) has made many significant breakthroughs. Meanwhile, visible-thermal person re-identification (V-T Re-ID) between visible and thermal images has also received ever-increasing attention. However, most typical visible-visible person re-identification (V-V Re-ID) algorithms are difficult to apply directly to the V-T Re-ID task, due to the large cross-modality intra-class and inter-class variation. In this paper, we build an end-to-end dual-path spatial-structure-preserving common space network to transfer some V-V Re-ID methods to the V-T Re-ID domain effectively. The framework mainly consists of two parts: a modality-specific feature embedding network and a common feature space. Benefiting from the common space, our framework can abstract attentive common information by learning local feature representations for V-T Re-ID. We conduct extensive experiments on the publicly available RGB-IR Re-ID benchmark datasets, SYSU-MM01 and RegDB, to demonstrate the effectiveness of bridging the gap between V-V Re-ID and V-T Re-ID. Experimental results achieve state-of-the-art performance.
KW - Feature representation learning
KW - Spatial-structure-preserving feature embedding
KW - Visible-thermal person re-identification
UR - http://www.scopus.com/inward/record.url?scp=85098721208&partnerID=8YFLogxK
U2 - 10.1016/j.patrec.2020.11.020
DO - 10.1016/j.patrec.2020.11.020
M3 - Article
AN - SCOPUS:85098721208
SN - 0167-8655
VL - 142
SP - 25
EP - 31
JO - Pattern Recognition Letters
JF - Pattern Recognition Letters
ER -