TY - GEN
T1 - Enhancing Visible-Infrared Person Re-identification with Modality- and Instance-aware Visual Prompt Learning
AU - Wu, Ruiqi
AU - Jiao, Bingliang
AU - Wang, Wenxuan
AU - Liu, Meng
AU - Wang, Peng
N1 - Publisher Copyright:
© 2024 Copyright held by the owner/author(s).
PY - 2024/6/7
Y1 - 2024/6/7
N2 - Visible-Infrared Person Re-identification (VI ReID) aims to match visible and infrared images of the same pedestrians across non-overlapping camera views. These two input modalities contain both invariant information, such as shape, and modality-specific details, such as color. An ideal model should exploit valuable information from both modalities during training for enhanced representational capability. However, the gap caused by modality-specific information poses substantial challenges for a VI ReID model handling distinct modality inputs simultaneously. To address this, we introduce the Modality-aware and Instance-aware Visual Prompts (MIP) network, designed to effectively utilize both invariant and specific information for identification. Specifically, our MIP model is built on the transformer architecture. In this model, we design a series of modality-specific prompts that enable our model to adapt to and exploit the specific information inherent in different modality inputs, thereby reducing the interference caused by the modality gap and achieving better identification. In addition, we employ each pedestrian feature to construct a group of instance-specific prompts. These customized prompts dynamically guide our model to adapt to each pedestrian instance, thereby capturing identity-level discriminative clues for identification. Extensive experiments on the SYSU-MM01 and RegDB datasets validate the effectiveness of both designed modules. Moreover, our proposed MIP outperforms most state-of-the-art methods.
AB - Visible-Infrared Person Re-identification (VI ReID) aims to match visible and infrared images of the same pedestrians across non-overlapping camera views. These two input modalities contain both invariant information, such as shape, and modality-specific details, such as color. An ideal model should exploit valuable information from both modalities during training for enhanced representational capability. However, the gap caused by modality-specific information poses substantial challenges for a VI ReID model handling distinct modality inputs simultaneously. To address this, we introduce the Modality-aware and Instance-aware Visual Prompts (MIP) network, designed to effectively utilize both invariant and specific information for identification. Specifically, our MIP model is built on the transformer architecture. In this model, we design a series of modality-specific prompts that enable our model to adapt to and exploit the specific information inherent in different modality inputs, thereby reducing the interference caused by the modality gap and achieving better identification. In addition, we employ each pedestrian feature to construct a group of instance-specific prompts. These customized prompts dynamically guide our model to adapt to each pedestrian instance, thereby capturing identity-level discriminative clues for identification. Extensive experiments on the SYSU-MM01 and RegDB datasets validate the effectiveness of both designed modules. Moreover, our proposed MIP outperforms most state-of-the-art methods.
KW - Cross-Modality Person Re-Identification
KW - Visible-Infrared Person Re-Identification
KW - Visual Prompt Learning
UR - https://www.scopus.com/pages/publications/85199166955
U2 - 10.1145/3652583.3658109
DO - 10.1145/3652583.3658109
M3 - Conference contribution
AN - SCOPUS:85199166955
T3 - ICMR 2024 - Proceedings of the 2024 International Conference on Multimedia Retrieval
SP - 579
EP - 588
BT - ICMR 2024 - Proceedings of the 14th Annual ACM International Conference on Multimedia Retrieval
PB - Association for Computing Machinery, Inc
T2 - 14th Annual ACM International Conference on Multimedia Retrieval, ICMR 2024
Y2 - 10 June 2024 through 14 June 2024
ER -