TY - GEN
T1 - Context-Aware Relative Distinctive Feature Learning for Person Re-identification
AU - Yang, Shan
AU - Yang, Hangyuan
AU - Pu, Yanglin
AU - Wang, Yanbin
AU - You, Zhuhong
N1 - Publisher Copyright:
© The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2024.
PY - 2024
Y1 - 2024
N2 - In the context of large-scale crowd monitoring, the presence of visually similar persons significantly increases the complexity of person re-identification tasks. Current research predominantly concentrates on two aspects: fine-grained feature learning and hard example mining. However, these approaches have noticeable shortcomings. Fine-grained feature learning does not sufficiently account for the relativity of distinctive features: the distinguishing features used to differentiate an individual from different persons may vary. The commonly used Triplet Loss requires maintaining a substantial margin in the feature space between visually similar local features of different identities. This, however, contradicts the principle of visual consistency, which states that similar inputs to a neural network should yield closely aligned feature maps in the feature space. Such a contradiction may leave models struggling to fit these samples accurately. To overcome these limitations, we propose a Context-Aware Relative Distinctive Feature Learning methodology for person re-identification. Our model incorporates the Exploring Relative Discriminative Regions with Contextual Awareness Module and the Visually Consistent N-tuple Loss, each specifically designed to address the aforementioned challenges. Experimental results on several commonly used person re-identification datasets support the effectiveness of our approach.
AB - In the context of large-scale crowd monitoring, the presence of visually similar persons significantly increases the complexity of person re-identification tasks. Current research predominantly concentrates on two aspects: fine-grained feature learning and hard example mining. However, these approaches have noticeable shortcomings. Fine-grained feature learning does not sufficiently account for the relativity of distinctive features: the distinguishing features used to differentiate an individual from different persons may vary. The commonly used Triplet Loss requires maintaining a substantial margin in the feature space between visually similar local features of different identities. This, however, contradicts the principle of visual consistency, which states that similar inputs to a neural network should yield closely aligned feature maps in the feature space. Such a contradiction may leave models struggling to fit these samples accurately. To overcome these limitations, we propose a Context-Aware Relative Distinctive Feature Learning methodology for person re-identification. Our model incorporates the Exploring Relative Discriminative Regions with Contextual Awareness Module and the Visually Consistent N-tuple Loss, each specifically designed to address the aforementioned challenges. Experimental results on several commonly used person re-identification datasets support the effectiveness of our approach.
KW - Fine-Grained Feature Learning
KW - Person Re-identification
KW - Relative Distinctive Feature Learning
UR - http://www.scopus.com/inward/record.url?scp=85201078683&partnerID=8YFLogxK
U2 - 10.1007/978-981-97-5603-2_17
DO - 10.1007/978-981-97-5603-2_17
M3 - Conference contribution
AN - SCOPUS:85201078683
SN - 9789819756025
T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
SP - 203
EP - 215
BT - Advanced Intelligent Computing Technology and Applications - 20th International Conference, ICIC 2024, Proceedings
A2 - Huang, De-Shuang
A2 - Pan, Yijie
A2 - Chen, Wei
PB - Springer Science and Business Media Deutschland GmbH
T2 - 20th International Conference on Intelligent Computing, ICIC 2024
Y2 - 5 August 2024 through 8 August 2024
ER -