TY - GEN
T1 - Pluggable Weakly-Supervised Cross-View Learning for Accurate Vehicle Re-Identification
AU - Yang, Lu
AU - Liu, Hongbang
AU - Liu, Lingqiao
AU - Zhou, Jinghao
AU - Zhang, Lei
AU - Wang, Peng
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 2022 ACM.
PY - 2022/6/27
Y1 - 2022/6/27
N2 - Learning cross-view consistent feature representation is key to accurate vehicle Re-identification (ReID), since the visual appearance of vehicles changes significantly under different viewpoints. To this end, many existing approaches resort to supervised cross-view learning using extensive extra viewpoint annotations, which, however, is difficult to deploy in real applications due to the expensive labelling cost and the continuous viewpoint variation that makes it hard to define discrete viewpoint labels. In this study, we present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID. By hallucinating the cross-view samples as the hardest positive counterparts with small luminance difference and large local feature variance, we can learn consistent feature representations by minimizing the cross-view feature distance based on vehicle IDs only, without using any viewpoint annotation. More importantly, the proposed method can be seamlessly plugged into most existing vehicle ReID baselines for cross-view learning without re-training the baselines. To demonstrate its efficacy, we plug the proposed method into a number of off-the-shelf baselines and obtain significant performance improvements on four public benchmark datasets, i.e., VeRi-776, VehicleID, VRIC and VRAI.
AB - Learning cross-view consistent feature representation is key to accurate vehicle Re-identification (ReID), since the visual appearance of vehicles changes significantly under different viewpoints. To this end, many existing approaches resort to supervised cross-view learning using extensive extra viewpoint annotations, which, however, is difficult to deploy in real applications due to the expensive labelling cost and the continuous viewpoint variation that makes it hard to define discrete viewpoint labels. In this study, we present a pluggable Weakly-supervised Cross-View Learning (WCVL) module for vehicle ReID. By hallucinating the cross-view samples as the hardest positive counterparts with small luminance difference and large local feature variance, we can learn consistent feature representations by minimizing the cross-view feature distance based on vehicle IDs only, without using any viewpoint annotation. More importantly, the proposed method can be seamlessly plugged into most existing vehicle ReID baselines for cross-view learning without re-training the baselines. To demonstrate its efficacy, we plug the proposed method into a number of off-the-shelf baselines and obtain significant performance improvements on four public benchmark datasets, i.e., VeRi-776, VehicleID, VRIC and VRAI.
KW - cross-view feature
KW - vehicle re-identification
KW - weakly supervised
UR - http://www.scopus.com/inward/record.url?scp=85134023048&partnerID=8YFLogxK
U2 - 10.1145/3512527.3531357
DO - 10.1145/3512527.3531357
M3 - Conference contribution
AN - SCOPUS:85134023048
T3 - ICMR 2022 - Proceedings of the 2022 International Conference on Multimedia Retrieval
SP - 81
EP - 89
BT - ICMR 2022 - Proceedings of the 2022 International Conference on Multimedia Retrieval
PB - Association for Computing Machinery, Inc
T2 - 2022 International Conference on Multimedia Retrieval, ICMR 2022
Y2 - 27 June 2022 through 30 June 2022
ER -