An adaptive self-correction joint training framework for person re-identification with noisy labels

Hui Fu, Ke Zhang, Jingyu Wang

Research output: Contribution to journal › Article › peer-reviewed

9 Citations (Scopus)

Abstract

Current person re-identification (ReID) methods rely heavily on well-annotated training data, and their performance degrades significantly in the presence of noisy labels, which are ubiquitous in real-life scenes. The reason is that noisy labels not only affect the prediction results of the classifier but also impede feature refinement, making it difficult to distinguish between different persons' features. To address these issues, we propose an Adaptive Self-correction Classification (ASC) loss and an Adaptive Margin Self-correction Triplet (AMSTri) loss. Specifically, the ASC loss helps the network produce better predictions by balancing annotations against predicted labels, and pays more attention to minority samples with the help of a focusing factor. The AMSTri loss, in turn, introduces an adaptive margin that varies with sample features to accommodate complex data variations, and uses predicted labels to generate reliable triplets for feature refinement. We then present an end-to-end adaptive self-correction joint training framework incorporating the ASC and AMSTri losses to obtain a robust ReID model. Comprehensive experiments demonstrate that the proposed framework outperforms most existing counterparts.
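The two losses described above can be sketched as follows. This is a minimal illustrative NumPy sketch, not the paper's exact formulation: the blend weight `alpha`, focusing factor `gamma`, and the specific adaptive-margin rule in `amstri_loss` are assumptions introduced here to make the mechanism concrete (label/prediction blending, focal-style re-weighting, and a sample-dependent triplet margin).

```python
import numpy as np

def asc_loss(probs, labels_onehot, alpha=0.5, gamma=2.0):
    """Illustrative Adaptive Self-correction Classification loss.

    Blends the annotated one-hot label with the network's own prediction
    (self-correction), then applies a focal-style focusing factor that
    up-weights hard/minority samples. `alpha` and `gamma` are hypothetical
    hyper-parameter names for this sketch.
    """
    # Corrected soft target: balance annotation against prediction.
    target = alpha * labels_onehot + (1.0 - alpha) * probs
    # Soft cross-entropy against the corrected target.
    ce = -(target * np.log(probs + 1e-12)).sum(axis=1)
    # Probability assigned to the annotated class.
    pt = (probs * labels_onehot).sum(axis=1)
    # Focusing factor emphasizes samples the model finds hard.
    return ((1.0 - pt) ** gamma * ce).mean()

def amstri_loss(anchor, pos, neg, base_margin=0.3):
    """Illustrative Adaptive Margin Self-correction Triplet loss.

    The margin varies with the sample features; this particular margin
    rule is an assumption made for illustration only.
    """
    d_ap = np.linalg.norm(anchor - pos, axis=1)   # anchor-positive distance
    d_an = np.linalg.norm(anchor - neg, axis=1)   # anchor-negative distance
    # Sample-dependent margin that adapts to how separated the pair already is.
    margin = base_margin * (1.0 + np.abs(d_ap - d_an) / (d_ap + d_an + 1e-12))
    return np.maximum(d_ap - d_an + margin, 0.0).mean()
```

In the full framework, the triplets fed to `amstri_loss` would be mined using the network's predicted labels rather than the (possibly noisy) annotations, so both losses share the same self-correction signal.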

Original language: English
Article number: 121771
Journal: Expert Systems with Applications
Volume: 238
DOI
Publication status: Published - 15 Mar 2024
