Abstract
3-D multiobject tracking (MOT) is an important task in numerous applications, including robotics and autonomous driving. Nevertheless, existing 3-D MOT solutions suffer from significant performance degradation under adverse weather conditions. Inspired by the observation that hard samples (e.g., missed detections or wrongly associated objects) contribute more to performance improvement, in this article we leverage hard samples for robust 3-D MOT in adverse weather conditions. Specifically, we implement a cross-modality 3-D MOT framework that learns 3-D region proposals from point clouds and RGB images, respectively. To minimize the risk of missed detections and wrong associations, we introduce an adaptive hard sample mining scheme that aligns the 3-D region proposals produced by the two modalities. We quantify the hardness level of each object by comparing its confidence values across the two branches and its distance in the embedding space. Meanwhile, we dynamically adjust the weights of hard samples during training to enhance representation learning for robust 3-D MOT. Extensive experimental results show that our proposed solution effectively mitigates missed detections and reduces wrong associations while generalizing well.
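The abstract describes hardness as a function of two signals: how much the two modality branches disagree in confidence on the same object, and how far apart the object's embeddings are. The sketch below illustrates one plausible form of such a score and a dynamic loss weight; the function names, the linear combination, and the parameters `alpha` and `gamma` are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

def hardness_score(conf_lidar, conf_rgb, emb_lidar, emb_rgb, alpha=0.5):
    """Illustrative hardness measure (hypothetical): a large confidence gap
    between the LiDAR and RGB branches, or a large distance between the two
    embeddings of the same object, both raise the score. `alpha` (assumed
    parameter) balances the two terms."""
    conf_gap = abs(conf_lidar - conf_rgb)  # cross-branch confidence disagreement
    emb_dist = np.linalg.norm(np.asarray(emb_lidar) - np.asarray(emb_rgb))
    return alpha * conf_gap + (1.0 - alpha) * emb_dist

def sample_weight(score, gamma=2.0):
    """Hypothetical dynamic re-weighting: harder samples receive a larger
    loss weight during training; an easy sample (score 0) keeps weight 1."""
    return (1.0 + score) ** gamma
```

Under this sketch, an object on which both branches agree with identical embeddings scores 0 and keeps the default weight, while disagreement in either signal monotonically increases its influence on the training loss.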
Original language | English |
---|---|
Pages (from-to) | 25268-25282 |
Number of pages | 15 |
Journal | IEEE Internet of Things Journal |
Volume | 11 |
Issue number | 14 |
DOIs | |
State | Published - 2024 |
Keywords
- Adverse weather
- hard sample mining
- multimodality
- object tracking