Modeling of Multiple Spatial-Temporal Relations for Robust Visual Object Tracking

Shilei Wang, Zhenhua Wang, Qianqian Sun, Gong Cheng, Jifeng Ning

Research output: Contribution to journal › Article › peer-review


Abstract

Recently, one-stream trackers have achieved parallel feature extraction and relation modeling by exploiting Transformer-based architectures. This design greatly improves tracking performance. However, because one-stream trackers often overlook crucial tracking cues beyond the template, they are prone to unsatisfactory results in complex tracking scenarios. To tackle these challenges, we propose a multi-cue single-stream tracker, dubbed MCTrack, which seamlessly integrates template information, the historical trajectory, historical frames, and the search region for synchronized feature extraction and relation modeling. To achieve this, we employ two types of encoders to convert the template, historical frames, search region, and historical trajectory into tokens, which are then collectively fed into a Transformer architecture. To distill temporal and spatial cues, we introduce a novel adaptive update mechanism that incorporates a thresholding component and a local multi-peak component to filter out inaccurate or heavily disturbed tracking cues. Empirically, MCTrack achieves leading performance on mainstream benchmark datasets, surpassing the state-of-the-art SeqTrack by 2.0% in terms of the AO metric on GOT-10k. The code is available at https://github.com/wsumel/MCTrack.
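The adaptive update mechanism described above can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the authors' implementation: the function name `should_update`, the thresholds, and the suppression radius are all illustrative assumptions. It gates whether a new tracking cue enters the tracker's memory by (a) a thresholding component on peak confidence and (b) a local multi-peak component that rejects frames where a competing peak (a likely distractor) appears in the response map.

```python
import numpy as np

def should_update(score_map, conf_thresh=0.6, peak_ratio=0.8, suppress=2):
    """Decide whether a frame's tracking cue is reliable enough to keep.

    score_map   : 2D response map over the search region.
    conf_thresh : minimum confidence of the main peak (thresholding component).
    peak_ratio  : reject if a second peak exceeds this fraction of the main
                  peak (local multi-peak component).
    suppress    : half-width of the neighborhood masked around the main peak.
    All parameter names and defaults are illustrative assumptions.
    """
    h, w = score_map.shape
    top = float(score_map.max())
    if top < conf_thresh:                 # thresholding component
        return False
    y, x = np.unravel_index(np.argmax(score_map), score_map.shape)
    masked = score_map.copy()
    y0, y1 = max(0, y - suppress), min(h, y + suppress + 1)
    x0, x1 = max(0, x - suppress), min(w, x + suppress + 1)
    masked[y0:y1, x0:x1] = -np.inf        # suppress the main peak's neighborhood
    second = float(masked.max())
    return second < peak_ratio * top      # local multi-peak component
```

Under this gate, a confident single-peak response updates the historical cues, while a low-confidence frame or one with a strong secondary peak is filtered out.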

Original language: English
Pages (from-to): 5073-5085
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Volume: 33
DOIs
State: Published - 2024

Keywords

  • adaptive update
  • spatial-temporal modeling
  • transformer
  • Visual object tracking
