Learning Cross-Attention Discriminators via Alternating Time-Space Transformers for Visual Tracking

Wuwei Wang, Ke Zhang, Yu Su, Jingyu Wang, Qi Wang

Research output: Contribution to journal › Article › peer-review


Abstract

In the past few years, visual tracking methods based on convolutional neural networks (CNNs) have gained great popularity and success. However, the convolution operation of CNNs struggles to relate spatially distant information, which limits the discriminative power of trackers. Very recently, several Transformer-assisted tracking approaches have emerged to alleviate this issue by combining CNNs with Transformers to enhance feature representation. In contrast to these methods, this article explores a pure Transformer-based model with a novel semi-Siamese architecture: both the time-space self-attention module that constructs the feature extraction backbone and the cross-attention discriminator that estimates the response map rely solely on attention, without convolution. Inspired by recent vision transformers (ViTs), we propose multistage alternating time-space Transformers (ATSTs) to learn robust feature representations. Specifically, temporal and spatial tokens at each stage are alternately extracted and encoded by separate Transformers. A cross-attention discriminator is then proposed to generate response maps of the search region directly, without additional prediction heads or correlation filters. Experimental results show that our ATST-based model performs favorably against state-of-the-art convolutional trackers. Moreover, it achieves performance comparable to that of recent 'CNN + Transformer' trackers on various benchmarks while requiring significantly less training data.
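The abstract describes two attention-only components: a backbone stage that alternates self-attention over the temporal and spatial token axes, and a cross-attention discriminator that turns search-region tokens into a response map. Below is a minimal PyTorch sketch of those two ideas, not the authors' implementation: the module names (AlternatingTimeSpaceStage, CrossAttentionDiscriminator), token shapes, and hyperparameters are illustrative assumptions, and only standard nn.MultiheadAttention blocks are used.

# Illustrative sketch of the two components the abstract describes.
# All names, shapes, and hyperparameters are assumptions, not the paper's code.
import torch
import torch.nn as nn

class AlternatingTimeSpaceStage(nn.Module):
    """One stage: temporal self-attention, then spatial self-attention."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.temporal_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.spatial_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, tokens, dim)
        b, t, n, d = x.shape
        # Temporal attention: relate the same spatial token across frames.
        xt = x.permute(0, 2, 1, 3).reshape(b * n, t, d)
        q = self.norm1(xt)
        xt = xt + self.temporal_attn(q, q, q)[0]
        x = xt.reshape(b, n, t, d).permute(0, 2, 1, 3)
        # Spatial attention: relate tokens within each frame.
        xs = x.reshape(b * t, n, d)
        q = self.norm2(xs)
        xs = xs + self.spatial_attn(q, q, q)[0]
        return xs.reshape(b, t, n, d)

class CrossAttentionDiscriminator(nn.Module):
    """Search tokens attend to template tokens; a linear head then scores
    each search token, yielding a response map with no correlation filter."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, search: torch.Tensor, template: torch.Tensor) -> torch.Tensor:
        # search: (batch, n_search, dim); template: (batch, n_template, dim)
        fused, _ = self.cross_attn(search, template, template)
        return self.score(fused).squeeze(-1)  # (batch, n_search) response map

# Illustrative usage with assumed sizes: 2 frames, 64 tokens, 256-dim features.
stage = AlternatingTimeSpaceStage(dim=256)
tokens = torch.randn(1, 2, 64, 256)
encoded = stage(tokens)                        # (1, 2, 64, 256)
disc = CrossAttentionDiscriminator(dim=256)
response = disc(encoded[:, -1], encoded[:, 0])  # (1, 64) response map

In this factorized sketch, each attention operates over a short sequence (the T frames or the N spatial tokens, rather than T x N joint tokens), which is the usual motivation for alternating time-space attention; the discriminator scores each search token after it attends to the template, so no separate prediction head or correlation filter is needed.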

Original language: English
Pages (from-to): 15156-15169
Number of pages: 14
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 35
Issue number: 11
DOIs
State: Published - 2024

Keywords

  • Cross-attention discriminator
  • multistage Transformers
  • spatiotemporal information
  • visual tracking

