LINR: A Plug-and-Play Local Implicit Neural Representation Module for Visual Object Tracking

Yao Chen, Guancheng Jia, Yufei Zha, Peng Zhang, Yanning Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Current one-stream trackers suffer from limitations in distinguishing targets from complex backgrounds owing to their uniform token division strategy. By treating all regions equally, these methods allocate inadequate attention to crucial target details while overemphasizing redundant background information. Consequently, their performance deteriorates significantly in scenarios involving similar distractors or background clutter. In this work, we propose a Local Implicit Neural Representation (LINR) module specifically designed for local fine-grained object modeling. It consists of two key modules: (1) Local Window Selection: leveraging template-guided CNN-based cross-correlation, it accurately identifies crucial target-relevant regions, reducing background redundancy and computational burden. (2) INR-based Window Refinement: using implicit neural networks, it optimizes token density and spatial continuity to improve local fine-grained instance-level representations, enhancing discrimination between the target and the background. Moreover, the LINR module exhibits three remarkable advantages as a generalized enhancement for visual tracking. First, it is plug-and-play, seamlessly integrating into existing one-stream trackers, both non-real-time and real-time, without architectural modifications, while achieving significant performance improvements. Second, it is highly portable, since it introduces no new loss functions, additional training strategies, or extra data. Third, it is efficiency-friendly, having minimal impact on model parameters and tracking speed; e.g., AQATrack-LINR increases the parameter count by only 1.9% and reduces the tracking speed by only 6 fps. We incorporate the LINR module into two non-real-time trackers, OSTrack based on ViT-B and AQATrack based on HiViT-B, and one real-time tracker, FERMT based on ViT-tiny.
The resultant OSTrack-LINR, AQATrack-LINR, and FERMT-LINR achieve state-of-the-art performance across seven widely used datasets, including TrackingNet, LaSOT, and NFS30.
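The two-step pipeline described in the abstract can be illustrated with a minimal NumPy sketch. This is a hypothetical simplification, not the paper's implementation: `select_local_windows` scores non-overlapping windows of a search feature map by cross-correlation against a pooled template descriptor and keeps the top-k (the Local Window Selection idea), and `inr_refine` queries a tiny coordinate-conditioned MLP with untrained random weights at a denser grid than the input window (the INR-based refinement idea of producing continuous, higher-density local representations). All function names, shapes, and the pooling/encoding choices are assumptions for illustration.

```python
import numpy as np

def select_local_windows(search_feat, template_feat, win=4, k=2):
    """Score each win x win window of the search feature map by
    cross-correlation with a pooled template descriptor and keep the
    top-k window origins. Hypothetical stand-in for template-guided
    Local Window Selection."""
    H, W, C = search_feat.shape
    t = template_feat.mean(axis=(0, 1))      # pooled template descriptor, shape (C,)
    corr = search_feat @ t                   # per-location correlation, shape (H, W)
    scores = []
    for i in range(0, H, win):
        for j in range(0, W, win):
            scores.append((corr[i:i + win, j:j + win].mean(), (i, j)))
    scores.sort(key=lambda s: -s[0])         # highest mean correlation first
    return [pos for _, pos in scores[:k]]

def inr_refine(window_feat, upscale=2, rng=None):
    """Query a coordinate-conditioned MLP (an implicit neural
    representation) on a grid upscale-times denser than the window.
    Weights are random here -- in the paper they would be learned."""
    rng = rng or np.random.default_rng(0)
    h, w, C = window_feat.shape
    W1 = rng.standard_normal((C + 2, C)) * 0.1   # untrained weights: illustration only
    ys = np.linspace(0, h - 1, h * upscale)
    xs = np.linspace(0, w - 1, w * upscale)
    out = np.empty((len(ys), len(xs), C))
    for a, y in enumerate(ys):
        for b, x in enumerate(xs):
            f = window_feat[int(round(y)), int(round(x))]   # nearest-neighbour sample
            inp = np.concatenate([f, [y / h, x / w]])       # feature + continuous coords
            out[a, b] = np.tanh(inp @ W1)
    return out

# Usage: pick the two most template-like 4x4 windows of a 16x16 feature
# map, then refine one of them to an 8x8 token grid.
rng = np.random.default_rng(1)
search = rng.standard_normal((16, 16, 8))
template = rng.standard_normal((4, 4, 8))
windows = select_local_windows(search, template, win=4, k=2)
i, j = windows[0]
refined = inr_refine(search[i:i + 4, j:j + 4])   # shape (8, 8, 8)
```

Only the selected windows are refined at higher token density, which is consistent with the abstract's claim of a small parameter and speed overhead: the dense computation is confined to a few target-relevant regions instead of the full search area.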

Keywords

  • Fine-grained object modeling
  • Implicit neural representation
  • Visual object tracking
