Visual Tracking Based on the Adaptive Color Attention Tuned Sparse Generative Object Model

Chunna Tian, Xinbo Gao, Wei Wei, Hong Zheng

Research output: Contribution to journal › Article › peer-review

14 Scopus citations

Abstract

This paper presents a new visual tracking framework based on an adaptive color attention tuned local sparse model. The histograms of sparse coefficients of all patches in an object are pooled together according to their spatial distribution. A particle filter is used as the location model to predict candidates for object verification during tracking. Since color is an important visual cue for distinguishing objects from the background, we calculate the color similarity between the object in previous frames and each candidate in the current frame, and adopt it as a color attention that tunes the local sparse representation-based appearance similarity between the object template and the candidates. The color similarity can be computed efficiently with hash-coded color names, which helps the tracker identify more reliable objects during tracking. We use a flexible local sparse coding of the object to evaluate the degree of degeneration of the appearance model, on which we build a model-updating mechanism that alleviates drift caused by temporally varying factors. Experiments on 76 challenging benchmark color sequences, evaluated under the object tracking benchmark protocol, demonstrate that the proposed tracker outperforms state-of-the-art methods in accuracy.
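The color-attention idea in the abstract can be illustrated with a minimal sketch. The code below is an illustrative assumption, not the authors' implementation: it assumes pixels have already been mapped to the 11 color-name labels, uses histogram intersection as the color-similarity score, and multiplicatively tunes a given appearance similarity with it. All function names and the choice of histogram intersection are hypothetical.

```python
import numpy as np

def color_name_histogram(pixel_labels, n_names=11):
    """Normalized histogram over color-name labels.

    pixel_labels: integer array of per-pixel color-name indices
    (the color-names descriptor maps RGB values to 11 linguistic
    color labels; that mapping step is assumed done upstream).
    """
    hist = np.bincount(pixel_labels, minlength=n_names).astype(float)
    return hist / max(hist.sum(), 1e-12)

def color_attention(obj_hist, cand_hist):
    # Histogram intersection: a simple similarity in [0, 1],
    # 1 when the color distributions coincide.
    return float(np.minimum(obj_hist, cand_hist).sum())

def tuned_similarity(appearance_sim, obj_hist, cand_hist):
    # Color attention multiplicatively tunes the sparse-
    # representation-based appearance similarity, down-weighting
    # candidates whose color statistics diverge from the object.
    return appearance_sim * color_attention(obj_hist, cand_hist)
```

Under this sketch, a candidate whose color-name distribution matches the object's keeps its full appearance score, while a color-mismatched distractor with a similar appearance score is suppressed, which is the drift-reduction effect the abstract attributes to color attention.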

Original language: English
Article number: 7270300
Pages (from-to): 5236-5248
Number of pages: 13
Journal: IEEE Transactions on Image Processing
Volume: 24
Issue number: 12
DOIs
State: Published - 1 Dec 2015

Keywords

  • Adaptive color attention
  • color names
  • local sparse representation
  • visual tracking

