Robust visual tracking with discriminative sparse learning

Xiaoqiang Lu, Yuan Yuan, Pingkun Yan

Research output: Contribution to journal › Article › peer-review

35 Scopus citations

Abstract

Recently, sparse representation for visual tracking has received increasing attention, and many algorithms have been proposed based on it. In these algorithms, each candidate target is sparsely represented by a set of target templates. However, they fail to consider the structural information of the space of target templates, i.e., the target template set. In this paper, we propose a non-local self-similarity (NLSS) based sparse coding algorithm (NLSSC) to learn sparse representations that account for the geometrical structure of the set of target candidates. By using NLSS as a smoothing operator, the proposed method turns tracking into a sparse representation problem in which the information of the set of target candidates is exploited. Extensive experimental results on visual tracking demonstrate the effectiveness of the proposed algorithm.
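The core step described above, representing each candidate target as a sparse combination of target templates, can be sketched as an L1-regularized least-squares (lasso) problem. The following is a minimal illustrative sketch, not the paper's NLSSC algorithm: it omits the non-local self-similarity smoothing term and solves plain sparse coding with ISTA (iterative soft-thresholding). All dictionary values, parameters, and function names are hypothetical.

```python
# Hedged sketch: sparse representation of a candidate target y over a
# dictionary D whose columns are target templates, solved by ISTA.
# This illustrates the generic sparse-coding step only; the paper's NLSSC
# additionally regularizes with a non-local self-similarity term.

def ista_lasso(D, y, lam=0.1, step=0.1, iters=500):
    """Approximately solve min_x 0.5*||D x - y||^2 + lam*||x||_1.

    D: m x n matrix as a list of rows, y: length-m list.
    Returns the sparse coefficient vector x (length n).
    """
    m, n = len(D), len(D[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = D x - y
        r = [sum(D[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # Gradient g = D^T r
        g = [sum(D[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step followed by soft-thresholding (promotes sparsity)
        t = lam * step
        for j in range(n):
            v = x[j] - step * g[j]
            x[j] = (v - t) if v > t else (v + t) if v < -t else 0.0
    return x

# Toy example: three 3-D templates; the candidate y lies close to the
# second template, so its coefficient should dominate.
D = [[1.0, 0.0, 0.5],
     [0.0, 1.0, 0.5],
     [0.0, 0.0, 0.5]]
y = [0.05, 1.0, 0.0]
x = ista_lasso(D, y)
```

In tracking frameworks of this kind, the reconstruction error of each candidate under its sparse code is typically used as the observation likelihood inside a particle filter, so candidates well explained by the target templates score highly.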

Original language: English
Pages (from-to): 1762-1771
Number of pages: 10
Journal: Pattern Recognition
Volume: 46
Issue number: 7
DOIs
State: Published - Jul 2013
Externally published: Yes

Keywords

  • Non-local self-similarity
  • Particle filter
  • Sparse representation
  • Visual tracking

