Tag-Saliency: Combining bottom-up and top-down information for saliency detection

Guokang Zhu, Qi Wang, Yuan Yuan

Research output: Contribution to journal › Article › peer-review

32 Scopus citations

Abstract

In the real world, people tend to pay more attention to things they find noteworthy while ignoring others. This phenomenon is associated with top-down attention. Modeling this kind of attention has recently attracted considerable interest in computer vision due to a wide range of practical applications. The majority of existing models are based on eye-tracking or object detection. However, these methods may not apply in practical situations, because eye-movement data cannot always be recorded, and large-scale data sets may contain objects that cannot be reliably detected. This paper proposes a Tag-Saliency model based on hierarchical image over-segmentation and auto-tagging, which can efficiently extract semantic information from large-scale visual media data. Experimental results on a very challenging data set show that the proposed Tag-Saliency model locates the truly salient regions with higher probability than its competitors.
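The core idea of combining bottom-up and top-down cues can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' actual method: it assumes a simple contrast-based bottom-up map, a top-down map built by assigning each over-segmented region the relevance score of its predicted tag, and a convex combination of the two (the `alpha` weight and the `tag_scores` mapping are hypothetical placeholders).

```python
import numpy as np

def bottom_up_saliency(img):
    # Toy bottom-up cue: absolute deviation from the global mean intensity.
    return np.abs(img - img.mean())

def top_down_saliency(segments, tag_scores):
    # Assign each over-segmented region the relevance score of its tag.
    # `segments` holds an integer region label per pixel; `tag_scores`
    # maps region labels to tag relevance (hypothetical auto-tagging output).
    sal = np.zeros(segments.shape, dtype=float)
    for label, score in tag_scores.items():
        sal[segments == label] = score
    return sal

def combine(bu, td, alpha=0.5):
    # Normalize each map to [0, 1], then take a convex combination.
    def norm(m):
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return alpha * norm(bu) + (1 - alpha) * norm(td)

# Usage: a 2x2 "image" with a bright pixel in a highly tagged region.
img = np.array([[0.1, 0.1], [0.1, 0.9]])
segments = np.array([[0, 0], [0, 1]])
fused = combine(bottom_up_saliency(img), top_down_saliency(segments, {0: 0.1, 1: 0.9}))
```

In practice, the paper's model operates on a hierarchical over-segmentation and learns the semantic scores from large-scale auto-tagged media rather than taking them as given.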

Original language: English
Pages (from-to): 40-49
Number of pages: 10
Journal: Computer Vision and Image Understanding
Volume: 118
DOIs
State: Published - Jan 2014
Externally published: Yes

Keywords

  • Computer vision
  • Image tagging
  • Saliency detection
  • Semantic
  • Visual attention
  • Visual media
