Tag-Saliency: Combining bottom-up and top-down information for saliency detection

Guokang Zhu, Qi Wang, Yuan Yuan

Research output: Contribution to journal › Article › peer-review

32 Citations (Scopus)

Abstract

In the real world, people tend to pay more attention to things they consider noteworthy while ignoring others. This phenomenon is associated with top-down attention. Modeling this kind of attention has recently attracted much interest in computer vision due to its wide range of practical applications. The majority of existing models are based on eye-tracking or object detection. However, these methods may not apply in practical situations, because eye-movement data cannot always be recorded, and large-scale data sets may contain objects that are difficult to handle. This paper proposes a Tag-Saliency model based on hierarchical image over-segmentation and auto-tagging, which can efficiently extract semantic information from large-scale visual media data. Experimental results on a very challenging data set show that the proposed Tag-Saliency model locates the truly salient regions with a higher probability than its competitors.
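The abstract describes fusing a bottom-up cue (image-driven contrast) with a top-down cue (semantic tags assigned to over-segmented regions). A minimal sketch of such a fusion is given below; the region representation, the tag-relevance scores, and the convex-combination fusion rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hedged sketch: combining bottom-up contrast saliency with top-down
# tag weights over segmented regions. All structures here (the "feature"
# scalar, the tag_scores table, the alpha mixing weight) are assumed for
# illustration only.

def bottom_up_saliency(regions):
    """Score each region by its feature contrast to the mean of all regions."""
    mean = sum(r["feature"] for r in regions) / len(regions)
    return [abs(r["feature"] - mean) for r in regions]

def top_down_saliency(regions, tag_scores):
    """Score each region by the relevance of its auto-assigned tag."""
    return [tag_scores.get(r["tag"], 0.0) for r in regions]

def tag_saliency(regions, tag_scores, alpha=0.5):
    """Fuse both cues with a convex combination (alpha is a free parameter)."""
    bu = bottom_up_saliency(regions)
    td = top_down_saliency(regions, tag_scores)

    def norm(xs):
        # Normalize each cue to [0, 1] so neither dominates the mixture.
        hi = max(xs) or 1.0
        return [x / hi for x in xs]

    bu, td = norm(bu), norm(td)
    return [alpha * b + (1 - alpha) * t for b, t in zip(bu, td)]

# Toy example: three regions with hypothetical features and tags.
regions = [
    {"feature": 0.9, "tag": "person"},  # high contrast, relevant tag
    {"feature": 0.5, "tag": "sky"},     # near the mean, background tag
    {"feature": 0.4, "tag": "grass"},
]
tag_scores = {"person": 1.0, "sky": 0.1, "grass": 0.2}
scores = tag_saliency(regions, tag_scores)
```

In this toy setup the "person" region receives the highest fused score, since both cues agree on it; the actual model operates on a hierarchy of over-segmented regions rather than a flat list.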

Original language: English
Pages (from-to): 40-49
Number of pages: 10
Journal: Computer Vision and Image Understanding
Volume: 118
DOI
Publication status: Published - Jan 2014
Published externally: Yes
