Unsupervised Salient Object Detection via Inferring from Imperfect Saliency Models

Abstract
Visual saliency detection has become an active research direction in recent years, and a large number of saliency models that automatically locate objects of interest in images have been developed. Because these models rely on different prior assumptions, image features, and computational methodologies, each has its own strengths and weaknesses and may handle only one or a few types of images well. Motivated by this observation, this paper proposes a novel salient object detection approach that infers a superior model from a variety of imperfect existing saliency models by optimally leveraging the complementary information among them. The proposed approach consists of three steps. First, a number of existing unsupervised saliency models are adopted to provide weak/imperfect saliency predictions for each region in the image. Then, a fusion strategy combines each image region's weak saliency predictions into a strong one by simultaneously considering the performance differences among the weak predictions and the characteristics of different image regions. Finally, a local spatial consistency constraint, which enforces similar saliency labels for neighboring image regions with similar features, is applied to refine the results. Comprehensive experiments on five public benchmark datasets and comparisons with a number of state-of-the-art approaches demonstrate the effectiveness of the proposed work.
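The three-step pipeline described above can be sketched in code. The snippet below is a minimal illustration, not the paper's actual formulation: it assumes the weak predictions arrive as pixel-wise saliency maps, weights each map by its agreement with the consensus (a stand-in for the paper's learned, region-aware fusion), and approximates the local spatial consistency constraint with an iterative feature-weighted neighbor averaging. All function and parameter names here (`fuse_weak_saliency`, `beta`, `sigma`) are hypothetical.

```python
import numpy as np

def fuse_weak_saliency(weak_maps, features, beta=0.5, sigma=0.1, n_iters=10):
    """Hypothetical sketch of the three-step fusion pipeline.

    weak_maps : list of 2-D arrays, one saliency map per weak model.
    features  : 2-D array of per-pixel features (e.g., intensity),
                used to gauge similarity between neighbors.
    """
    # Step 1: stack the weak/imperfect predictions and normalize each to [0, 1].
    maps = np.stack([m.astype(float) for m in weak_maps])
    maps = (maps - maps.min(axis=(1, 2), keepdims=True)) / (
        np.ptp(maps, axis=(1, 2), keepdims=True) + 1e-12)

    # Step 2: fuse into a strong prediction. As a simple proxy for the
    # paper's fusion strategy, weight each weak model by how closely it
    # agrees with the mean (consensus) map.
    consensus = maps.mean(axis=0)
    err = np.mean((maps - consensus) ** 2, axis=(1, 2))
    w = np.exp(-err / (err.mean() + 1e-12))
    w /= w.sum()
    strong = np.tensordot(w, maps, axes=1)

    # Step 3: local spatial consistency. Iteratively blend each pixel with
    # its 4-neighbors, weighting neighbors by feature similarity so that
    # similar-looking neighbors receive similar saliency labels.
    # (np.roll wraps at image borders -- a simplification for brevity.)
    s = strong.copy()
    for _ in range(n_iters):
        acc = np.zeros_like(s)
        wsum = np.zeros_like(s)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            s_nb = np.roll(s, (dy, dx), axis=(0, 1))
            f_nb = np.roll(features, (dy, dx), axis=(0, 1))
            aff = np.exp(-((features - f_nb) ** 2) / (2 * sigma ** 2))
            acc += aff * s_nb
            wsum += aff
        s = (1 - beta) * strong + beta * acc / np.maximum(wsum, 1e-12)
    return s
```

Because both the fused map and the neighbor averages stay in [0, 1] and the update is a convex combination, the refined output remains a valid saliency map in that range.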
| Original language | English |
|---|---|
| Pages (from-to) | 1101-1112 |
| Number of pages | 12 |
| Journal | IEEE Transactions on Multimedia |
| Volume | 20 |
| Issue number | 5 |
| DOIs | |
| State | Published - May 2018 |
Keywords
- Salient object detection
- fusion strategy
- local spatial consistency constraint
- weak prediction