Detection of Co-salient Objects by Looking Deep and Wide

Dingwen Zhang, Junwei Han, Chao Li, Jingdong Wang, Xuelong Li

Research output: Contribution to journal › Article › peer-review

304 Scopus citations

Abstract

In this paper, we propose a unified co-salient object detection framework by introducing two novel insights: (1) looking deep to transfer higher-level representations by using a convolutional neural network with additional adaptive layers, which could better reflect the semantic properties of the co-salient objects; (2) looking wide to take advantage of visually similar neighbors from other image groups, which could effectively suppress the influence of the common background regions. The deep and wide information is explored for the object proposal windows extracted from each image. The window-level co-saliency scores are calculated by integrating the intra-image contrast, the intra-group consistency, and the inter-group separability via a principled Bayesian formulation, and are then converted to superpixel-level co-saliency maps through a foreground region agreement strategy. Comprehensive experiments on two existing datasets and one newly established dataset demonstrate the consistent performance gain of the proposed approach.
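To make the scoring pipeline in the abstract concrete, below is a minimal illustrative sketch, not the paper's actual formulation: it assumes three per-window cue scores (intra-image contrast, intra-group consistency, inter-group separability) are fused multiplicatively in a Bayesian-posterior style, and that superpixel-level co-saliency is then obtained by averaging the scores of the proposal windows covering each superpixel. All function names, the coverage matrix, and the simple averaging rule are hypothetical.

```python
import numpy as np

def window_cosaliency(contrast, consistency, separability):
    # Fuse the three per-window cues multiplicatively (Bayesian-posterior style)
    # and renormalise to [0, 1]. The paper's actual likelihood terms are more
    # elaborate; this only illustrates the principle of combining intra-image,
    # intra-group, and inter-group evidence.
    score = contrast * consistency * separability
    return score / (score.max() + 1e-12)

def superpixel_cosaliency(window_scores, coverage):
    # coverage: (n_windows, n_superpixels) boolean matrix, True where a proposal
    # window covers a superpixel. A superpixel's co-saliency here is the average
    # score of the windows covering it, a crude stand-in for the paper's
    # foreground region agreement strategy.
    mass = coverage.T.astype(float) @ window_scores   # per-superpixel evidence
    hits = coverage.sum(axis=0).clip(min=1)           # number of covering windows
    sal = mass / hits
    return sal / (sal.max() + 1e-12)

# Toy usage: random cues for 5 proposal windows over 8 superpixels.
rng = np.random.default_rng(0)
w = window_cosaliency(rng.random(5), rng.random(5), rng.random(5))
cov = rng.random((5, 8)) > 0.5
print(superpixel_cosaliency(w, cov))
```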

Original language: English
Pages (from-to): 215-232
Number of pages: 18
Journal: International Journal of Computer Vision
Volume: 120
Issue number: 2
DOIs
State: Published - 1 Nov 2016

Keywords

  • Bayesian framework
  • Co-saliency detection
  • Domain adaptive convolutional neural network
