Saliency based proposal refinement in robotic vision

Lu Chen, Panfeng Huang, Zhou Zhao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

2 Scopus citations

Abstract

Detecting object grasps from a given image has attracted considerable research attention in the field of robotic vision. Although many solutions have been proposed, they tend to focus solely on the detection problem and strongly assume that the object has been placed in an ideal viewing position. In this paper, we propose to refine object proposals based on a saliency measurement. The refinement can be used to improve object detection results and further guide the self-movement of the robotic arm toward a better grasping state. First, we dilate the inaccurate proposal to cover more of the object region and extract the object using a saliency-like evaluation measure. Then, we use superpixel-based sliding windows with various scales and aspect ratios to localize the region with the highest response. Compared with traditional exhaustive sliding-window search, our method reduces the number of sliding windows and hence runs faster. Experiments on a public dataset and real-world tests both verify the effectiveness of our proposed method.
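
The abstract does not give implementation details, so the following is only a minimal sketch of the dilate-then-search idea it describes. It assumes a precomputed saliency map and an initial bounding box, and it substitutes a coarse grid stride for the paper's superpixel-based window placement; the function name `refine_proposal` and all parameters are hypothetical.

```python
import numpy as np

def refine_proposal(saliency, box, dilation=0.3,
                    scales=(0.6, 0.8, 1.0), ratios=(0.75, 1.0, 1.33),
                    stride=8):
    """Re-localize a rough proposal by searching its dilated region for the
    window with the highest mean saliency.

    saliency : HxW float array (assumed precomputed, e.g. spectral residual)
    box      : (x, y, w, h) initial, possibly inaccurate proposal
    """
    H, W = saliency.shape
    x, y, w, h = box

    # 1) Dilate the proposal so the search region covers more of the object.
    mx, my = int(w * dilation), int(h * dilation)
    x0, y0 = max(0, x - mx), max(0, y - my)
    x1, y1 = min(W, x + w + mx), min(H, y + h + my)

    # Integral image of the saliency map for O(1) window sums.
    integ = np.zeros((H + 1, W + 1))
    integ[1:, 1:] = np.cumsum(np.cumsum(saliency, axis=0), axis=1)

    def window_mean(a, b, c, d):  # rows a..c-1, cols b..d-1
        s = integ[c, d] - integ[a, d] - integ[c, b] + integ[a, b]
        return s / max(1, (c - a) * (d - b))

    best, best_box = -1.0, box
    base = max(x1 - x0, y1 - y0)
    # 2) Slide windows of several scales and aspect ratios over the dilated
    #    region (coarse stride here instead of superpixel anchors) and keep
    #    the one with the strongest saliency response.
    for s in scales:
        for r in ratios:
            ww = int(base * s)
            hh = int(ww * r)
            if ww < 8 or hh < 8:
                continue
            for yy in range(y0, max(y0 + 1, y1 - hh + 1), stride):
                for xx in range(x0, max(x0 + 1, x1 - ww + 1), stride):
                    score = window_mean(yy, xx,
                                        min(y1, yy + hh), min(x1, xx + ww))
                    if score > best:
                        best, best_box = score, (xx, yy, ww, hh)
    return best_box, best
```

In this sketch the saliency-driven score replaces exhaustive pixel-level sliding, which is the source of the speed-up claimed in the abstract; a faithful reproduction would place candidate windows on superpixel boundaries rather than on a fixed grid.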

Original language: English
Title of host publication: 2017 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2017
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 85-90
Number of pages: 6
ISBN (Electronic): 9781538620342
DOIs
State: Published - 2 Jul 2017
Event: 2017 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2017 - Okinawa, Japan
Duration: 14 Jul 2017 - 18 Jul 2017

Publication series

Name: 2017 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2017
Volume: 2017-July

Conference

Conference: 2017 IEEE International Conference on Real-Time Computing and Robotics, RCAR 2017
Country/Territory: Japan
City: Okinawa
Period: 14/07/17 - 18/07/17
