A Context-Free Method for Robust Grasp Detection: Learning to Overcome Contextual Bias

Yuanhao Li, Panfeng Huang, Zhiqiang Ma, Lu Chen

Research output: Contribution to journal › Article › peer-review


Abstract

Vision-based grasp detection is an important technique for robotic grasping. Unfortunately, these methods often perform worse in practice than their state-of-the-art accuracy on public datasets suggests: shifts in data distribution are common under real-world conditions, and neural network-based methods are sensitive to even small changes in the input. Such disturbances mainly alter image texture, causing the performance of grasp detection methods to decline sharply. Moreover, the evaluation metrics of existing models do not reflect the actual robustness of a method. We therefore propose a new solution. First, drawing on existing research on image classification, we propose a benchmark to verify the realistic robustness of grasp detection models. Second, to improve model robustness, we randomly transfer texture knowledge from other images to provide variable texture information during network training. This forces the model to rely more on an object's contour features than on its texture when making decisions; we call this approach 'context-free.' We verify the effectiveness of our method for robustness enhancement on various grasp tasks and test it in a real robot grasping scene.
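The texture-transfer augmentation described above could be sketched as follows. Note that the paper's exact "context-free" transfer procedure is not reproduced here; this is a minimal illustrative sketch, assuming a simple convex blend of a training image with a randomly cropped texture image, so that texture cues vary while object contours remain visible. The function name and parameters are hypothetical.

```python
import numpy as np

def texture_randomize(image, texture, alpha=0.5, rng=None):
    """Blend a random texture crop into an image to weaken texture cues.

    Hypothetical sketch (not the paper's exact method): mixing in foreign
    texture forces a downstream network to rely more on contours than on
    surface texture when trained on the augmented images.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    th, tw = texture.shape[:2]
    # Randomly crop the (larger) texture image to the training-image size.
    ys = int(rng.integers(0, th - h + 1))
    xs = int(rng.integers(0, tw - w + 1))
    patch = texture[ys:ys + h, xs:xs + w]
    # Convex blend: contours stay visible, texture statistics change.
    return ((1 - alpha) * image.astype(np.float32)
            + alpha * patch.astype(np.float32)).astype(image.dtype)

# Usage: augment a uniform "object" image with a noise texture.
img = np.full((32, 32, 3), 128, dtype=np.uint8)
tex = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3),
                                        dtype=np.uint8)
aug = texture_randomize(img, tex, alpha=0.5,
                        rng=np.random.default_rng(1))
```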

Original language: English
Pages (from-to): 13121-13130
Number of pages: 10
Journal: IEEE Transactions on Industrial Electronics
Volume: 69
Issue number: 12
DOIs
State: Published - 1 Dec 2022

Keywords

  • Context-free
  • deep neural network
  • grasp robustness
  • robotic grasp detection
