A Context-Free Method for Robust Grasp Detection: Learning to Overcome Contextual Bias

Yuanhao Li, Panfeng Huang, Zhiqiang Ma, Lu Chen

Research output: Contribution to journal › Article › peer-review

10 Citations (Scopus)

Abstract

Vision-based grasp detection is an important technique for studying robotic grasping tasks. Unfortunately, these methods perform worse in practice than their state-of-the-art accuracy on public datasets suggests, because shifts in data distribution caused by real-world conditions are common and neural network-based methods are sensitive to small changes in the data. These disturbances mainly alter the image texture, causing the performance of grasp detection methods to decline sharply. However, the evaluation metrics of existing models do not reflect the actual robustness of the method. Therefore, we propose a new solution. First, drawing on existing research on image classification, we propose a benchmark to verify the realistic robustness of grasp detection models. Second, to improve the robustness of the model, we randomly transfer texture knowledge from other images to provide variable texture information for network training. This forces the model to rely more on the contour features of the object than on its texture when making decisions; we call this approach 'context-free.' We verify the effectiveness of our method for robustness enhancement on various grasp tasks and test the proposed method in a real robot grasping scene.
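The abstract describes randomly transferring texture from other images during training so that the network must rely on object contours rather than texture. The sketch below is only an illustration of this general idea, not the authors' implementation: it blends a randomly chosen texture donor image into each training sample, with the blend weight `alpha` and the function name `texture_randomize` being hypothetical choices for this example.

```python
# Minimal sketch (not the paper's code): texture-randomization augmentation.
# A random "texture donor" image is blended into the training image so that
# texture cues vary across epochs while object contours stay visible.
import random
from typing import List

import numpy as np
from PIL import Image


def texture_randomize(image: Image.Image,
                      texture_pool: List[Image.Image],
                      alpha: float = 0.4) -> Image.Image:
    """Blend a random texture image into `image`, keeping contours dominant."""
    texture = random.choice(texture_pool).resize(image.size).convert(image.mode)
    img_arr = np.asarray(image, dtype=np.float32)
    tex_arr = np.asarray(texture, dtype=np.float32)
    # Convex combination: the original geometry remains, texture is perturbed.
    mixed = (1.0 - alpha) * img_arr + alpha * tex_arr
    return Image.fromarray(np.clip(mixed, 0.0, 255.0).astype(np.uint8))


# Usage (hypothetical): apply on the fly in the data loader so each epoch
# presents different texture statistics to the grasp detection network.
# textures = [Image.open(p).convert("RGB") for p in texture_paths]
# augmented = texture_randomize(sample_rgb, textures, alpha=0.4)
```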

Original language: English
Pages (from-to): 13121-13130
Number of pages: 10
Journal: IEEE Transactions on Industrial Electronics
Volume: 69
Issue number: 12
DOI
Publication status: Published - 1 Dec 2022
