TY - JOUR
T1 - A Context-Free Method for Robust Grasp Detection
T2 - Learning to Overcome Contextual Bias
AU - Li, Yuanhao
AU - Huang, Panfeng
AU - Ma, Zhiqiang
AU - Chen, Lu
N1 - Publisher Copyright:
© 1982-2012 IEEE.
PY - 2022/12/1
AB - Vision-based grasp detection is an important technique for robotic grasping. Unfortunately, these methods perform worse in practice than their state-of-the-art accuracy on public datasets suggests, because shifts in data distribution are common under real-world conditions and neural-network-based methods are sensitive to small changes in the data. Such disturbances mainly alter image texture, causing the performance of grasp detection methods to decline sharply. Moreover, the evaluation metrics of existing models do not reflect their actual robustness. We therefore propose a new solution. First, drawing on existing research on image classification, we propose a benchmark to verify the realistic robustness of grasp detection models. Second, to improve model robustness, we randomly transfer texture knowledge from other images to provide variable texture information during network training. This forces the model to rely more on the contour features of an object than on its texture when making decisions; we call this approach 'context-free.' We verify the effectiveness of our method for robustness enhancement on various grasp tasks and test it in a real robot grasping scene.
KW - Context-free
KW - deep neural network
KW - grasp robustness
KW - robotic grasp detection
UR - http://www.scopus.com/inward/record.url?scp=85121825157&partnerID=8YFLogxK
DO - 10.1109/TIE.2021.3134078
M3 - Article
AN - SCOPUS:85121825157
SN - 0278-0046
VL - 69
SP - 13121
EP - 13130
JF - IEEE Transactions on Industrial Electronics
IS - 12
ER -