TY - JOUR
T1 - A Novel Generative Convolutional Neural Network for Robot Grasp Detection on Gaussian Guidance
AU - Li, Yuanhao
AU - Liu, Yu
AU - Ma, Zhiqiang
AU - Huang, Panfeng
N1 - Publisher Copyright:
© 1963-2012 IEEE.
PY - 2022
Y1 - 2022
N2 - Vision-based grasp detection is an important research direction in the field of robotics. However, due to the limitations of the rectangle metric used to evaluate detected grasp rectangles, false-positive grasps occur, causing real-world robot grasping tasks to fail. In this article, we propose a novel generative convolutional neural network model to improve the accuracy and robustness of robot grasp detection in real-world scenes. First, a Gaussian-based guided training method encodes the quality of the grasp point and grasp angle in the grasp pose, highlighting the highest-quality grasp point position and grasp angle and reducing the generation of false-positive grasps. Simultaneously, deformable convolution extracts the shape features of the object to guide the subsequent network toward the grasp position. Furthermore, a global-local feature fusion method is introduced to efficiently obtain finer features during the feature reconstruction stage, allowing the network to focus on the features of the grasped objects. On the Cornell Grasping and Jacquard datasets, our method achieves excellent detection accuracies of 99.0% and 95.9%, respectively. Finally, the proposed method is validated in a real-world robot grasping scenario.
AB - Vision-based grasp detection is an important research direction in the field of robotics. However, due to the limitations of the rectangle metric used to evaluate detected grasp rectangles, false-positive grasps occur, causing real-world robot grasping tasks to fail. In this article, we propose a novel generative convolutional neural network model to improve the accuracy and robustness of robot grasp detection in real-world scenes. First, a Gaussian-based guided training method encodes the quality of the grasp point and grasp angle in the grasp pose, highlighting the highest-quality grasp point position and grasp angle and reducing the generation of false-positive grasps. Simultaneously, deformable convolution extracts the shape features of the object to guide the subsequent network toward the grasp position. Furthermore, a global-local feature fusion method is introduced to efficiently obtain finer features during the feature reconstruction stage, allowing the network to focus on the features of the grasped objects. On the Cornell Grasping and Jacquard datasets, our method achieves excellent detection accuracies of 99.0% and 95.9%, respectively. Finally, the proposed method is validated in a real-world robot grasping scenario.
KW - Gaussian-based guided training (GGT)
KW - global-local feature fusion (GLFF)
KW - robotic grasp detection
UR - http://www.scopus.com/inward/record.url?scp=85137611835&partnerID=8YFLogxK
U2 - 10.1109/TIM.2022.3203118
DO - 10.1109/TIM.2022.3203118
M3 - Article
AN - SCOPUS:85137611835
SN - 0018-9456
VL - 71
JO - IEEE Transactions on Instrumentation and Measurement
JF - IEEE Transactions on Instrumentation and Measurement
M1 - 2517510
ER -