TY - JOUR
T1 - Objective-oriented efficient robotic manipulation
T2 - A novel algorithm for real-time grasping in cluttered scenes
AU - Li, Yufeng
AU - Gao, Jian
AU - Chen, Yimin
AU - He, Yaozhen
N1 - Publisher Copyright:
© 2025 Elsevier Ltd
PY - 2025/4
Y1 - 2025/4
N2 - Autonomously grasping unknown objects in unstructured environments is challenging for robotic manipulators, primarily due to variable environmental conditions and the unpredictable orientations of objects. To address this issue, this paper proposes a grasping algorithm that segments the target object from a single view of the scene and generates collision-free 6-DOF (degrees of freedom) grasping poses. Initially, we develop a YOLO-CMA algorithm for object recognition in dense scenes. Building upon this, a point cloud segmentation algorithm based on the object detection results is used to extract the target object from the scene. Following this, a learning network is designed that takes into account both the target point cloud and the global point cloud. This network performs grasping pose generation, grasping pose scoring, and grasping pose collision detection. We integrate these grasping candidates with our bespoke online algorithm to select the optimal grasping pose. Recognition results in dense scenes demonstrate that the proposed YOLO-CMA structure achieves better classification. Furthermore, real-world experiments with a UR3 manipulator indicate that the proposed method achieves real-time grasping of objects, with a grasping success rate of 88.3% and a completion rate of 93.3% in cluttered environments.
AB - Autonomously grasping unknown objects in unstructured environments is challenging for robotic manipulators, primarily due to variable environmental conditions and the unpredictable orientations of objects. To address this issue, this paper proposes a grasping algorithm that segments the target object from a single view of the scene and generates collision-free 6-DOF (degrees of freedom) grasping poses. Initially, we develop a YOLO-CMA algorithm for object recognition in dense scenes. Building upon this, a point cloud segmentation algorithm based on the object detection results is used to extract the target object from the scene. Following this, a learning network is designed that takes into account both the target point cloud and the global point cloud. This network performs grasping pose generation, grasping pose scoring, and grasping pose collision detection. We integrate these grasping candidates with our bespoke online algorithm to select the optimal grasping pose. Recognition results in dense scenes demonstrate that the proposed YOLO-CMA structure achieves better classification. Furthermore, real-world experiments with a UR3 manipulator indicate that the proposed method achieves real-time grasping of objects, with a grasping success rate of 88.3% and a completion rate of 93.3% in cluttered environments.
KW - Deep learning
KW - Grasping pose detection
KW - Regional point cloud
KW - Target detection
UR - http://www.scopus.com/inward/record.url?scp=85218894771&partnerID=8YFLogxK
U2 - 10.1016/j.compeleceng.2025.110190
DO - 10.1016/j.compeleceng.2025.110190
M3 - Article
AN - SCOPUS:85218894771
SN - 0045-7906
VL - 123
JO - Computers and Electrical Engineering
JF - Computers and Electrical Engineering
M1 - 110190
ER -