TY - JOUR
T1 - An Improved SAC-Based Deep Reinforcement Learning Framework for Collaborative Pushing and Grasping in Underwater Environments
AU - Gao, Jian
AU - Li, Yufeng
AU - Chen, Yimin
AU - He, Yaozhen
AU - Guo, Jingwei
N1 - Publisher Copyright:
© 1963-2012 IEEE.
PY - 2024
Y1 - 2024
N2 - Autonomous grasping is a fundamental task for underwater robots, but directly grasping tightly stacked objects leads to collisions and grasp failures, so pushing actions are required to separate the target object and increase grasp success (GS) rates. Hence, this article proposes a novel approach that employs an improved soft actor-critic (SAC) algorithm within a deep reinforcement learning (RL) framework to achieve collaborative pushing and grasping actions. The developed scheme employs an end-to-end control strategy that maps input images to actions. Specifically, an attention mechanism is introduced in the visual perception module to extract the features necessary for pushing and grasping actions, enhancing the training strategy. Moreover, a novel pushing reward function is designed, comprising a per-object distribution function around the target and a global object distribution assessment network named PA-Net. Furthermore, an enhanced experience replay strategy is introduced to address the sparsity of grasp action rewards. Finally, a training environment for underwater manipulators is established, in which variations in light, water flow noise, and pressure effects are incorporated to simulate underwater working conditions more realistically. Simulation and real-world experiments demonstrate that the proposed learning strategy efficiently separates target objects and avoids inefficient pushing actions, achieving a significantly higher GS rate.
KW - Attention mechanism
KW - collaborative actions
KW - deep reinforcement learning (RL)
KW - pushing-grasping
KW - reward function
KW - underwater manipulator
UR - http://www.scopus.com/inward/record.url?scp=85188433372&partnerID=8YFLogxK
U2 - 10.1109/TIM.2024.3379048
DO - 10.1109/TIM.2024.3379048
M3 - Article
AN - SCOPUS:85188433372
SN - 0018-9456
VL - 73
SP - 1
EP - 14
JO - IEEE Transactions on Instrumentation and Measurement
JF - IEEE Transactions on Instrumentation and Measurement
M1 - 2512814
ER -