TY - GEN
T1 - Tactile Active Inference Reinforcement Learning for Efficient Robotic Manipulation Skill Acquisition
AU - Liu, Zihao
AU - Liu, Xing
AU - Zhang, Yizhai
AU - Liu, Zhengxiong
AU - Huang, Panfeng
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Robotic manipulation holds the potential to replace humans in the execution of tedious or dangerous tasks. However, control-based approaches are unsuitable because open-world manipulation is difficult to describe formally, and existing learning methods are inefficient. Consequently, applying robotic manipulation across a wide range of scenarios remains challenging. In this study, we propose a novel framework for skill learning in robotic manipulation called Tactile Active Inference Reinforcement Learning (TactileAIRL), aimed at achieving efficient learning. To enhance the performance of reinforcement learning (RL), we introduce active inference, which integrates model-based techniques and intrinsic curiosity into the RL process. This integration improves the algorithm's training efficiency and adaptability to sparse rewards. Additionally, we design universal tactile static and dynamic features based on vision-based tactile sensors, making our framework scalable to a wide range of manipulation learning tasks involving tactile feedback. Simulation results demonstrate that our method achieves significantly higher training efficiency in object-pushing tasks. It enables agents to excel in both dense- and sparse-reward tasks with only a few interaction episodes, surpassing the SAC baseline. Furthermore, we conduct physical experiments on a gripper screwing task using our method, which showcases the algorithm's rapid learning capability and its potential for practical applications.
AB - Robotic manipulation holds the potential to replace humans in the execution of tedious or dangerous tasks. However, control-based approaches are unsuitable because open-world manipulation is difficult to describe formally, and existing learning methods are inefficient. Consequently, applying robotic manipulation across a wide range of scenarios remains challenging. In this study, we propose a novel framework for skill learning in robotic manipulation called Tactile Active Inference Reinforcement Learning (TactileAIRL), aimed at achieving efficient learning. To enhance the performance of reinforcement learning (RL), we introduce active inference, which integrates model-based techniques and intrinsic curiosity into the RL process. This integration improves the algorithm's training efficiency and adaptability to sparse rewards. Additionally, we design universal tactile static and dynamic features based on vision-based tactile sensors, making our framework scalable to a wide range of manipulation learning tasks involving tactile feedback. Simulation results demonstrate that our method achieves significantly higher training efficiency in object-pushing tasks. It enables agents to excel in both dense- and sparse-reward tasks with only a few interaction episodes, surpassing the SAC baseline. Furthermore, we conduct physical experiments on a gripper screwing task using our method, which showcases the algorithm's rapid learning capability and its potential for practical applications.
UR - http://www.scopus.com/inward/record.url?scp=85216492236&partnerID=8YFLogxK
U2 - 10.1109/IROS58592.2024.10802750
DO - 10.1109/IROS58592.2024.10802750
M3 - Conference contribution
AN - SCOPUS:85216492236
T3 - IEEE International Conference on Intelligent Robots and Systems
SP - 10884
EP - 10889
BT - 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2024
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2024
Y2 - 14 October 2024 through 18 October 2024
ER -