Unified Human-Robot-Environment Interaction Control in Contact-Rich Collaborative Manipulation Tasks via Model-Based Reinforcement Learning

Abstract
Many safety-critical or performance-demanding systems are human-in-the-loop, i.e., the robot interacts with both humans and the environment, making human-in-the-loop control a key research topic. In this article, a unified optimal interaction control scheme in joint space is presented for multipoint human-robot-environment interaction (HREI) problems, which are common in human-robot collaborative manipulation tasks. Specifically, a model-based reinforcement learning method is leveraged to obtain the optimal interaction control. For multipoint HREI, the interaction forces exerted on each link are isolated and estimated via the backward generalized momentum observer method. In HREI problems, the dynamics parameters of the environment and of the human arm are usually unknown, stochastic, and time-varying. To obviate the dependence on these parameters, the Gaussian mixture modeling / Gaussian mixture regression (GMM/GMR) method is employed to learn the unknown external dynamics. Notably, the interaction forces are treated as system states, whose time derivatives are computed from the GMM/GMR learning results via the chain rule. Then, the iterative linear quadratic Gaussian with learned external dynamics (ILQG-LED) method is utilized to realize optimal multipoint HREI control. The validity of the proposed method is verified through experimental studies.
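To make the momentum-observer idea in the abstract concrete, the sketch below simulates a generic first-order generalized momentum observer on a single-DOF rigid body. It is a minimal illustration of the textbook technique, not the paper's backward multipoint variant; the inertia `I`, observer gain `K`, and constant external torque `tau_ext` are illustrative assumptions.

```python
def simulate_momentum_observer(I=2.0, K=50.0, dt=1e-3, T=1.0, tau_ext=0.8):
    """Estimate a constant external torque acting on a 1-DOF rigid body.

    True dynamics: I * qddot = tau_cmd + tau_ext (all values assumed).
    Observer residual: r = K * (p - integral of (tau_cmd + r)),
    where p = I * qdot is the generalized momentum.
    """
    qdot = 0.0
    r = 0.0          # observer output: estimated external torque
    integral = 0.0   # running integral of (tau_cmd + r)
    history = []
    for _ in range(int(T / dt)):
        tau_cmd = 0.0                       # no commanded torque here
        qdot += (tau_cmd + tau_ext) / I * dt  # integrate true dynamics
        p = I * qdot                        # generalized momentum
        integral += (tau_cmd + r) * dt
        r = K * (p - integral)              # residual tracks tau_ext
        history.append(r)
    return history

est = simulate_momentum_observer()
# r converges toward tau_ext = 0.8 with time constant ~1/K
```

The residual dynamics reduce to r' = K (tau_ext - r), so the estimate converges exponentially to the true external torque without differentiating joint velocities, which is the practical appeal of momentum-based force estimation.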
| Original language | English |
|---|---|
| Pages (from-to) | 11474-11482 |
| Number of pages | 9 |
| Journal | IEEE Transactions on Industrial Electronics |
| Volume | 70 |
| Issue number | 11 |
| DOIs | |
| State | Published - 1 Nov 2023 |
Keywords
- Backward generalized momentum observer
- ILQG-LED method
- model-based reinforcement learning
- multipoint human-robot-environment interaction (HREI)
- optimal human-in-the-loop control
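The GMM/GMR regression step named in the keywords can be sketched in one dimension: given a joint Gaussian mixture over an input x and output y, GMR returns the conditional mean E[y | x] as a responsibility-weighted blend of per-component linear regressors. The mixture parameters below are hand-picked for illustration (they are not learned, and nothing here comes from the paper's implementation).

```python
import math

def gaussian_pdf(x, mu, var):
    return math.exp(-(x - mu) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def gmr_predict(x, components):
    """Conditional mean E[y | x] under a joint GMM over (x, y).

    components: list of dicts with keys
      w   -- mixture weight
      mu  -- (mu_x, mu_y)
      cov -- ((sxx, sxy), (sxy, syy))
    """
    weights, means = [], []
    for c in components:
        sxx, sxy = c["cov"][0]
        mu_x, mu_y = c["mu"]
        # responsibility of this component for the query input x
        weights.append(c["w"] * gaussian_pdf(x, mu_x, sxx))
        # conditional mean of y given x within this component
        means.append(mu_y + sxy / sxx * (x - mu_x))
    z = sum(weights)
    return sum(w / z * m for w, m in zip(weights, means))

# two components, both encoding the same linear map y = 2x locally
comps = [
    {"w": 0.5, "mu": (0.0, 0.0), "cov": ((1.0, 2.0), (2.0, 5.0))},
    {"w": 0.5, "mu": (4.0, 8.0), "cov": ((1.0, 2.0), (2.0, 5.0))},
]
print(gmr_predict(2.0, comps))  # prints 4.0 (the mixture encodes y = 2x)
```

Because GMR yields a smooth closed-form regressor, its output can be differentiated via the chain rule, which is how the abstract's force-derivative states would be obtained in principle.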