Optimized control for human-multi-robot collaborative manipulation via multi-player Q-learning

Xing Liu, Panfeng Huang, Shuzhi Sam Ge

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, optimized interaction control is investigated for human-multi-robot collaboration problems, which cannot be described by the traditional impedance controller. To achieve globally optimized interaction performance, multi-player non-zero-sum game theory is employed to derive the optimized interaction control for each robot agent. Regarding game strategies, the Nash equilibrium strategy is adopted. In human-multi-robot collaboration problems, the dynamics parameters of the human arm and the manipulated object are usually unknown; to obviate dependence on these parameters, the multi-player Q-learning method is employed. Moreover, the optimized solution is difficult to obtain because of the presence of the desired reference position. A multi-player Nash Q-learning algorithm that accounts for the desired reference position is proposed to address this problem. The validity of the proposed method is verified through simulation studies.
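The abstract names a multi-player Nash Q-learning update as the core algorithm. As a rough illustration only, the sketch below implements a generic tabular Nash Q-learning update (in the spirit of Hu and Wellman's Nash-Q) on an invented one-dimensional task with a discretized reference state; the state space, action set, rewards, and hyperparameters are all assumptions made for illustration and are not taken from the paper, which treats continuous robot and human-arm dynamics.

```python
# Minimal sketch of tabular multi-player Nash Q-learning. The toy environment
# (discrete states around a "reference position", shared tracking-style reward)
# is invented here purely to illustrate the update rule, not the paper's setup.
import itertools
import numpy as np

N_AGENTS  = 2      # two robot agents (hypothetical)
N_ACTIONS = 3      # per-agent actions: 0 = move left, 1 = stay, 2 = move right
N_STATES  = 5      # discretized positions (hypothetical)
GAMMA     = 0.9    # discount factor
ALPHA     = 0.1    # learning rate
REF_STATE = 2      # stand-in for the desired reference position

# One Q-table per agent over (state, joint action): Q_i[s, a_1, a_2]
Q = [np.zeros((N_STATES, N_ACTIONS, N_ACTIONS)) for _ in range(N_AGENTS)]

def pure_nash(q_tables, s):
    """Return (joint action, per-agent values) of a pure-strategy Nash
    equilibrium of the stage game at state s; fall back to each agent's
    own greedy choice if no pure equilibrium exists."""
    for ja in itertools.product(range(N_ACTIONS), repeat=N_AGENTS):
        is_nash = all(
            q_tables[i][(s,) + ja] >= max(
                q_tables[i][(s,) + ja[:i] + (dev,) + ja[i + 1:]]
                for dev in range(N_ACTIONS)
            ) - 1e-12
            for i in range(N_AGENTS)
        )
        if is_nash:
            return ja, [q_tables[i][(s,) + ja] for i in range(N_AGENTS)]
    ja = tuple(
        int(np.unravel_index(q_tables[i][s].argmax(), q_tables[i][s].shape)[i])
        for i in range(N_AGENTS)
    )
    return ja, [q_tables[i][(s,) + ja] for i in range(N_AGENTS)]

def step(s, ja):
    """Toy dynamics: the state drifts toward where the joint action pushes it,
    and both agents share a cost for distance to REF_STATE."""
    move = sum(a - 1 for a in ja)
    s_next = int(np.clip(s + np.sign(move), 0, N_STATES - 1))
    r = -abs(s_next - REF_STATE)
    return s_next, [r, r]

rng = np.random.default_rng(0)
s = 0
for _ in range(5000):
    if rng.random() < 0.2:                                   # exploration
        ja = tuple(int(a) for a in rng.integers(N_ACTIONS, size=N_AGENTS))
    else:                                                    # play the equilibrium
        ja, _ = pure_nash(Q, s)
    s_next, rewards = step(s, ja)
    _, nash_vals = pure_nash(Q, s_next)                      # NashQ_i(s') target
    for i in range(N_AGENTS):
        Q[i][(s,) + ja] += ALPHA * (rewards[i] + GAMMA * nash_vals[i] - Q[i][(s,) + ja])
    s = s_next

print("Equilibrium joint action at the reference state:", pure_nash(Q, REF_STATE)[0])
```

The equilibrium search by joint-action enumeration only scales to small discrete games; the paper's continuous-dynamics, reference-tracking setting requires different solution machinery, which this sketch does not attempt to reproduce.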

Original language: English
Pages (from-to): 5639-5658
Number of pages: 20
Journal: Journal of the Franklin Institute
Volume: 358
Issue number: 11
DOIs
State: Published - Jul 2021
