Policy Learning for Nonlinear Model Predictive Control With Application to USVs

Rizhong Wang, Huiping Li, Bin Liang, Yang Shi, Demin Xu

Research output: Contribution to journal › Article › peer-review

22 Scopus citations

Abstract

The unaffordable computational load of nonlinear model predictive control (NMPC) has prevented its use in robots with high sampling rates for decades. This article is concerned with the policy learning problem for nonlinear MPC with system constraints, where the nonlinear MPC policy is learned offline and deployed online to resolve the computational complexity issue. A deep neural network (DNN)-based policy learning MPC (PL-MPC) method is proposed to avoid solving nonlinear optimal control problems online. The detailed policy learning method is developed and the PL-MPC algorithm is designed. A strategy to ensure the practical feasibility of policy implementation is proposed, and it is theoretically proved that the closed-loop system under the proposed method is asymptotically stable in probability. In addition, we apply the PL-MPC algorithm successfully to the motion control of unmanned surface vehicles (USVs). It is shown that the proposed algorithm can be implemented at a sampling rate of up to 5 Hz with high-precision motion control.
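The offline-learning/online-deployment pattern described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's method: a known saturated linear feedback on a double integrator stands in for the expert NMPC solver, a least-squares regressor stands in for the DNN, and clipping the output onto the input-constraint set stands in for the feasibility-preserving implementation strategy. The USV dynamics, network architecture, and stability machinery of the paper are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative discrete-time double-integrator dynamics x+ = A x + B u
# (a stand-in for the USV model; dt = 0.1 s).
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.005],
              [0.1]])

K = np.array([[0.8, 1.2]])  # stand-in "expert MPC" feedback gain (hypothetical)
u_max = 1.0                 # input constraint |u| <= u_max

# 1) Offline: query the expert controller to collect (state, action) pairs.
X = rng.uniform(-1.0, 1.0, size=(500, 2))
U = np.clip(-(X @ K.T), -u_max, u_max)  # expert actions, already feasible

# 2) Offline: fit a policy to the expert data by least squares
#    (a DNN trained by gradient descent would replace this regressor).
W, *_ = np.linalg.lstsq(X, U, rcond=None)

def policy(x):
    # 3) Online: a cheap function evaluation replaces the NMPC solve;
    #    the raw output is projected onto the constraint set so the
    #    applied input is always feasible.
    u = (x @ W)[0]
    return float(np.clip(u, -u_max, u_max))

# Closed-loop rollout from an initial state.
x = np.array([1.5, -0.5])
for _ in range(200):
    u = policy(x)
    x = A @ x + B[:, 0] * u
```

The projection step is what makes the learned policy safe to deploy despite approximation error: the network may output an infeasible input, but the applied input never violates the constraint.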

Original language: English
Pages (from-to): 4089-4097
Number of pages: 9
Journal: IEEE Transactions on Industrial Electronics
Volume: 71
Issue number: 4
DOIs
State: Published - 1 Apr 2024

Keywords

  • Constraints
  • deep neural networks (DNN)
  • model predictive control (MPC)
  • policy learning
