Abstract
The prohibitive computational load of nonlinear model predictive control (NMPC) has prevented its use in robots with high sampling rates for decades. This article addresses the policy learning problem for nonlinear MPC with system constraints, where the NMPC policy is learned offline and deployed online to resolve the computational complexity issue. A deep neural network (DNN)-based policy learning MPC (PL-MPC) method is proposed to avoid solving nonlinear optimal control problems online. The detailed policy learning method is developed and the PL-MPC algorithm is designed. A strategy to ensure the practical feasibility of the learned policy is proposed, and it is theoretically proved that the closed-loop system under the proposed method is asymptotically stable in probability. In addition, the PL-MPC algorithm is successfully applied to the motion control of unmanned surface vehicles (USVs). It is shown that the proposed algorithm can be implemented at a sampling rate of up to 5 Hz with high-precision motion control.
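The core idea of offline policy learning for MPC can be illustrated with a minimal sketch: sample states, query an expert controller at each state, and fit a small neural network to the resulting state-to-control map so that only a cheap forward pass is needed online. The code below is an illustrative assumption, not the paper's method: a saturated linear feedback law stands in for the NMPC solver, and the network is a one-hidden-layer MLP trained by full-batch gradient descent in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an NMPC solver: a saturated linear
# feedback law mapping a scalar state to a constrained control.
def expert_policy(x):
    return np.clip(-0.5 * x, -1.0, 1.0)

# Offline dataset: sampled states and the expert's controls there.
X = rng.uniform(-4.0, 4.0, size=(512, 1))
U = expert_policy(X)

# One-hidden-layer MLP: u_hat = tanh(x W1 + b1) W2 + b2
h = 32
W1 = rng.normal(0.0, 0.5, (1, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.5, (h, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    Z = np.tanh(X @ W1 + b1)            # hidden activations, (512, h)
    err = (Z @ W2 + b2) - U             # prediction error, (512, 1)
    # Backpropagation of the mean-squared imitation loss.
    gW2 = Z.T @ err / len(X); gb2 = err.mean(0)
    dZ = (err @ W2.T) * (1.0 - Z**2)
    gW1 = X.T @ dZ / len(X); gb1 = dZ.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

# Online deployment: a single forward pass replaces the solver call.
def learned_policy(x):
    return np.tanh(np.atleast_2d(x) @ W1 + b1) @ W2 + b2

mse = float(np.mean((learned_policy(X) - U) ** 2))
print(f"final training MSE: {mse:.5f}")
```

Online, evaluating `learned_policy` costs two small matrix multiplications, which is what makes high sampling rates feasible; the paper's additional contributions (feasibility guarantees and stability in probability) are not captured by this sketch.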
Original language | English |
---|---|
Pages (from-to) | 4089-4097 |
Number of pages | 9 |
Journal | IEEE Transactions on Industrial Electronics |
Volume | 71 |
Issue number | 4 |
DOIs | |
State | Published - 1 Apr 2024 |
Keywords
- Constraints
- deep neural networks (DNN)
- model predictive control (MPC)
- policy learning