Explainable Deep Reinforcement Learning for UAV autonomous path planning

Lei He, Nabil Aouf, Bifeng Song

Research output: Contribution to journal › Article › peer-review

128 Scopus citations

Abstract

Autonomous navigation in unknown environments is still a hard problem for small Unmanned Aerial Vehicles (UAVs). Recently, several neural network-based methods have been proposed to tackle this problem; however, the trained networks are opaque, non-intuitive, and difficult for people to understand, which limits their real-world application. In this paper, a novel explainable deep neural network-based path planner is proposed for a quadrotor to fly autonomously in unknown environments. The navigation problem is modelled as a Markov Decision Process (MDP), and the path planner is trained using a Deep Reinforcement Learning (DRL) method in a simulation environment. To gain a better understanding of the trained model, a novel model explanation method based on feature attribution is proposed. Easy-to-interpret textual and visual explanations are generated to allow end-users to understand what triggered a particular behaviour. Moreover, global analyses are provided for experts to evaluate and improve the trained network. Finally, real-world flight tests are conducted to show that the path planner trained in simulation is robust enough to be applied directly in real environments.
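The abstract describes explaining a trained DRL policy through feature attribution, i.e., measuring how much each input feature influenced a chosen action. As an illustration only (not the authors' method), the minimal sketch below computes a simple input-gradient attribution for a hypothetical policy network; the names `PolicyNet` and `input_gradient_attribution`, and the observation/state dimensions, are assumptions for the example.

```python
import torch
import torch.nn as nn

# Hypothetical policy network: maps an obstacle observation (e.g., flattened
# depth measurements) plus a UAV state vector to a continuous action.
class PolicyNet(nn.Module):
    def __init__(self, obs_dim: int = 100, state_dim: int = 4, act_dim: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh(),
        )

    def forward(self, obs: torch.Tensor, state: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, state], dim=-1))


def input_gradient_attribution(policy: PolicyNet,
                               obs: torch.Tensor,
                               state: torch.Tensor,
                               action_index: int = 0) -> torch.Tensor:
    """Attribute one action output to each observation feature via input gradients."""
    obs = obs.clone().requires_grad_(True)
    action = policy(obs, state)[..., action_index].sum()
    action.backward()
    # Larger absolute gradient => that feature had more influence on this action.
    return obs.grad.abs()


if __name__ == "__main__":
    policy = PolicyNet()
    obs = torch.rand(1, 100)   # e.g., flattened depth image
    state = torch.rand(1, 4)   # e.g., relative goal position and velocity
    saliency = input_gradient_attribution(policy, obs, state, action_index=0)
    print("Most influential observation feature:", int(saliency.argmax()))
```

Such per-feature scores can then be rendered as the textual or visual explanations the abstract mentions, e.g., by highlighting the depth-image regions with the highest attribution for the commanded action.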

Original language: English
Article number: 107052
Journal: Aerospace Science and Technology
Volume: 118
DOIs
State: Published - Nov 2021

Keywords

  • Autonomous navigation
  • Deep Reinforcement Learning (DRL)
  • Explainable AI
  • Unmanned Aerial Vehicles (UAVs)
