TY - JOUR
T1 - Downlink Transmit Power Control in Ultra-Dense UAV Network Based on Mean Field Game and Deep Reinforcement Learning
AU - Li, Lixin
AU - Cheng, Qianqian
AU - Xue, Kaiyuan
AU - Yang, Chungang
AU - Han, Zhu
N1 - Publisher Copyright:
© 1967-2012 IEEE.
PY - 2020/12
Y1 - 2020/12
N2 - As an emerging technology in 5G, an ultra-dense unmanned aerial vehicle (UAV) network can significantly improve system capacity and network coverage. However, it remains a challenge to reduce interference and improve the energy efficiency (EE) of UAVs. In this paper, we investigate a downlink power control problem to maximize the EE in an ultra-dense UAV network. First, the power control problem is formulated as a discrete mean field game (MFG) to model the interactions among a large number of UAVs; then, owing to the dense deployment of UAVs, the MFG framework is transformed into a Markov decision process (MDP) to obtain the equilibrium solution of the MFG. Specifically, a deep reinforcement learning-based MFG (DRL-MFG) algorithm is proposed to suppress interference and maximize the EE by using deep neural networks (DNNs) to explore the optimal power strategy for the UAVs. The numerical results show that the UAVs can effectively interact with the environment to obtain the optimal power control strategy. Compared with the benchmark algorithms, the DRL-MFG algorithm converges faster to the solution of the MFG and improves the EE of the UAVs. Moreover, the impact of the transmit power on EE at different UAV heights is also analyzed.
AB - As an emerging technology in 5G, an ultra-dense unmanned aerial vehicle (UAV) network can significantly improve system capacity and network coverage. However, it remains a challenge to reduce interference and improve the energy efficiency (EE) of UAVs. In this paper, we investigate a downlink power control problem to maximize the EE in an ultra-dense UAV network. First, the power control problem is formulated as a discrete mean field game (MFG) to model the interactions among a large number of UAVs; then, owing to the dense deployment of UAVs, the MFG framework is transformed into a Markov decision process (MDP) to obtain the equilibrium solution of the MFG. Specifically, a deep reinforcement learning-based MFG (DRL-MFG) algorithm is proposed to suppress interference and maximize the EE by using deep neural networks (DNNs) to explore the optimal power strategy for the UAVs. The numerical results show that the UAVs can effectively interact with the environment to obtain the optimal power control strategy. Compared with the benchmark algorithms, the DRL-MFG algorithm converges faster to the solution of the MFG and improves the EE of the UAVs. Moreover, the impact of the transmit power on EE at different UAV heights is also analyzed.
KW - deep reinforcement learning (DRL)
KW - energy efficiency (EE)
KW - mean field game (MFG)
KW - power control
KW - unmanned aerial vehicle (UAV)
UR - http://www.scopus.com/inward/record.url?scp=85097949081&partnerID=8YFLogxK
U2 - 10.1109/TVT.2020.3043851
DO - 10.1109/TVT.2020.3043851
M3 - Article
AN - SCOPUS:85097949081
SN - 0018-9545
VL - 69
SP - 15594
EP - 15605
JO - IEEE Transactions on Vehicular Technology
JF - IEEE Transactions on Vehicular Technology
IS - 12
M1 - 9290094
ER -