Energy-Efficient Multi-UAV Collaborative Reliable Storage: A Deep Reinforcement Learning Approach

Zhaoxiang Huang, Zhiwen Yu, Zhijie Huang, Huan Zhou, Erhe Yang, Ziyue Yu, Jiangyan Xu, Bin Guo

Research output: Contribution to journal › Article › peer-review

Abstract

Unmanned Aerial Vehicle (UAV) crowdsensing, as a complement to Mobile Crowdsensing (MCS), can provide ubiquitous sensing in extreme environments and has attracted significant attention in recent years. In this paper, we investigate the problem of sensing data storage in UAV crowdsensing without edge assistance, where sensing data is stored locally on the UAVs. In this scenario, a replication scheme is usually adopted to ensure data availability, and our objective is to find an optimal replica distribution scheme that maximizes data availability while minimizing system energy consumption. Given the NP-hard nature of the optimization problem, traditional methods cannot obtain optimal solutions within limited timeframes. Therefore, we propose a centralized-training, decentralized-execution Deep Reinforcement Learning (DRL) algorithm based on Actor-Critic (AC), named 'MUCRS-DRL'. Specifically, this method derives the optimal replica placement scheme from UAV state information and data file information. Simulation results show that, compared to the baseline methods, the proposed algorithm reduces data loss rate, time consumption, and energy consumption by up to 88%, 11%, and 11%, respectively.
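The abstract describes an actor-critic agent that maps UAV state information and data file information to a replica placement decision. The paper's MUCRS-DRL algorithm is not reproduced here; the sketch below is only a toy, single-step actor-critic in plain NumPy that picks one UAV to host a replica. The state features (residual energy, normalized file size), the reward shaping, and all hyperparameters are illustrative assumptions, not the authors' design.

```python
# Illustrative sketch only: a toy single-step actor-critic for replica placement.
# All features, rewards, and hyperparameters below are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

N_UAVS = 5                # assumed fleet size
FEATURES = N_UAVS + 1     # per-UAV residual energy + normalized file size

def make_state():
    energy = rng.uniform(0.2, 1.0, size=N_UAVS)   # residual battery levels
    file_size = rng.uniform(0.1, 1.0)             # normalized file size
    return np.concatenate([energy, [file_size]])

def reward(state, action):
    # Toy reward: prefer hosting on a UAV with high residual energy
    # (availability proxy), penalize transferring a large file (energy proxy).
    energy, file_size = state[:N_UAVS], state[-1]
    return energy[action] - 0.5 * file_size

# Linear actor (softmax policy over UAVs) and linear critic (state value).
theta = np.zeros((N_UAVS, FEATURES))   # actor weights
w = np.zeros(FEATURES)                 # critic weights
ALPHA_ACTOR, ALPHA_CRITIC = 0.05, 0.1

def policy(state):
    logits = theta @ state
    logits -= logits.max()              # numerical stability
    p = np.exp(logits)
    return p / p.sum()

for episode in range(2000):
    s = make_state()
    p = policy(s)
    a = rng.choice(N_UAVS, p=p)
    r = reward(s, a)

    # Single-step episode, so the TD target is just the reward.
    advantage = r - w @ s

    # Critic update (TD(0)).
    w += ALPHA_CRITIC * advantage * s

    # Actor update (policy gradient): grad log pi(a|s) = (e_a - pi) outer s.
    grad_log = -np.outer(p, s)
    grad_log[a] += s
    theta += ALPHA_ACTOR * advantage * grad_log

# After training, the policy should favour UAVs with high residual energy.
s = make_state()
print("residual energy:", np.round(s[:N_UAVS], 2))
print("placement probs:", np.round(policy(s), 2))
```

In the paper's setting the action space covers replica placements across the whole fleet and the reward couples data availability with system-wide energy consumption, which is what makes the underlying optimization NP-hard and motivates the centralized-training, decentralized-execution design.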

Original language: English
Journal: IEEE Internet of Things Journal
State: Accepted/In press - 2025

Keywords

  • deep reinforcement learning
  • energy-efficient
  • multi-UAV
  • reliable storage
  • time-varying
  • UAV crowdsensing
