Energy-Efficient Multi-AAV Collaborative Reliable Storage: A Deep Reinforcement Learning Approach

Zhaoxiang Huang, Zhiwen Yu, Zhijie Huang, Huan Zhou, Erhe Yang, Ziyue Yu, Jiangyan Xu, Bin Guo

Research output: Contribution to journal › Article › peer-review

Abstract

Autonomous aerial vehicle (AAV) crowdsensing, as a complement to mobile crowdsensing, can provide ubiquitous sensing in extreme environments and has gathered significant attention in recent years. In this article, we investigate the issue of sensing data storage in AAV crowdsensing without edge assistance, where sensing data is stored locally on the AAVs. In this scenario, a replication scheme is usually adopted to ensure data availability, and our objective is to find an optimal replica distribution scheme that maximizes data availability while minimizing system energy consumption. Given the NP-hard nature of the optimization problem, traditional methods cannot achieve optimal solutions within limited timeframes. Therefore, we propose a centralized-training, decentralized-execution deep reinforcement learning (DRL) algorithm based on the actor–critic framework, named "MUCRS-DRL." Specifically, this method derives the optimal replica placement scheme from AAV state information and data file information. Simulation results show that, compared to the baseline methods, the proposed algorithm reduces data loss rate, time consumption, and energy consumption by up to 88%, 11%, and 11%, respectively.

Original language: English
Pages (from-to): 20913-20926
Number of pages: 14
Journal: IEEE Internet of Things Journal
Volume: 12
Issue number: 12
DOIs
State: Published - 2025

Keywords

  • Deep reinforcement learning (DRL)
  • autonomous aerial vehicle (AAV) crowdsensing
  • energy-efficient
  • multi-autonomous aerial vehicle
  • reliable storage
  • time-varying

