TY - JOUR
T1 - Energy-Efficient Multi-AAV Collaborative Reliable Storage
T2 - A Deep Reinforcement Learning Approach
AU - Huang, Zhaoxiang
AU - Yu, Zhiwen
AU - Huang, Zhijie
AU - Zhou, Huan
AU - Yang, Erhe
AU - Yu, Ziyue
AU - Xu, Jiangyan
AU - Guo, Bin
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2025
Y1 - 2025
N2 - Autonomous aerial vehicle (AAV) crowdsensing, as a complement to mobile crowdsensing, can provide ubiquitous sensing in extreme environments and has gathered significant attention in recent years. In this article, we investigate the issue of sensing data storage in AAV crowdsensing without edge assistance, where sensing data is stored locally in the AAVs. In this scenario, a replication scheme is usually adopted to ensure data availability, and our objective is to find an optimal replica distribution scheme that maximizes data availability while minimizing system energy consumption. Given the NP-hard nature of the optimization problem, traditional methods cannot achieve optimal solutions within limited timeframes. Therefore, we propose a centralized-training, decentralized-execution deep reinforcement learning (DRL) algorithm based on actor–critic, named “MUCRS-DRL.” Specifically, this method derives the optimal replica placement scheme based on AAV state information and data file information. Simulation results show that, compared to the baseline methods, the proposed algorithm reduces data loss rate, time consumption, and energy consumption by up to 88%, 11%, and 11%, respectively.
AB - Autonomous aerial vehicle (AAV) crowdsensing, as a complement to mobile crowdsensing, can provide ubiquitous sensing in extreme environments and has gathered significant attention in recent years. In this article, we investigate the issue of sensing data storage in AAV crowdsensing without edge assistance, where sensing data is stored locally in the AAVs. In this scenario, a replication scheme is usually adopted to ensure data availability, and our objective is to find an optimal replica distribution scheme that maximizes data availability while minimizing system energy consumption. Given the NP-hard nature of the optimization problem, traditional methods cannot achieve optimal solutions within limited timeframes. Therefore, we propose a centralized-training, decentralized-execution deep reinforcement learning (DRL) algorithm based on actor–critic, named “MUCRS-DRL.” Specifically, this method derives the optimal replica placement scheme based on AAV state information and data file information. Simulation results show that, compared to the baseline methods, the proposed algorithm reduces data loss rate, time consumption, and energy consumption by up to 88%, 11%, and 11%, respectively.
KW - Deep reinforcement learning (DRL)
KW - autonomous aerial vehicle (AAV) crowdsensing
KW - energy-efficient
KW - multi-autonomous aerial vehicle
KW - reliable storage
KW - time-varying
UR - http://www.scopus.com/inward/record.url?scp=85218901933&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2025.3545418
DO - 10.1109/JIOT.2025.3545418
M3 - Article
AN - SCOPUS:85218901933
SN - 2327-4662
VL - 12
SP - 20913
EP - 20926
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
IS - 12
ER -