TY - JOUR
T1 - Energy-Efficient Multi-UAV Collaborative Reliable Storage
T2 - A Deep Reinforcement Learning Approach
AU - Huang, Zhaoxiang
AU - Yu, Zhiwen
AU - Huang, Zhijie
AU - Zhou, Huan
AU - Yang, Erhe
AU - Yu, Ziyue
AU - Xu, Jiangyan
AU - Guo, Bin
N1 - Publisher Copyright:
© 2014 IEEE.
PY - 2025
Y1 - 2025
N2 - Unmanned Aerial Vehicle (UAV) crowdsensing, as a complement to Mobile Crowdsensing (MCS), can provide ubiquitous sensing in extreme environments and has garnered significant attention in recent years. In this paper, we investigate the issue of sensing data storage in UAV crowdsensing without edge assistance, where sensing data is stored locally in the UAVs. In this scenario, a replication scheme is usually adopted to ensure data availability, and our objective is to find an optimal replica distribution scheme that maximizes data availability while minimizing system energy consumption. Given the NP-hard nature of the optimization problem, traditional methods cannot achieve optimal solutions within limited timeframes. Therefore, we propose a centralized-training, decentralized-execution Deep Reinforcement Learning (DRL) algorithm based on Actor-Critic (AC), named "MUCRS-DRL". Specifically, this method derives the optimal replica placement scheme based on UAV state information and data file information. Simulation results show that compared to the baseline methods, the proposed algorithm reduces data loss rate, time consumption, and energy consumption by up to 88%, 11%, and 11%, respectively.
AB - Unmanned Aerial Vehicle (UAV) crowdsensing, as a complement to Mobile Crowdsensing (MCS), can provide ubiquitous sensing in extreme environments and has garnered significant attention in recent years. In this paper, we investigate the issue of sensing data storage in UAV crowdsensing without edge assistance, where sensing data is stored locally in the UAVs. In this scenario, a replication scheme is usually adopted to ensure data availability, and our objective is to find an optimal replica distribution scheme that maximizes data availability while minimizing system energy consumption. Given the NP-hard nature of the optimization problem, traditional methods cannot achieve optimal solutions within limited timeframes. Therefore, we propose a centralized-training, decentralized-execution Deep Reinforcement Learning (DRL) algorithm based on Actor-Critic (AC), named "MUCRS-DRL". Specifically, this method derives the optimal replica placement scheme based on UAV state information and data file information. Simulation results show that compared to the baseline methods, the proposed algorithm reduces data loss rate, time consumption, and energy consumption by up to 88%, 11%, and 11%, respectively.
KW - deep reinforcement learning
KW - energy-efficient
KW - multi-UAV
KW - reliable storage
KW - time-varying
KW - UAV crowdsensing
UR - http://www.scopus.com/inward/record.url?scp=85218901933&partnerID=8YFLogxK
U2 - 10.1109/JIOT.2025.3545418
DO - 10.1109/JIOT.2025.3545418
M3 - Article
AN - SCOPUS:85218901933
SN - 2327-4662
JO - IEEE Internet of Things Journal
JF - IEEE Internet of Things Journal
ER -