TY - JOUR
T1 - Resource Reservation in C-V2X Networks for Dynamic Traffic Environments
T2 - From Vehicle Density-Driven to Deep Reinforcement Learning
AU - Zhou, Xingkai
AU - Hui, Fei
AU - Liu, Jiajia
AU - Wang, Wenbo
AU - Zhang, Junfei
N1 - Publisher Copyright:
© 1967-2012 IEEE.
PY - 2025
Y1 - 2025
N2 - Cellular Vehicle-to-Everything (C-V2X) Mode 4, specified by 3GPP, enables distributed resource reservation in vehicular networks under out-of-coverage conditions. However, highly dynamic traffic and complex road environments can lead to increased channel contention, packet collisions, and transmission delays, significantly degrading system performance. To address these challenges, this paper develops a theoretical model to analyze the impact of vehicle density on key performance parameters, and proposes a Vehicle Density-driven Adaptive Resource Reservation (VD-ARR) method to dynamically adjust reservation parameters and mitigate persistent collision issues. A two-dimensional discrete-time Markov chain (DTMC) model is constructed to derive steady-state probability expressions, enabling a quantitative evaluation of VD-ARR. Building on this foundation, a VD-ARR Guided Double Deep Q-Network (VG-DDQN) framework is developed, integrating the analytical insights into a reinforcement learning architecture to enhance resource allocation adaptability. Simulation results show that the proposed VD-ARR achieves lower latency and reduced collision probability under varying vehicle densities, while VG-DDQN outperforms VD-ARR in highly dynamic environments, offering superior adaptability and robustness.
KW - Adaptive
KW - C-V2X
KW - Deep reinforcement learning
KW - Discrete-time Markov chain
KW - Distributed resource management
UR - http://www.scopus.com/inward/record.url?scp=105008025310&partnerID=8YFLogxK
U2 - 10.1109/TVT.2025.3578083
DO - 10.1109/TVT.2025.3578083
M3 - Article
AN - SCOPUS:105008025310
SN - 0018-9545
JO - IEEE Transactions on Vehicular Technology
JF - IEEE Transactions on Vehicular Technology
ER -