Resource Reservation in C-V2X Networks for Dynamic Traffic Environments: From Vehicle Density-Driven to Deep Reinforcement Learning

Xingkai Zhou, Fei Hui, Jiajia Liu, Wenbo Wang, Junfei Zhang

Research output: Contribution to journal › Article › peer-review

Abstract

Cellular Vehicle-to-Everything (C-V2X) Mode 4, specified by 3GPP, enables distributed resource reservation in vehicular networks under out-of-coverage conditions. However, highly dynamic traffic and complex road environments can lead to increased channel contention, packet collisions, and transmission delays, significantly degrading system performance. To address these challenges, this paper develops a theoretical model to analyze the impact of vehicle density on key performance parameters, and proposes a Vehicle Density-driven Adaptive Resource Reservation (VD-ARR) method to dynamically adjust reservation parameters and mitigate persistent collision issues. A two-dimensional discrete-time Markov chain (DTMC) model is constructed to derive steady-state probability expressions, enabling a quantitative evaluation of VD-ARR. Building on this foundation, a VD-ARR Guided Double Deep Q-Network (VG-DDQN) framework is developed, integrating the analytical insights into a reinforcement learning architecture to enhance resource allocation adaptability. Simulation results show that the proposed VD-ARR achieves lower latency and reduced collision probability under varying vehicle densities, while VG-DDQN outperforms VD-ARR in highly dynamic environments, offering superior adaptability and robustness.
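The abstract mentions deriving steady-state probability expressions from a two-dimensional discrete-time Markov chain (DTMC). As a minimal sketch of that general technique (not the paper's actual model), the stationary distribution of a small DTMC can be computed by solving pi P = pi subject to the probabilities summing to one; the 3-state transition matrix below is an illustrative placeholder.

```python
import numpy as np

# Illustrative 3-state transition matrix (rows sum to 1).
# This is a placeholder, not the paper's two-dimensional DTMC.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.1, 0.3, 0.6],
])

# Steady state satisfies pi P = pi, i.e. (P^T - I) pi = 0,
# combined with the normalization constraint sum(pi) = 1.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)

# Least-squares solve of the overdetermined system gives pi.
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)       # stationary distribution
print(pi @ P)   # equals pi, confirming stationarity
```

In the paper's setting the state space is two-dimensional and the expressions are derived analytically, but the same stationarity and normalization conditions underpin the derivation.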

Original language: English
Journal: IEEE Transactions on Vehicular Technology
State: Accepted/In press - 2025

Keywords

  • Adaptive
  • C-V2X
  • Deep reinforcement learning
  • Discrete-time Markov chain
  • Distributed resource management
