Reinforcement Learning-Based Opportunistic Routing Protocol Using Depth Information for Energy-Efficient Underwater Wireless Sensor Networks

Chao Wang, Xiaohong Shen, Haiyan Wang, Hongwei Zhang, Haodi Mei

Research output: Contribution to journal › Article › peer-review


Abstract

An efficient routing protocol is critical for data transmission in underwater wireless sensor networks (UWSNs). Aiming at the void-region problem in UWSNs, this article proposes a reinforcement learning-based opportunistic routing protocol (DROR). Considering the limited energy of nodes and the underwater environment, DROR is a receiver-based routing protocol that combines reinforcement learning (RL) with opportunistic routing (OR) to ensure both real-time data transmission and energy efficiency. To achieve reliable transmission when void regions are encountered, a void recovery mechanism is designed that enables packets to bypass void nodes and continue to be forwarded. Furthermore, a relative Q-based dynamic scheduling strategy is proposed to ensure that packets are efficiently forwarded along the globally optimal routing path. Simulation results show that the proposed protocol performs well in terms of end-to-end delay, reliability, and energy efficiency in UWSNs.
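
The sketch below is only an illustration of the ideas summarized in the abstract, not the published DROR specification: a Q-learning update over depth-filtered candidate forwarders, plus a holding time derived from relative Q-values (mirroring the relative Q-based dynamic scheduling). The class name, reward shape, and all parameter values are assumptions made for this example.

```python
# Hypothetical sketch of depth-based, Q-learning opportunistic forwarding.
# Reward design, learning rate, discount factor, and delay scaling are
# illustrative assumptions, not values from the paper.


class NodeSketch:
    def __init__(self, node_id, depth, residual_energy=100.0,
                 alpha=0.5, gamma=0.8):
        self.node_id = node_id
        self.depth = depth              # depth below the surface (m)
        self.energy = residual_energy   # remaining energy (arbitrary units)
        self.alpha = alpha              # learning rate (assumed value)
        self.gamma = gamma              # discount factor (assumed value)
        self.q = {}                     # Q-value per candidate neighbor

    def candidate_set(self, neighbors):
        """Depth-based candidate selection: only neighbors closer to the
        surface (smaller depth) are eligible next-hop forwarders."""
        return [n for n in neighbors if n.depth < self.depth]

    def reward(self, neighbor):
        """Assumed reward: favor depth progress toward the surface and
        higher residual energy at the candidate."""
        depth_gain = (self.depth - neighbor.depth) / max(self.depth, 1e-9)
        energy_term = neighbor.energy / 100.0
        return 0.5 * depth_gain + 0.5 * energy_term

    def update_q(self, neighbor):
        """Standard Q-learning update toward the neighbor's best Q-value."""
        best_next = max(neighbor.q.values(), default=0.0)
        old = self.q.get(neighbor.node_id, 0.0)
        self.q[neighbor.node_id] = old + self.alpha * (
            self.reward(neighbor) + self.gamma * best_next - old)

    def holding_time(self, neighbor, max_delay=1.0):
        """Relative-Q scheduling sketch: a candidate whose Q-value is close
        to the best candidate's waits less before forwarding, so the better
        relay transmits first and the others suppress their copies."""
        if not self.q:
            return max_delay
        q_best = max(self.q.values())
        q_n = self.q.get(neighbor.node_id, 0.0)
        rel = (q_best - q_n) / (abs(q_best) + 1e-9)
        return max_delay * max(0.0, min(1.0, rel))


if __name__ == "__main__":
    shallow = NodeSketch("n1", depth=20.0)
    deeper = NodeSketch("n2", depth=80.0)
    source = NodeSketch("src", depth=100.0)
    for nbr in source.candidate_set([shallow, deeper]):
        source.update_q(nbr)
    for nbr in (shallow, deeper):
        print(nbr.node_id, source.q.get(nbr.node_id),
              source.holding_time(nbr))
```

Running the example, the shallower, better-rewarded neighbor accumulates a higher Q-value and receives a shorter holding time, which is the intuition behind letting the globally better relay forward first.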

Original language: English
Pages (from-to): 17771-17783
Number of pages: 13
Journal: IEEE Sensors Journal
Volume: 23
Issue number: 15
DOIs
State: Published - 1 Aug 2023

Keywords

  • Q-learning
  • opportunistic routing (OR)
  • routing protocol
  • underwater wireless sensor networks (UWSNs)
  • void region
