A Selective Federated Reinforcement Learning Strategy for Autonomous Driving

Yuchuan Fu, Changle Li, F. Richard Yu, Tom H. Luan, Yao Zhang

Research output: Contribution to journal › Article › peer-review

57 Citations (Scopus)

Abstract

Complex traffic environments challenge the fast and accurate response of connected autonomous vehicles (CAVs). More importantly, it is difficult for different CAVs to collaborate and share knowledge. To remedy this, this paper proposes a selective federated reinforcement learning (SFRL) strategy that achieves online knowledge aggregation to improve the accuracy and environmental adaptability of the autonomous driving model. First, we propose a federated reinforcement learning framework that allows participants to use the knowledge of other CAVs when choosing actions, thereby realizing online knowledge transfer and aggregation. Second, we use reinforcement learning to train the local driving models of CAVs on collision avoidance tasks. Third, considering the efficiency of federated learning (FL) and the additional communication overhead it brings, we propose a CAV selection strategy applied before local models are uploaded. The selection accounts for each CAV's reputation, the quality of its local model, and its time overhead, so as to admit as many high-quality participants as possible under resource and time constraints. With these strategies, our framework can aggregate and reuse the knowledge learned by CAVs traveling in different environments to assist driving decisions. Extensive simulation results validate that our proposal improves model accuracy and learning efficiency while reducing communication overhead.
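
As a rough illustration of the selection-then-aggregation idea described in the abstract, the Python sketch below scores candidate CAVs by reputation and local-model quality, greedily admits those that fit a time budget, and then averages the selected local models weighted by quality. All names, scoring weights, and data structures here (CandidateCAV, select_cavs, aggregate, alpha, beta, time_budget) are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch of selective federated aggregation; the scoring rule,
# weights, and FedAvg-style averaging are assumptions for illustration only.
from dataclasses import dataclass
from typing import Dict, List

import numpy as np


@dataclass
class CandidateCAV:
    cav_id: str
    weights: Dict[str, np.ndarray]   # local model parameters after RL training
    reputation: float                # historical trustworthiness in [0, 1]
    model_quality: float             # e.g. validation return of the local policy
    upload_time: float               # estimated communication/compute overhead (s)


def select_cavs(candidates: List[CandidateCAV],
                time_budget: float,
                alpha: float = 0.5,
                beta: float = 0.5) -> List[CandidateCAV]:
    """Greedily pick high-reputation, high-quality CAVs within a time budget."""
    ranked = sorted(candidates,
                    key=lambda c: alpha * c.reputation + beta * c.model_quality,
                    reverse=True)
    chosen, used = [], 0.0
    for cav in ranked:
        if used + cav.upload_time <= time_budget:
            chosen.append(cav)
            used += cav.upload_time
    return chosen


def aggregate(selected: List[CandidateCAV]) -> Dict[str, np.ndarray]:
    """FedAvg-style aggregation of the selected models, weighted by quality."""
    total = sum(c.model_quality for c in selected)
    keys = selected[0].weights.keys()
    return {k: sum((c.model_quality / total) * c.weights[k] for c in selected)
            for k in keys}
```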

Original language: English
Pages (from-to): 1655-1668
Number of pages: 14
Journal: IEEE Transactions on Intelligent Transportation Systems
Volume: 24
Issue number: 2
DOI
Publication status: Published - 1 Feb 2023
