Diverse randomized value functions: A provably pessimistic approach for offline reinforcement learning

Xudong Yu, Chenjia Bai, Hongyi Guo, Changhong Wang, Zhen Wang

Research output: Contribution to journal › Article › peer-review

Abstract

Offline Reinforcement Learning (RL) faces challenges such as distributional shift and unreliable value estimation, especially for out-of-distribution (OOD) actions. To address these issues, existing uncertainty-based methods penalize the value function with uncertainty quantification and require numerous ensemble networks, leading to computational overhead and suboptimal outcomes. In this paper, we introduce a novel strategy that employs diverse randomized value functions to estimate the posterior distribution of Q-values. This approach provides robust uncertainty quantification and estimates the lower confidence bounds (LCB) of Q-values. By applying moderate value penalties for OOD actions, our method achieves provable pessimism. We also emphasize diversity among the randomized value functions and improve efficiency by introducing a diversity regularization method, thereby reducing the requisite number of networks. Together, these components yield reliable value estimation and efficient policy learning from offline data. Theoretical analysis shows that our method recovers the provably efficient LCB penalty under linear MDP assumptions. Extensive empirical results demonstrate that our proposed method significantly outperforms baseline methods in both performance and parametric efficiency.
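
As a rough illustration of the idea described in the abstract, the sketch below shows one way an ensemble of randomized Q-networks could produce an LCB estimate (ensemble mean minus a scaled ensemble standard deviation) together with a simple diversity regularizer. The class and function names (QEnsemble, lcb_q, diversity_loss), network sizes, and the negative-variance regularizer are illustrative assumptions, not the authors' implementation.

# Minimal sketch, assuming a PyTorch setup; names and hyperparameters are hypothetical.
import torch
import torch.nn as nn

class QEnsemble(nn.Module):
    """K independent Q-networks over (state, action) inputs."""
    def __init__(self, state_dim, action_dim, k=5, hidden=256):
        super().__init__()
        self.nets = nn.ModuleList([
            nn.Sequential(
                nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            ) for _ in range(k)
        ])

    def forward(self, state, action):
        x = torch.cat([state, action], dim=-1)
        # Stack per-member estimates: shape (k, batch, 1)
        return torch.stack([net(x) for net in self.nets], dim=0)

def lcb_q(q_values, beta=1.0):
    """Lower confidence bound: ensemble mean minus beta times ensemble std."""
    return q_values.mean(dim=0) - beta * q_values.std(dim=0)

def diversity_loss(q_values):
    """Encourage disagreement among ensemble members (negative ensemble variance)."""
    return -q_values.var(dim=0).mean()

# Hypothetical usage:
#   q = QEnsemble(state_dim=17, action_dim=6)(states, actions)   # (k, batch, 1)
#   target = lcb_q(q, beta=1.0)          # pessimistic value for OOD-sensitive updates
#   reg = diversity_loss(q)              # added to the critic loss with a small weight

In this sketch, pessimism comes from subtracting the ensemble standard deviation, while the diversity term is intended to keep a small number of networks from collapsing to near-identical estimates; the paper's actual regularizer and penalty schedule may differ.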

Original language: English
Article number: 121146
Journal: Information Sciences
Volume: 680
DOIs
State: Published - Oct 2024

Keywords

  • Distributional shift
  • Diversification
  • Offline reinforcement learning
  • Pessimism
  • Randomized value functions
