Stable and Efficient Policy Evaluation

Daoming Lyu, Bo Liu, Matthieu Geist, Wen Dong, Saad Biaz, Qi Wang

Research output: Contribution to journal › Article › peer-review


Abstract

Policy evaluation algorithms are essential to reinforcement learning because of their ability to predict the performance of a policy. However, two long-standing issues in this prediction problem need to be tackled: off-policy stability and on-policy efficiency. The conventional temporal difference (TD) algorithm is known to perform very well in the on-policy setting, yet it is not stable off-policy. On the other hand, the gradient TD and emphatic TD algorithms are off-policy stable, but they are not on-policy efficient. This paper introduces novel algorithms that are both off-policy stable and on-policy efficient by using the oblique projection method. Empirical results on various domains validate the effectiveness of the proposed approach.
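For readers unfamiliar with the setting the abstract refers to, the sketch below shows plain on-policy TD(0) policy evaluation with linear function approximation, the baseline the abstract describes as efficient on-policy but unstable off-policy. It is not the paper's oblique-projection method; the small Markov reward process, feature matrix, and step size are hypothetical placeholders for illustration only.

```python
import numpy as np

# Hypothetical 5-state Markov reward process induced by a fixed policy;
# P, r, and the feature matrix Phi are illustrative, not from the paper.
rng = np.random.default_rng(0)
n_states, n_features, gamma = 5, 3, 0.9
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)          # row-stochastic transition matrix
r = rng.random(n_states)                    # expected reward per state
Phi = rng.random((n_states, n_features))    # linear features for each state

theta = np.zeros(n_features)                # value-function weights
alpha = 0.05                                # step size
s = 0
for _ in range(20000):
    s_next = rng.choice(n_states, p=P[s])
    # TD(0) update: adjust weights along the current state's features,
    # scaled by the temporal-difference error.
    td_error = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
    theta += alpha * td_error * Phi[s]
    s = s_next

print("Approximate state values:", Phi @ theta)
```

When the samples come from a behavior policy different from the target policy (so each update is reweighted by importance ratios), this same update can diverge; gradient TD and emphatic TD fix that stability issue at the cost of on-policy efficiency, which is the trade-off the paper addresses.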

Original language: English
Article number: 8515047
Pages (from-to): 1831-1840
Number of pages: 10
Journal: IEEE Transactions on Neural Networks and Learning Systems
Volume: 30
Issue number: 6
DOIs
State: Published - Jun 2019

Keywords

  • Off-policy
  • policy evaluation
  • reinforcement learning (RL)
  • temporal difference (TD) learning
