Abstract
Policy evaluation algorithms are essential to reinforcement learning because they predict the performance of a policy. However, two long-standing issues in this prediction problem remain to be addressed: off-policy stability and on-policy efficiency. The conventional temporal difference (TD) algorithm is known to perform very well in the on-policy setting, yet it is not stable off-policy. Conversely, the gradient TD and emphatic TD algorithms are off-policy stable but not on-policy efficient. This paper introduces novel algorithms that are both off-policy stable and on-policy efficient by using the oblique projection method. Empirical results on various domains validate the effectiveness of the proposed approach.
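To make the setting concrete, the following is a minimal sketch of conventional linear TD(0) policy evaluation (the baseline algorithm the abstract refers to, not the paper's oblique-projection method), run on a standard 5-state random-walk task. The toy MDP, function name, and all parameters are illustrative assumptions.

```python
import numpy as np

def td0_random_walk(episodes=2000, alpha=0.1, gamma=1.0, seed=0):
    """On-policy TD(0) value prediction for a uniform random policy
    on a 5-state random walk (terminate off either end; reward 1
    only on exiting the right side). Tabular features, so there is
    one weight per state and w approximates the state values."""
    rng = np.random.default_rng(seed)
    n_states = 5
    w = np.zeros(n_states)
    for _ in range(episodes):
        s = 2  # start in the middle state
        while True:
            s_next = s + rng.choice([-1, 1])        # random policy
            r = 1.0 if s_next == n_states else 0.0  # reward on right exit
            done = s_next < 0 or s_next == n_states
            v_next = 0.0 if done else w[s_next]
            # TD(0) update: w[s] <- w[s] + alpha * (r + gamma*V(s') - V(s))
            w[s] += alpha * (r + gamma * v_next - w[s])
            if done:
                break
            s = s_next
    return w

weights = td0_random_walk()
# The true values for this task are [1/6, 2/6, 3/6, 4/6, 5/6];
# the learned weights should approximate them.
```

In the off-policy case (learning about one policy from data generated by another), this update can diverge with function approximation, which is the instability the gradient TD and emphatic TD families, and the algorithms proposed here, are designed to avoid.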
| Original language | English |
|---|---|
| Article number | 8515047 |
| Pages (from-to) | 1831-1840 |
| Number of pages | 10 |
| Journal | IEEE Transactions on Neural Networks and Learning Systems |
| Volume | 30 |
| Issue number | 6 |
| DOIs | |
| State | Published - Jun 2019 |
Keywords
- Off-policy
- policy evaluation
- reinforcement learning (RL)
- temporal difference (TD) learning