TY - JOUR
T1 - Ensemble successor representations for task generalization in offline-to-online reinforcement learning
AU - Wang, Changhong
AU - Yu, Xudong
AU - Bai, Chenjia
AU - Zhang, Qiaosheng
AU - Wang, Zhen
N1 - Publisher Copyright:
© Science China Press 2024.
PY - 2024/7
Y1 - 2024/7
N2 - In reinforcement learning (RL), training a policy from scratch with online experiences can be inefficient because of the difficulties of exploration. Recently, offline RL has provided a promising solution by supplying an initialized policy that can be refined through online interactions. However, existing approaches primarily perform offline and online learning on the same task, without considering the task generalization problem in offline-to-online adaptation. In real-world applications, it is common to have only an offline dataset from a specific task while aiming for fast online adaptation to several tasks. To address this problem, our work builds upon the investigation of successor representations for task generalization in online RL and extends the framework to offline-to-online learning. We demonstrate that the conventional paradigm using successor features cannot effectively utilize offline data or improve performance on the new task through online fine-tuning. To mitigate this, we introduce a novel methodology that leverages offline data to acquire an ensemble of successor representations and subsequently constructs ensemble Q functions. This approach enables robust representation learning from datasets with different coverage and facilitates fast adaptation of the Q functions toward new tasks during the online fine-tuning phase. Extensive empirical evaluations demonstrate the superior performance of our method in generalizing to diverse or even unseen tasks.
AB - In reinforcement learning (RL), training a policy from scratch with online experiences can be inefficient because of the difficulties of exploration. Recently, offline RL has provided a promising solution by supplying an initialized policy that can be refined through online interactions. However, existing approaches primarily perform offline and online learning on the same task, without considering the task generalization problem in offline-to-online adaptation. In real-world applications, it is common to have only an offline dataset from a specific task while aiming for fast online adaptation to several tasks. To address this problem, our work builds upon the investigation of successor representations for task generalization in online RL and extends the framework to offline-to-online learning. We demonstrate that the conventional paradigm using successor features cannot effectively utilize offline data or improve performance on the new task through online fine-tuning. To mitigate this, we introduce a novel methodology that leverages offline data to acquire an ensemble of successor representations and subsequently constructs ensemble Q functions. This approach enables robust representation learning from datasets with different coverage and facilitates fast adaptation of the Q functions toward new tasks during the online fine-tuning phase. Extensive empirical evaluations demonstrate the superior performance of our method in generalizing to diverse or even unseen tasks.
KW - ensembles
KW - offline reinforcement learning
KW - online fine-tuning
KW - successor representations
KW - task generalization
UR - http://www.scopus.com/inward/record.url?scp=85197948890&partnerID=8YFLogxK
U2 - 10.1007/s11432-023-4028-1
DO - 10.1007/s11432-023-4028-1
M3 - Article
AN - SCOPUS:85197948890
SN - 1674-733X
VL - 67
JO - Science China Information Sciences
JF - Science China Information Sciences
IS - 7
M1 - 172203
ER -