Matthew E. Taylor, Gregory Kuhlmann, and Peter Stone. Autonomous Transfer for Reinforcement Learning. In Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 283–290, May 2008. 22% acceptance rate.
Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided either a full model of the tasks or an explicit relation mapping one task into the other. An autonomous agent may not have access to such high-level information, but would be able to analyze its experience to find similarities between tasks. In this paper we introduce Modeling Approximate State Transitions by Exploiting Regression (MASTER), a method for automatically learning a mapping from one task to another through an agent's experience. We empirically demonstrate that such learned relationships can significantly improve the speed of a reinforcement learning algorithm in a series of Mountain Car tasks. Additionally, we demonstrate that our method may also assist with the difficult problem of task selection for transfer.
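The abstract's core idea can be sketched in code. The following is my own minimal illustration of the MASTER approach as described above, not the authors' implementation: fit an approximate one-step transition model from source-task experience by regression, then score candidate mappings of target state variables by how well the source model predicts target transitions under each mapping. All function names, the choice of linear least squares as the regressor, and the restriction to permutation mappings are simplifying assumptions for this sketch.

```python
# Hedged sketch of MASTER-style mapping learning (my illustration, not the
# paper's code): learn an approximate transition model from source-task
# experience, then rank candidate state-variable mappings by how well that
# model predicts target-task transitions under each mapping.
import itertools
import numpy as np

def fit_transition_model(states, next_states):
    # Least-squares linear model: next_state ~ states @ W.
    # (An assumption; the paper allows a generic regression learner.)
    W, *_ = np.linalg.lstsq(states, next_states, rcond=None)
    return W

def mapping_error(W_source, tgt_states, tgt_next, perm):
    # Mean squared prediction error when target state variables are
    # reordered into the source task's representation via `perm`.
    mapped = tgt_states[:, perm]
    pred = mapped @ W_source
    return float(np.mean((pred - tgt_next[:, perm]) ** 2))

def best_mapping(src_states, src_next, tgt_states, tgt_next):
    # Exhaustively score every permutation of target variables (feasible
    # only for small state spaces) and return the lowest-error mapping.
    W = fit_transition_model(src_states, src_next)
    n = tgt_states.shape[1]
    return min(itertools.permutations(range(n)),
               key=lambda p: mapping_error(W, tgt_states, tgt_next, list(p)))
```

For example, if the target task is the source task with its two state variables swapped, `best_mapping` recovers the swap, because only that permutation makes the source transition model's predictions line up with the observed target transitions.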
@InProceedings{AAMAS08-taylor,
  author    = "Matthew E.\ Taylor and Gregory Kuhlmann and Peter Stone",
  title     = "Autonomous Transfer for Reinforcement Learning",
  booktitle = "Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multiagent Systems ({AAMAS})",
  month     = "May",
  year      = "2008",
  pages     = "283--290",
  abstract  = {Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided either a full model of the tasks or an explicit relation mapping one task into the other. An autonomous agent may not have access to such high-level information, but would be able to analyze its experience to find similarities between tasks. In this paper we introduce Modeling Approximate State Transitions by Exploiting Regression (MASTER), a method for automatically learning a mapping from one task to another through an agent's experience. We empirically demonstrate that such learned relationships can significantly improve the speed of a reinforcement learning algorithm in a series of Mountain Car tasks. Additionally, we demonstrate that our method may also assist with the difficult problem of task selection for transfer.},
  note      = {22\% acceptance rate},
  wwwnote   = {<a href="http://gaips.inesc-id.pt/aamas2008/">AAMAS-2008</a>},
}
Generated by bib2html.pl (written by Patrick Riley) on Thu Jul 24, 2014 16:09:10