Matthew E. Taylor's Publications


Autonomous Transfer for Reinforcement Learning

Matthew E. Taylor, Gregory Kuhlmann, and Peter Stone. Autonomous Transfer for Reinforcement Learning. In Proceedings of the Seventh International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp. 283–290, May 2008. 22% acceptance rate.
AAMAS-2008

Download

[PDF] (233.7 kB)

Abstract

Recent work in transfer learning has succeeded in making reinforcement learning algorithms more efficient by incorporating knowledge from previous tasks. However, such methods typically must be provided either a full model of the tasks or an explicit relation mapping one task into the other. An autonomous agent may not have access to such high-level information, but would be able to analyze its experience to find similarities between tasks. In this paper we introduce Modeling Approximate State Transitions by Exploiting Regression (MASTER), a method for automatically learning a mapping from one task to another through an agent's experience. We empirically demonstrate that such learned relationships can significantly improve the speed of a reinforcement learning algorithm in a series of Mountain Car tasks. Additionally, we demonstrate that our method may also assist with the difficult problem of task selection for transfer.
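
The abstract describes MASTER only at a high level. As a rough illustration of the underlying idea (fit an approximate transition model on source-task experience, then score candidate inter-task state-variable mappings by how well that model predicts target-task transitions), here is a minimal sketch. The helper names (score_mappings, transition_error), the use of scikit-learn's LinearRegression, and the simplifying assumption that both tasks share the same state and action dimensionality are illustrative choices, not details taken from the paper.

# Hypothetical sketch of the regression-based mapping idea; not the
# authors' implementation.
from itertools import permutations

import numpy as np
from sklearn.linear_model import LinearRegression

def transition_error(model, states, actions, next_states):
    # Mean squared error of a one-step transition model on the given data.
    X = np.hstack([states, actions])
    return float(np.mean((model.predict(X) - next_states) ** 2))

def score_mappings(src_data, tgt_data):
    # src_data / tgt_data: tuples of (states, actions, next_states) arrays
    # collected from agent experience in the source and target tasks.
    s_states, s_actions, s_next = src_data
    t_states, t_actions, t_next = tgt_data

    # Learn an approximate transition model from source-task experience.
    model = LinearRegression()
    model.fit(np.hstack([s_states, s_actions]), s_next)

    # Try each permutation of the target state variables as a candidate
    # mapping onto the source variables; lower prediction error suggests
    # a better inter-task mapping.
    scores = {}
    for perm in permutations(range(t_states.shape[1])):
        idx = list(perm)
        scores[perm] = transition_error(
            model, t_states[:, idx], t_actions, t_next[:, idx])
    return scores

In this sketch the best-scoring permutation would be taken as the learned mapping; the same error scores could in principle rank several candidate source tasks, echoing the task-selection use mentioned in the abstract.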

BibTeX Entry

@InProceedings{AAMAS08-taylor,
  author    = "Matthew E.\ Taylor and Gregory Kuhlmann and Peter Stone",
  title     = "Autonomous Transfer for Reinforcement Learning",
  booktitle = "Proceedings of the Seventh International Joint Conference on
               Autonomous Agents and Multiagent Systems ({AAMAS})",
  month     = "May",
  year      = "2008",
  pages     = "283--290",
  abstract  = {Recent work in transfer learning has succeeded in making
               reinforcement learning algorithms more efficient by
               incorporating knowledge from previous tasks. However, such
               methods typically must be provided either a full model of
               the tasks or an explicit relation mapping one task into the
               other. An autonomous agent may not have access to such
               high-level information, but would be able to analyze its
               experience to find similarities between tasks. In this paper
               we introduce Modeling Approximate State Transitions by
               Exploiting Regression (MASTER), a method for automatically
               learning a mapping from one task to another through an
               agent's experience. We empirically demonstrate that such
               learned relationships can significantly improve the speed of
               a reinforcement learning algorithm in a series of Mountain
               Car tasks. Additionally, we demonstrate that our method may
               also assist with the difficult problem of task selection for
               transfer.},
  note      = "22\% acceptance rate",
  wwwnote   = {<a href="http://gaips.inesc-id.pt/aamas2008/">AAMAS-2008</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Thu Jul 24, 2014 16:09:10