Reinforcement Learning and Beyond, Part II: Transfer Learning in RL

A half-day tutorial at AAMAS-09

Part 2 of a 3-part tutorial: Sunday, May 10, afternoon




Slides
Part 1
Part 2a
Part 2b
Part 3



Overview

Machine learning algorithms often require a large amount of data to solve a given task, even when similar tasks have already been solved. The insight of transfer learning (TL) is that by using data from one or more related source tasks, an agent or set of agents can learn a target task with less data, which may be expensive to gather. A recent DARPA program; a set of workshops at NIPS, ICML, and AAAI; and a number of recent papers published in top venues have helped to increase interest in TL. One emerging use of transfer is in conjunction with reinforcement learning (RL). Recent work combining TL and RL shows that improved data efficiency may directly translate into improved learning performance, a critical issue for sequential decision-making problems in which collecting large amounts of data through direct interaction is infeasible (e.g., autonomous robotics). Although work on TL in RL provides techniques for a number of different problems, it is still difficult for non-specialists to select the most suitable method for a given task or RL algorithm.

The objectives of this tutorial are threefold. First, we introduce the key goals of transfer and briefly review techniques developed in a wide range of learning settings. Second, we survey successful approaches to transfer in RL while emphasizing when and how different methods can be used. Third, we discuss a set of open problems and research directions. With this tutorial, we expect that participants will be better prepared to both use and improve upon TL methods in RL domains.




Outline

We organize the tutorial as a half-day intensive course on transfer in RL. The tutorial is intended to be self-contained and will provide the basic tools of RL needed to understand the transfer methods discussed. Recent research directions in TL for RL will be organized according to their main characteristics, such as the type of transferred knowledge, the class of RL algorithm used, and task relatedness. In particular, we start with the simplest case, in which all tasks share the same state variables and actions. Here it is important to select the right type of knowledge to transfer (e.g., value functions, policies, instances), and a primary issue is avoiding negative transfer, in which learning performance actually decreases because of transfer. We then build up to significantly more complex solutions that transfer between tasks with different state variables and different actions, emphasizing how agents with different sensors and actuators may effectively share knowledge. We will show how a suitable mapping between two tasks can be provided to the transfer algorithm, and how it can be learned autonomously, even if agents in the source and target tasks are substantially different. A set of representative transfer techniques will be detailed; a small sketch of the simplest setting appears below. Finally, we will draw conclusions about the state of transfer in RL and discuss a selection of open issues in the field.
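To make the simplest style of transfer concrete, the following minimal Python sketch (not from the tutorial itself; the toy chain tasks, the hand-coded inter-task mapping, and all learning parameters are invented for illustration) initializes a target task's Q-table from a source task's learned values via a mapping between states, and then continues learning with standard tabular Q-learning.

import random
from collections import defaultdict

def q_learning(env, q, episodes=200, alpha=0.1, gamma=0.95, epsilon=0.1):
    """Tabular Q-learning that starts from (and updates) the given Q-table."""
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:
                a = random.choice(env.actions)
            else:
                a = max(env.actions, key=lambda act: q[(s, act)])
            s2, r, done = env.step(a)
            best_next = max(q[(s2, act)] for act in env.actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

class ChainEnv:
    """Toy chain MDP: move left/right along n states; reward at the right end."""
    def __init__(self, n):
        self.n, self.actions = n, ["left", "right"]
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = min(self.n - 1, self.s + 1) if a == "right" else max(0, self.s - 1)
        done = self.s == self.n - 1
        return self.s, (1.0 if done else -0.01), done

# 1) Learn the source task from scratch.
source = ChainEnv(n=5)
q_source = q_learning(source, defaultdict(float), episodes=300)

# 2) Transfer: initialize the target Q-table through an inter-task mapping.
#    Here the (invented) mapping scales target states onto the shorter source
#    chain; in general it must relate target states/actions to source ones.
target = ChainEnv(n=10)
def map_state(s_target):
    return min(4, s_target // 2)   # target state -> "closest" source state

q_target = defaultdict(float)
for s in range(target.n):
    for a in target.actions:       # the actions happen to coincide in this toy case
        q_target[(s, a)] = q_source[(map_state(s), a)]

# 3) Continue learning in the target task from the transferred initialization.
q_target = q_learning(target, q_target, episodes=100)
print("Greedy action in the target start state:",
      max(target.actions, key=lambda act: q_target[(0, act)]))

The same pattern applies when policies or instances, rather than value functions, are transferred; what changes is which quantity the inter-task mapping is applied to, and whether the mapping is given by hand (as here) or learned autonomously.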




Registration

Registration is handled directly by AAMAS.




Targeted Audience

Reinforcement learning is one of the most relevant learning techniques for the AAMAS community as it is explicitly agent-centric. Although RL has had significant successes, in many cases simple learning methods are too data-inefficient to be useful. Transfer is a general technique that may be used to improve single- and multi-agent learning performance. However, no coherent analysis or summary of current transfer research is available. This tutorial will provide non-experts with an introduction to and overview of the field. Additionally, we hope this tutorial will give researchers insights into how TL techniques can be adapted to other areas of research relevant to AAMAS, such as transfer in stochastic games, transfer between POMDP planners, and transfer between agents with different ontologies or representation languages.




About the Presenters


Alessandro Lazaric is a postdoc at the INRIA Lille-Nord Europe research center, working in the SequeL group with Remi Munos. He graduated magna cum laude in computer engineering from the Politecnico di Milano, Italy, in 2004. In 2005, he received a Master of Science from the College of Engineering at the University of Illinois at Chicago (UIC) and began a Ph.D. with support from MIURST (the Italian Ministry for University and Research). He received his doctorate from the Department of Electronics and Information at the Politecnico di Milano in May 2008, with a thesis on Knowledge Transfer in Reinforcement Learning. His current research interests include transfer learning, reinforcement learning, and multi-armed bandit problems.
Website



Matthew E. Taylor is a postdoctoral research associate at the University of Southern California, working with Milind Tambe. He graduated magna cum laude with a double major in computer science and physics from Amherst College in 2001. After working for two years as a software developer, he began his Ph.D. with support from the College of Natural Sciences' MCD fellowship. He received his doctorate from the Department of Computer Sciences at the University of Texas at Austin in the summer of 2008, after working under Prof. Peter Stone. His current research interests include transfer learning, reinforcement learning, human-agent interaction, and multi-agent systems.
Website




Questions?

Further questions may be directed to the corresponding presenter: taylorm@usc.edu.