Matthew E. Taylor's Publications


Reinforcement Learning Transfer via Sparse Coding

Haitham Bou Ammar, Karl Tuyls, Matthew E. Taylor, Kurt Driessens, and Gerhard Weiss. Reinforcement Learning Transfer via Sparse Coding. In International Conference on Autonomous Agents and Multiagent Systems (AAMAS), June 2012. 20% acceptance rate
AAMAS-12

Download

[PDF] (286.7 kB)

Abstract

Although reinforcement learning (RL) has been successfully deployed in a variety of tasks, learning speed remains a fundamental problem for applying RL in complex environments. Transfer learning aims to ameliorate this shortcoming by speeding up learning through the adaptation of previously learned behaviors in similar tasks. Transfer techniques often use an inter-task mapping, which determines how a pair of tasks are related. Instead of relying on a hand-coded inter-task mapping, this paper proposes a novel transfer learning method capable of autonomously creating an inter-task mapping by using a novel combination of sparse coding, sparse projection learning and sparse Gaussian processes. We also propose two new transfer algorithms (TrLSPI and TrFQI) based on least squares policy iteration and fitted-Q-iteration. Experiments not only show successful transfer of information between similar tasks, inverted pendulum to cart pole, but also between two very different domains: mountain car to cart pole. This paper empirically shows that the learned inter-task mapping can be successfully used to (1) improve the performance of a learned policy on a fixed number of environmental samples, (2) reduce the learning times needed by the algorithms to converge to a policy on a fixed number of samples, and (3) converge faster to a near-optimal policy given a large number of samples.
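The core building block named in the abstract is sparse coding: learning a dictionary whose sparse activations form a shared latent space in which states from different tasks can be related. As a rough illustration only (this is a generic dictionary-learning sketch with ISTA, not the paper's TrLSPI/TrFQI algorithms or its sparse projection/Gaussian-process components), the idea can be sketched as:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def sparse_codes(X, D, lam=0.1, n_iter=100):
    """ISTA: approximately solve min_A 0.5*||X - D A||_F^2 + lam*||A||_1."""
    L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the smooth part
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(n_iter):
        grad = D.T @ (D @ A - X)
        A = soft_threshold(A - grad / L, lam / L)
    return A

def learn_dictionary(X, n_atoms, lam=0.1, n_outer=20, seed=0):
    """Alternate between sparse coding and a least-squares dictionary update."""
    rng = np.random.default_rng(seed)
    D = rng.standard_normal((X.shape[0], n_atoms))
    D /= np.linalg.norm(D, axis=0)
    for _ in range(n_outer):
        A = sparse_codes(X, D, lam)
        D = X @ np.linalg.pinv(A)            # least-squares update
        D /= np.linalg.norm(D, axis=0) + 1e-12  # renormalise atoms
    return D

# Toy demo: synthetic 2-D "source task" states (stand-ins for, e.g.,
# mountain-car observations); real transfer would code both tasks'
# states against dictionaries and relate them in the latent space.
rng = np.random.default_rng(1)
X = rng.standard_normal((2, 200))
D = learn_dictionary(X, n_atoms=8)
A = sparse_codes(X, D)
print(D.shape, A.shape)  # codes live in the shared 8-dim latent space
```

All names here (`learn_dictionary`, `sparse_codes`, the dictionary sizes) are illustrative assumptions; the paper additionally learns sparse projections and sparse Gaussian processes to build the actual inter-task mapping.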

BibTeX Entry

@inproceedings{12AAMAS-Haitham,
  author = {Haitham Bou Ammar and Karl Tuyls and Matthew E.\ Taylor and Kurt Driessens and Gerhard Weiss},
  title = {Reinforcement Learning Transfer via Sparse Coding},
  booktitle = {International Conference on Autonomous Agents and Multiagent Systems ({AAMAS})},
  month = {June},
  year = {2012},
  note = {20\% acceptance rate},
  wwwnote = {<a href="http://aamas2012.webs.upv.es">AAMAS-12</a>},
  abstract = "Although reinforcement learning (RL) has been successfully deployed
in a variety of tasks, learning speed remains a fundamental
problem for applying RL in complex environments. Transfer learning
aims to ameliorate this shortcoming by speeding up learning
through the adaptation of previously learned behaviors in similar
tasks. Transfer techniques often use an inter-task mapping, which
determines how a pair of tasks are related. Instead of relying on a
hand-coded inter-task mapping, this paper proposes a novel transfer
learning method capable of autonomously creating an inter-task
mapping by using a novel combination of sparse coding, sparse
projection learning and sparse Gaussian processes. We also propose
two new transfer algorithms (TrLSPI and TrFQI) based on
least squares policy iteration and fitted-Q-iteration. Experiments
not only show successful transfer of information between similar
tasks, inverted pendulum to cart pole, but also between two very
different domains: mountain car to cart pole. This paper empirically
shows that the learned inter-task mapping can be successfully
used to (1) improve the performance of a learned policy on a fixed
number of environmental samples, (2) reduce the learning times
needed by the algorithms to converge to a policy on a fixed number
of samples, and (3) converge faster to a near-optimal policy given
a large number of samples.",
}

Generated by bib2html.pl (written by Patrick Riley) on Thu Jul 24, 2014 16:09:10