Matthew E. Taylor's Publications


Transfer Learning for Reinforcement Learning on a Physical Robot

Samuel Barrett, Matthew E. Taylor, and Peter Stone. Transfer Learning for Reinforcement Learning on a Physical Robot. In Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-10), May 2010.
ALA-10

Download

[PDF] (684.9kB)

Abstract

As robots become more widely available, many capabilities that were once only practical to develop and test in simulation are becoming feasible on real, physically grounded, robots. This newfound feasibility is important because simulators rarely represent the world with sufficient fidelity that developed behaviors will work as desired in the real world. However, development and testing on robots remains difficult and time consuming, so it is desirable to minimize the number of trials needed when developing robot behaviors.
This paper focuses on reinforcement learning (RL) on physically grounded robots. A few noteworthy exceptions notwithstanding, RL has typically been done purely in simulation, or, at best, initially in simulation with the eventual learned behaviors run on a real robot. However, some recent RL methods exhibit sufficiently low sample complexity to enable learning entirely on robots. One such method is transfer learning for RL. The main contribution of this paper is the first empirical demonstration that transfer learning can significantly speed up and even improve asymptotic performance of RL done entirely on a physical robot. In addition, we show that transferring information learned in simulation can bolster additional learning on the robot.
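The core idea summarized above, reusing knowledge from a source task to reduce the sample complexity of learning on a target task, can be illustrated with a toy sketch. The following is a hedged, assumption-laden example (a deterministic chain MDP with synchronous Q-function backups), not the paper's actual method, tasks, or robot:

```python
# Toy sketch of value-function transfer for RL (illustrative only; the chain
# MDP, backup scheme, and all names here are assumptions, not from the paper).

def q_sweeps(n_states, sweeps, q=None, gamma=0.9):
    """Synchronous Bellman backups of a tabular Q-function on a chain MDP.
    Actions: 0 = left, 1 = right; reaching the last state pays reward 1."""
    if q is None:
        q = [[0.0, 0.0] for _ in range(n_states)]  # q[state][action]
    for _ in range(sweeps):
        new = [[0.0, 0.0] for _ in range(n_states)]
        for s in range(n_states - 1):              # last state is terminal
            for a in (0, 1):
                s2 = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
                r = 1.0 if s2 == n_states - 1 else 0.0
                bootstrap = 0.0 if s2 == n_states - 1 else max(q[s2])
                new[s][a] = r + gamma * bootstrap
        q = new
    return q

def greedy_reaches_goal(q, n_states):
    """Does the greedy policy (ties break left) reach the goal state?"""
    s, steps = 0, 0
    while s != n_states - 1 and steps < 2 * n_states:
        a = 1 if q[s][1] > q[s][0] else 0
        s = max(0, min(n_states - 1, s + (1 if a == 1 else -1)))
        steps += 1
    return s == n_states - 1

N = 6
source_q = q_sweeps(N, 50)  # "source task": a fully trained Q-function

# How many extra training sweeps until the greedy policy solves the task?
from_scratch = next(k for k in range(100)
                    if greedy_reaches_goal(q_sweeps(N, k), N))
with_transfer = next(k for k in range(100)
                     if greedy_reaches_goal(
                         q_sweeps(N, k, q=[row[:] for row in source_q]), N))
print(from_scratch, with_transfer)  # prints "5 0": transfer needs no extra sweeps
```

Here transfer is simply copying the source Q-table into the target learner's initialization; the paper's setting (inter-task transfer on a physical robot, and from simulation to the robot) is far richer, but the sample-efficiency effect being measured is of this kind.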

BibTeX Entry

@inproceedings{ALA10-Barrett,
  author="Samuel Barrett and Matthew E.\ Taylor and Peter Stone",
  title="Transfer Learning for Reinforcement Learning on a Physical Robot",
  booktitle="Proceedings of the Adaptive and Learning Agents workshop (at AAMAS-10)",
  month="May",
  year="2010",
  wwwnote={<a href="http://www-users.cs.york.ac.uk/~grzes/ala10/">ALA-10</a>},
  abstract={
As robots become more widely available, many capabilities that were
once only practical to develop and test in simulation are becoming
feasible on real, physically grounded, robots.  This newfound
feasibility is important because simulators rarely represent the world
with sufficient fidelity that developed behaviors will work as desired
in the real world.  However, development and testing on robots remains
difficult and time consuming, so it is desirable to minimize the
number of trials needed when developing robot behaviors.
<br>
This paper focuses on reinforcement learning (RL) on physically
grounded robots.  A few noteworthy exceptions notwithstanding, RL has
typically been done purely in simulation, or, at best, initially in
simulation with the eventual learned behaviors run on a real robot.
However, some recent RL methods exhibit sufficiently low sample
complexity to enable learning entirely on robots.  One such method is
transfer learning for RL.  The main contribution of this paper is the
first empirical demonstration that transfer learning can significantly
speed up and even improve asymptotic performance of RL done entirely
on a physical robot.  In addition, we show that transferring
information learned in simulation can bolster additional learning on
the robot.}
}

Generated by bib2html.pl (written by Patrick Riley) on Thu Jul 24, 2014 16:09:11