Transfer Learning for Complex Tasks
An AAAI-08 workshop
All machine learning algorithms require data to learn, and the amount of data available is often a limiting factor. Classification requires labeled data, which may be expensive to obtain; reinforcement learning requires samples from an environment, which take time to gather. Recently, transfer learning (TL) has been gaining popularity as a way to improve learning performance. Rather than learning a novel target task in isolation, transfer approaches make use of data from one or more source tasks in order to learn the target task with less data or to achieve a higher level of performance.
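The source-task/target-task idea can be instantiated in many ways; the following is a minimal, purely illustrative NumPy sketch (not drawn from any workshop paper), in which a linear model fit on a data-rich source task is used to regularize a fit on a data-poor but related target task. All names, weights, and sample sizes below are invented for the example.

```python
import numpy as np

# Illustrative transfer sketch: shrink the target-task weights toward the
# weights learned on a related, data-rich source task.
rng = np.random.default_rng(0)

# Source task: plenty of labeled data.
w_true_source = np.array([1.0, -2.0, 0.5])
X_src = rng.normal(size=(500, 3))
y_src = X_src @ w_true_source + 0.1 * rng.normal(size=500)
w_src = np.linalg.lstsq(X_src, y_src, rcond=None)[0]

# Target task: related weights, but only a handful of labeled examples.
w_true_target = w_true_source + np.array([0.2, 0.1, -0.1])
X_tgt = rng.normal(size=(10, 3))
y_tgt = X_tgt @ w_true_target + 0.1 * rng.normal(size=10)

# Ridge-style solution that shrinks toward the source weights instead of zero:
#   w = argmin ||X w - y||^2 + lam * ||w - w_src||^2
lam = 1.0
A = X_tgt.T @ X_tgt + lam * np.eye(3)
b = X_tgt.T @ y_tgt + lam * w_src
w_transfer = np.linalg.solve(A, b)

# Baseline: fit the target task in isolation from its few examples.
w_scratch = np.linalg.lstsq(X_tgt, y_tgt, rcond=None)[0]

print("error with transfer:", np.linalg.norm(w_transfer - w_true_target))
print("error from scratch: ", np.linalg.norm(w_scratch - w_true_target))
```

With so few target examples, the transfer-biased fit typically recovers the target weights more accurately than fitting from scratch, which is the kind of data-efficiency gain transfer methods aim for.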
While transfer has long been studied in humans, it was first applied as a machine learning technique only in the mid-1990s. Although TL is making rapid progress, a number of open questions remain in the field, including:
- How can an appropriate source task be selected for a given target task?
- In some situations transfer decreases performance. Is it possible to avoid such negative transfer?
- How can one learn the relationship between a given source and target task, if such a relationship exists?
- What characteristics determine the effectiveness of transfer?
This workshop will give researchers working in TL an opportunity both to present their work and to discuss current topics of interest. We solicit papers that demonstrate empirical success in transferring knowledge between complex tasks, or that introduce transfer methods likely to scale to such problems. We are most interested in work that examines transfer between reinforcement learning agents, but transfer between any machine learning algorithms is within the scope of this workshop. All submissions will be reviewed for relevance, originality, significance, and clarity. Accepted work will be presented as either a talk or a poster.
Schedule
Each paper will be allocated 20 minutes for a talk and 10 minutes for
questions.
- 9:00-9:30
Introductions and overview
- 9:30-9:50
Transfer Learning for WiFi-based Indoor Localization
Sinno Jialin Pan, Vincent Wenchen Zheng, Qiang Yang, and Derek Hao Hu
slides
- 10:00-10:20
Cyclone Tracking using Multiple Satellite Data Sources via Spatial-Temporal Knowledge Transfer
Shen-Shyang Ho and Ashit Talukder
slides
- 10:30-11:00
Break
- 11:00-11:20
Transfer in Reinforcement Learning via Markov Logic Networks
Lisa Torrey, Jude Shavlik, Sriraam Natarajan, Pavan Kuppili, and Trevor Walker
slides
- 11:30-11:50
Deep Transfer Via Second-Order Markov Logic
Jesse Davis and Pedro Domingos
slides
- 12:00-12:20
Transfer Learning by Mapping with Minimal Target Data
Lilyana Mihalkova and Raymond J. Mooney
slides
- 12:30-2:00
Lunch
- 2:00-2:20
Learning a Transfer Function for Reinforcement Learning Problems
Tom Croonenborghs, Kurt Driessens, and Maurice Bruynooghe
slides
- 2:30-2:50
Learning and Transferring Relational Instance-Based Policies
Rocio Garcia-Duran, Fernando Fernandez, and Daniel Borrajo
slides
- 3:00-3:20
Learning and Transferring Roles in Multi-Agent MDPs
Aaron Wilson, Alan Fern, Soumya Ray, and Prasad Tadepalli
- 3:30-3:50
Break
- 4:00-4:20
Improving the Separability of a Reservoir Facilitates Learning Transfer
David Norton and Dan Ventura
slides
- 4:30-4:50
Cross-Domain Transfer of Constraints for Inductive Process Modeling
Will Bridewell and Ljupco Todorovski
slides
- 5:00-5:30
Panel discussion
Important dates
- Submissions due: April 7, 2008
- Notification: April 21, 2008
- Final papers and consent forms due: May 5, 2008
- Workshop: July 14, 2008 in Chicago at AAAI-08
Organizing Committee: Matthew E. Taylor (primary contact, The University of Texas at Austin), Alan Fern (Oregon State University), and Kurt Driessens (K. U. Leuven)
Senior Steering Committee: Peter Stone (The University of Texas at Austin), Richard Maclin (The University of Minnesota), and Jude Shavlik (The University of Wisconsin at Madison)