Matthew E. Taylor's Publications


Guiding Inference with Policy Search Reinforcement Learning

Matthew E. Taylor, Cynthia Matuszek, Pace Reagan Smith, and Michael Witbrock. Guiding Inference with Policy Search Reinforcement Learning. In Proceedings of the Twentieth International FLAIRS Conference (FLAIRS), May 2007. 52% acceptance rate.
FLAIRS-2007

Download

[PDF] (138.5 kB)

Abstract

Symbolic reasoning is a well understood and effective approach to handling reasoning over formally represented knowledge; however, simple symbolic inference systems necessarily slow as complexity and ground facts grow. As automated approaches to ontology-building become more prevalent and sophisticated, knowledge base systems become larger and more complex, necessitating techniques for faster inference. This work uses reinforcement learning, a statistical machine learning technique, to learn control laws which guide inference. We implement our learning method in ResearchCyc, a very large knowledge base with millions of assertions. A large set of test queries, some of which require tens of thousands of inference steps to answer, can be answered faster after training over an independent set of training queries. Furthermore, this learned inference module outperforms ResearchCyc's integrated inference module, a module that has been hand-tuned with considerable effort.
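
A minimal sketch of the general idea, for illustration only: policy search reinforcement learning tunes the parameters of a policy that scores candidate steps, judging each candidate parameter setting by whole-episode return rather than step-by-step value estimates. The toy "inference" task, the two-feature candidates, and the hill-climbing search below are assumptions made for this sketch; they are not taken from the paper or from ResearchCyc.

# Illustrative sketch of policy-search RL for guiding a search process.
# NOT the paper's implementation: the toy task, the features, and the
# hill-climbing search are assumptions made purely for illustration.
import random

random.seed(0)

def run_episode(weights, max_steps=200):
    """Toy episode: answer a 'query' by reaching depth 10 in a search tree."""
    depth, steps = 0, 0
    while depth < 10 and steps < max_steps:
        # Three candidate steps, each described by (progress, cost) features.
        candidates = [(random.random(), random.random()) for _ in range(3)]
        # Linear policy: score each candidate, take the highest-scoring one.
        scores = [weights[0] * p + weights[1] * c for p, c in candidates]
        progress, _ = candidates[scores.index(max(scores))]
        if progress > 0.5:        # productive steps move the search deeper
            depth += 1
        steps += 1
    return -steps                 # return = negative step count (fewer is better)

def average_return(weights, episodes=50):
    return sum(run_episode(weights) for _ in range(episodes)) / episodes

# Policy search: random hill climbing on the policy weights.
weights = [0.0, 0.0]
best = average_return(weights)
for _ in range(100):
    candidate = [w + random.gauss(0, 0.5) for w in weights]
    value = average_return(candidate)
    if value > best:
        weights, best = candidate, value

print("learned weights:", weights, "average return:", best)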

BibTeX Entry

@InProceedings{FLAIRS07-taylor-inference,
        author="Matthew E.\ Taylor and Cynthia Matuszek and Pace Reagan Smith and Michael Witbrock",
        title="Guiding Inference with Policy Search Reinforcement Learning",
        booktitle="Proceedings of the Twentieth International FLAIRS Conference {(FLAIRS})",
        month="May",year="2007", 
        abstract="Symbolic reasoning is a well understood and
                  effective approach to handling reasoning over
                  formally represented knowledge; however, simple
                  symbolic inference systems necessarily slow as
                  complexity and ground facts grow. As automated
                  approaches to ontology-building become more
                  prevalent and sophisticated, knowledge base systems
                  become larger and more complex, necessitating
                  techniques for faster inference. This work uses
                  reinforcement learning, a statistical machine
                  learning technique, to learn control laws which
                  guide inference. We implement our learning method in
                  ResearchCyc, a very large knowledge base with
                  millions of assertions. A large set of test queries,
                  some of which require tens of thousands of inference
                  steps to answer, can be answered faster after
                  training over an independent set of training
                  queries. Furthermore, this learned inference module
                  outperforms ResearchCyc's integrated inference
                  module, a module that has been hand-tuned with
                  considerable effort.",
        note = {52\% acceptance rate},
        wwwnote={<a href="http://www.cise.ufl.edu/~ddd/FLAIRS/flairs2007/">FLAIRS-2007</a>},
}

Generated by bib2html.pl (written by Patrick Riley) on Thu Jul 24, 2014 16:09:10