Matthew E. Taylor's Publications


Feature Selection and Policy Optimization for Distributed Instruction Placement Using Reinforcement Learning

Katherine K. Coons, Behnam Robatmili, Matthew E. Taylor, Bertrand A. Maher, Kathryn McKinley, and Doug Burger. Feature Selection and Policy Optimization for Distributed Instruction Placement Using Reinforcement Learning. In Proceedings of the Seventh International Joint Conference on Parallel Architectures and Compilation Techniques (PACT), pp. 32–42, October 2008. 19% acceptance rate.
PACT-2008

Download

[PDF] 297.8kB

Abstract

Communication overheads are one of the fundamental challenges in a multiprocessor system. As the number of processors on a chip increases, communication overheads and the distribution of computation and data become increasingly important performance factors. Explicit Dataflow Graph Execution (EDGE) processors, in which instructions communicate with one another directly on a distributed substrate, give the compiler control over communication overheads at a fine granularity. Prior work shows that compilers can effectively reduce fine-grained communication overheads in EDGE architectures using a spatial instruction placement algorithm with a heuristic-based cost function. While this algorithm is effective, the cost function must be painstakingly tuned. Heuristics tuned to perform well across a variety of applications leave users with little ability to tune performance-critical applications, yet we find that the best placement heuristics vary significantly with the application.

First, we suggest a systematic feature selection method that reduces the feature set size based on the extent to which features affect performance. To automatically discover placement heuristics, we then use these features as input to a reinforcement learning technique, called Neuro-Evolution of Augmenting Topologies (NEAT), that uses a genetic algorithm to evolve neural networks. We show that NEAT outperforms simulated annealing, the most commonly used optimization technique for instruction placement. We use NEAT to learn general heuristics that are as effective as hand-tuned heuristics, but we find that improving over highly hand-tuned general heuristics is difficult. We then suggest a hierarchical approach to machine learning that classifies segments of code with similar characteristics and learns heuristics for these classes. This approach performs closer to the specialized heuristics. Together, these results suggest that learning compiler heuristics may benefit from both improved feature selection and classification.
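The evolutionary search described above can be illustrated with a minimal sketch: a population of candidate heuristics (here reduced to linear weight vectors over placement features) is scored by a cost function, the best survive, and mutated copies refill the population. This is a much simpler scheme than NEAT, which also evolves network topology, and every name, feature, and the toy cost function below are illustrative assumptions rather than the paper's actual setup, where fitness would come from compiling and simulating real benchmarks.

```python
# Minimal sketch of evolutionary heuristic search (truncation selection +
# Gaussian mutation). Simpler than NEAT: fixed-size linear "heuristics"
# instead of evolved network topologies. All constants are illustrative.
import random

random.seed(0)

N_FEATURES = 4    # hypothetical placement features, e.g. link distance, ALU load
POP_SIZE = 20
GENERATIONS = 30

def toy_cost(weights):
    """Toy stand-in for placement cost (lower is better). A real system
    would compile, place, and simulate a benchmark with this heuristic."""
    target = [0.5, -0.2, 0.8, 0.1]  # pretend 'ideal' heuristic weights
    return sum((w - t) ** 2 for w, t in zip(weights, target))

def mutate(weights, sigma=0.1):
    """Perturb each weight with Gaussian noise."""
    return [w + random.gauss(0, sigma) for w in weights]

def evolve():
    # Random initial population of weight vectors.
    pop = [[random.uniform(-1, 1) for _ in range(N_FEATURES)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=toy_cost)                   # rank by placement cost
        survivors = pop[:POP_SIZE // 2]          # truncation selection
        children = [mutate(random.choice(survivors)) for _ in survivors]
        pop = survivors + children
    return min(pop, key=toy_cost)

best = evolve()
```

Because survivors are kept unmutated each generation, the best cost is monotonically non-increasing; swapping `toy_cost` for a compiler-in-the-loop evaluation is what makes the real problem expensive.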

BibTeX Entry

@InProceedings{PACT08-coons,
  author="Katherine K. Coons and Behnam Robatmili and Matthew E. Taylor
  and Bertrand A. Maher and Kathryn McKinley and Doug Burger",
  title="Feature Selection and Policy Optimization for Distributed
  Instruction Placement Using Reinforcement Learning",
  booktitle="Proceedings of the Seventh International Joint
        Conference on Parallel Architectures and Compilation
        Techniques ({PACT})",
  month="October",
  year="2008",
  pages="32--42",
  note = {19\% acceptance rate},
  wwwnote={<a href="http://www.eecg.toronto.edu/pact/">PACT-2008</a>},
  abstract = {Communication overheads are one of the fundamental challenges in a
multiprocessor system. As the number of processors on a chip increases,
communication overheads and the distribution of computation and data
become increasingly important performance factors. Explicit Dataflow
Graph Execution (EDGE) processors, in which instructions communicate
with one another directly on a distributed substrate, give the compiler
control over communication overheads at a fine granularity. Prior work
shows that compilers can effectively reduce fine-grained communication
overheads in EDGE architectures using a spatial instruction placement
algorithm with a heuristic-based cost function. While this algorithm is
effective, the cost function must be painstakingly tuned. Heuristics tuned
to perform well across a variety of applications leave users with little
ability to tune performance-critical applications, yet we find that the
best placement heuristics vary significantly with the application.
<p>
First, we suggest a systematic feature selection method that reduces the
feature set size based on the extent to which features affect performance.
To automatically discover placement heuristics, we then use these features
as input to a reinforcement learning technique, called Neuro-Evolution
of Augmenting Topologies (NEAT), that uses a genetic algorithm to evolve
neural networks. We show that NEAT outperforms simulated annealing, the
most commonly used optimization technique for instruction placement. We
use NEAT to learn general heuristics that are as effective as hand-tuned
heuristics, but we find that improving over highly hand-tuned general
heuristics is difficult. We then suggest a hierarchical approach
to machine learning that classifies segments of code with similar
characteristics and learns heuristics for these classes. This approach
performs closer to the specialized heuristics. Together, these results
suggest that learning compiler heuristics may benefit from both improved
feature selection and classification.
},
}

Generated by bib2html.pl (written by Patrick Riley) on Thu Jul 24, 2014 16:09:10