Matthew E. Taylor's Publications


Distributed learning and multi-objectivity in traffic light control

Tim Brys, Tong T. Pham, and Matthew E. Taylor. Distributed learning and multi-objectivity in traffic light control. Connection Science, 26(1):65–83, 2014.

Download

[PDF] (756.1 kB)

Abstract

Traffic jams and suboptimal traffic flows are ubiquitous in modern societies, and they create enormous economic losses each year. Delays at traffic lights alone account for roughly 10% of all delays in US traffic. As most traffic light scheduling systems currently in use are static, set up by human experts rather than being adaptive, interest in machine learning approaches to this problem has increased in recent years. Reinforcement learning (RL) approaches are often used in these studies, as they require little pre-existing knowledge about traffic flows. Distributed constraint optimisation (DCOP) approaches have also been shown to be successful, but are limited to cases where the traffic flows are known. The distributed coordination of exploration and exploitation (DCEE) framework was recently proposed to introduce learning into the DCOP framework. In this paper, we present a study of DCEE and RL techniques in a complex simulator, illustrating the particular advantages of each and comparing them against standard isolated traffic-actuated signals. We analyse how learning and coordination behave under different traffic conditions, and discuss the multi-objective nature of the problem. Finally, we evaluate several alternative reward signals in the best-performing approach, some of which take advantage of the correlation between the problem-inherent objectives to improve performance.
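As a concrete illustration of the RL approach the abstract refers to, the following is a minimal sketch of a tabular Q-learning controller for a single intersection. The state, actions, reward, and toy arrival dynamics below are illustrative assumptions for this sketch, not the setup used in the paper, which works in a complex traffic simulator and evaluates several alternative reward signals.

# Minimal sketch (assumed setup, not the paper's): tabular Q-learning for one
# intersection with two approaches (NS and EW). State is the pair of queue
# lengths; reward is the negative total queue length, one of many possible
# reward signals of the kind the paper compares.
import random
from collections import defaultdict

ACTIONS = ["NS_green", "EW_green"]        # which approach gets the green phase
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1    # learning rate, discount, exploration

Q = defaultdict(float)                    # Q[(state, action)] -> estimated value

def choose_action(state):
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """One Q-learning backup."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

def step_env(queues, action):
    """Toy dynamics: 0-2 cars arrive on each approach per step; the approach
    with green discharges up to 3 cars. Returns (next_state, reward)."""
    ns, ew = queues
    ns += random.randint(0, 2)
    ew += random.randint(0, 2)
    if action == "NS_green":
        ns = max(0, ns - 3)
    else:
        ew = max(0, ew - 3)
    return (ns, ew), -(ns + ew)

state = (0, 0)
for _ in range(10_000):
    action = choose_action(state)
    next_state, reward = step_env(state, action)
    update(state, action, reward, next_state)
    state = next_state

# Greedy choice with a long NS queue; a trained agent should favour NS_green.
print(max(ACTIONS, key=lambda a: Q[((8, 1), a)]))

The single-agent sketch omits what makes the paper's setting hard: with many intersections, the agents' choices interact, which is what motivates the coordinated DCOP/DCEE approaches studied alongside RL.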

BibTeX Entry

@article{14ConnectionScience-Brys,
author = {Tim Brys and Tong T.\ Pham and Matthew E.\ Taylor},
title = {Distributed learning and multi-objectivity in traffic light control},
journal = {Connection Science},
volume = {26},
number = {1},
pages = {65--83},
year = {2014},
doi = {10.1080/09540091.2014.885282},
URL = {http://dx.doi.org/10.1080/09540091.2014.885282},
}
