
Area-wide traffic signal control based on a deep graph Q-Network (DGQN) trained in an asynchronous manner


Abstract: Reinforcement learning (RL) algorithms have been widely applied in traffic signal studies. There are, however, several problems in jointly controlling traffic lights for a large transportation network. First, the action space explodes exponentially as the number of intersections to be jointly controlled increases. Although a multi-agent RL algorithm has been used to address the curse of dimensionality, it neither guaranteed a global optimum nor could it break the ties between joint actions. This problem was circumvented by revising the output structure of a deep Q-network (DQN) within the framework of a single-agent RL algorithm. Second, when mapping traffic states to an action value, it is difficult to consider spatio-temporal correlations over a large transportation network. A deep graph Q-network (DGQN) was devised to efficiently accommodate spatio-temporal dependencies on a large scale. Finally, training an RL model to jointly control traffic lights in a large transportation network requires much time to converge. An asynchronous update methodology was devised so that a DGQN quickly reaches an optimal policy. Using these three remedies, a DGQN succeeded in jointly controlling the traffic lights in a large transportation network in Seoul. This approach outperformed other state-of-the-art RL algorithms as well as the actual fixed-time signal operation.
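The abstract's first remedy, revising the DQN output structure so that a single agent emits per-intersection action values instead of enumerating every joint signal combination, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the class name, layer sizes, and the plain MLP encoder are assumptions, and the paper's DGQN would replace the encoder with a graph network over the road topology. It only shows how a factorized output head keeps the number of Q-values linear in the number of intersections rather than exponential.

```python
# Hedged sketch: a single-agent Q-network with one output head per intersection.
import torch
import torch.nn as nn

class FactorizedDQN(nn.Module):
    def __init__(self, state_dim: int, num_intersections: int, num_phases: int):
        super().__init__()
        self.num_intersections = num_intersections
        self.num_phases = num_phases
        # Placeholder state encoder (the paper uses a graph-based encoder instead).
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
        )
        # num_intersections * num_phases Q-values in total: linear growth in the
        # number of intersections, not |phases| ** num_intersections.
        self.q_head = nn.Linear(256, num_intersections * num_phases)

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        h = self.encoder(state)
        # Shape: (batch, num_intersections, num_phases)
        return self.q_head(h).view(-1, self.num_intersections, self.num_phases)

    def greedy_joint_action(self, state: torch.Tensor) -> torch.Tensor:
        # The joint action is assembled from a per-intersection argmax over
        # that intersection's own phase values, which also breaks ties locally.
        return self.forward(state).argmax(dim=-1)
```

Under this output structure, selecting a joint signal plan for, say, 30 intersections with 4 phases each requires comparing 30 x 4 = 120 values rather than 4^30 joint actions.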
