
Decentralized Deep Reinforcement Learning for Network Level Traffic Signal Control

2021-05-05


Document length: 90 pages

Abstract: In this thesis, I propose a family of fully decentralized deep multi-agent reinforcement learning (MARL) algorithms to achieve high real-time performance in network-level traffic signal control. In this approach, each intersection is modeled as an agent that plays a Markovian game against the other intersection nodes in a traffic signal network modeled as an undirected graph, so as to approach the optimal reduction in delay. Following the framework of partially observable Markov decision processes (POMDPs), three levels of communication schemes between adjacent learning agents are considered: independent deep Q-learning (IDQL), shared-states reinforcement learning (S2RL), and a shared states and rewards version of S2RL, denoted S2R2L. In all three decentralized MARL schemes, each agent trains its local deep Q network (DQN) separately, enhanced by convergence-promoting techniques such as double DQN, prioritized experience replay, and multi-step bootstrapping. To test the performance of the three proposed MARL algorithms, a SUMO-based simulation platform is developed to mimic real-world traffic evolution. Fed with random traffic demand between permitted OD pairs, a 4x4 Manhattan-style grid network is set up as the testbed, and two different vehicle arrival rates are generated for model training and testing. The experimental results show that S2R2L has a quicker convergence rate and better convergent performance than IDQL and S2RL in the training process. Moreover, all three MARL schemes reveal strong generalization abilities: their testing results surpass the benchmark Max Pressure (MP) algorithm under the criteria of average vehicle delay, network-level queue length, and fuel consumption rate. Notably, S2R2L has the best testing performance, reducing traffic delay by 34.55% and queue length by 10.91% compared with MP.
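The abstract names double DQN and multi-step bootstrapping among the techniques used to stabilize each agent's local DQN training. As a minimal illustration only (the thesis defines its own update rule; the function name, argument names, and toy values below are assumptions, not taken from the work), the n-step double-DQN target for one transition could be computed like this:

```python
import numpy as np

def double_dqn_nstep_target(rewards, gamma, q_online_next, q_target_next, done):
    """Illustrative n-step double-DQN target for a single transition.

    rewards       : the n rewards r_t, ..., r_{t+n-1} along the sampled trajectory
    gamma         : discount factor
    q_online_next : online-network Q-values at state s_{t+n} (selects the action)
    q_target_next : target-network Q-values at state s_{t+n} (evaluates the action)
    done          : True if the episode terminated within these n steps
    """
    n = len(rewards)
    # Discounted sum of the n observed rewards.
    g = sum((gamma ** i) * r for i, r in enumerate(rewards))
    if not done:
        # Double DQN: pick argmax with the online net, evaluate with the target net,
        # which reduces the overestimation bias of vanilla Q-learning targets.
        a_star = int(np.argmax(q_online_next))
        g += (gamma ** n) * q_target_next[a_star]
    return g

# Toy example with n = 2 steps and two actions:
target = double_dqn_nstep_target(
    rewards=[1.0, 0.0], gamma=0.9,
    q_online_next=np.array([0.0, 2.0]),   # online net prefers action 1
    q_target_next=np.array([5.0, 1.0]),   # but its value comes from the target net
    done=False,
)
# target = 1.0 + 0.9*0.0 + 0.81 * q_target_next[1] = 1.81
```

In a fully decentralized scheme such as those the abstract describes, each intersection agent would form targets like this from its own local replay buffer.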
