
Accelerated Deep Reinforcement Learning Based Load Shedding for Emergency Voltage Control


Document pages: 12 pages

Abstract: Load shedding has been one of the most widely used and effective emergency control approaches against voltage instability. With increased uncertainties and rapidly changing operational conditions in power systems, existing methods have outstanding issues in terms of speed, adaptiveness, or scalability. Deep reinforcement learning (DRL) has been regarded and adopted as a promising approach for fast and adaptive grid stability control in recent years. However, existing DRL algorithms show two outstanding issues when applied to power system control problems: 1) computational inefficiency that requires extensive training and tuning time; and 2) poor scalability that makes it difficult to scale to high-dimensional control problems. To overcome these issues, an accelerated DRL algorithm named PARS was developed and tailored for power system voltage stability control via load shedding. PARS features high scalability and is easy to tune, with only five main hyperparameters. The method was tested on both the IEEE 39-bus and IEEE 300-bus systems, the latter being by far the largest scale for such a study. Test results show that, compared to other methods including model-predictive control (MPC) and proximal policy optimization (PPO), PARS achieves better computational efficiency (faster convergence), more robustness in learning, and excellent scalability and generalization capability.
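The abstract does not include the algorithm's details, but PARS-style methods build on Augmented Random Search (ARS): the policy parameters are perturbed in random directions, rollouts are evaluated for each antithetic pair of perturbations, and the parameters are updated from the best-performing directions. The sketch below is a minimal, hypothetical single-update ARS step, not the paper's implementation; the function name `ars_step` and the reward callback `rollout` are assumptions for illustration. Note that the handful of knobs (number of directions, top directions kept, step size, exploration noise) matches the abstract's claim of only a few main hyperparameters.

```python
import numpy as np

def ars_step(theta, rollout, n_dirs=8, top_b=4, step_size=0.02,
             noise=0.03, rng=None):
    # One Augmented Random Search update on a flat parameter vector `theta`.
    # `rollout(theta)` must return the episode reward for that policy.
    rng = np.random.default_rng() if rng is None else rng
    deltas = rng.standard_normal((n_dirs, theta.size))
    # Antithetic evaluation: reward for the +delta and -delta perturbations.
    r_plus = np.array([rollout(theta + noise * d) for d in deltas])
    r_minus = np.array([rollout(theta - noise * d) for d in deltas])
    # Keep the top_b directions, ranked by the better reward of each pair.
    keep = np.argsort(np.maximum(r_plus, r_minus))[::-1][:top_b]
    # Normalize by the reward standard deviation of the kept rollouts.
    sigma = np.concatenate([r_plus[keep], r_minus[keep]]).std() + 1e-8
    # Finite-difference gradient estimate from the antithetic reward gaps.
    grad = ((r_plus[keep] - r_minus[keep]) @ deltas[keep]) / (top_b * sigma)
    return theta + step_size * grad
```

The acceleration the paper reports comes largely from the fact that each of the `2 * n_dirs` rollouts is independent, so they can be dispatched to parallel workers (here they run sequentially for clarity).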
