
Chance Constrained Policy Optimization for Process Control and Optimization

2021-05-06

Document pages: 12 pages

Abstract: Chemical process optimization and control are affected by (1) plant-model mismatch, (2) process disturbances, and (3) constraints for safe operation. Reinforcement learning by policy optimization would be a natural way to solve this, given its ability to address stochasticity and plant-model mismatch and to directly account for the effect of future uncertainty and its feedback in a proper closed-loop manner, all without the need for an inner optimization loop. One of the main reasons reinforcement learning has not been considered for industrial processes (or almost any engineering application) is that it lacks a framework for dealing with safety-critical constraints. Present algorithms for policy optimization use difficult-to-tune penalty parameters, fail to reliably satisfy state constraints, or provide guarantees only in expectation. We propose a chance constrained policy optimization (CCPO) algorithm which guarantees the satisfaction of joint chance constraints with high probability, which is crucial for safety-critical tasks. This is achieved by introducing constraint tightening (backoffs), which are computed simultaneously with the feedback policy. Backoffs are adjusted with Bayesian optimization using the empirical cumulative distribution function of the probabilistic constraints, and are therefore self-tuned. This results in a general methodology that can be imbued into present policy optimization algorithms to enable them to satisfy joint chance constraints with high probability. We present case studies that analyze the performance of the proposed approach.
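The backoff-tuning idea in the abstract can be sketched in a toy form: estimate the empirical probability of joint constraint satisfaction from Monte Carlo rollouts, then search for the smallest constraint tightening that achieves a target satisfaction level. This is not the paper's implementation; the paper adjusts backoffs with Bayesian optimization, while the sketch below uses a simple bisection stand-in for the outer search, and `simulate` is a hypothetical placeholder for rolling out the policy trained with a given backoff.

```python
import numpy as np

def empirical_joint_satisfaction(constraint_samples):
    """Fraction of Monte Carlo rollouts in which every constraint holds.

    constraint_samples: array of shape (n_rollouts, n_constraints) holding,
    for each rollout, the worst-case value of each constraint over the
    trajectory; the joint chance constraint is satisfied in a rollout when
    all entries are <= 0.
    """
    satisfied = np.all(constraint_samples <= 0.0, axis=1)
    return satisfied.mean()

def tune_backoff(simulate, alpha=0.95, n_rollouts=500, n_iters=20):
    """Bisection stand-in for the Bayesian-optimization backoff search.

    Finds (approximately) the smallest backoff b in [0, 1] such that the
    policy trained against the tightened constraints satisfies the original
    joint chance constraint with empirical probability >= alpha.
    `simulate(b, n)` is assumed to return constraint samples from n rollouts
    of the policy obtained with backoff b.
    """
    lo, hi = 0.0, 1.0
    for _ in range(n_iters):
        b = 0.5 * (lo + hi)
        samples = simulate(b, n_rollouts)
        if empirical_joint_satisfaction(samples) >= alpha:
            hi = b   # target met: try a smaller (less conservative) backoff
        else:
            lo = b   # target missed: tighten further
    return hi
```

The empirical satisfaction frequency is exactly the empirical cumulative distribution function of the worst-case constraint value evaluated at zero, which is what makes the procedure self-tuned: no penalty parameter has to be chosen by hand, only the target probability alpha.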
