
Online Stochastic Convex Optimization: Wasserstein Distance Variation


Document pages: 15

Abstract: Distributionally robust optimization is often studied for a fixed set of distributions rather than for time-varying distributions that can drift significantly over time (which is, for instance, the case in finance and sociology due to the underlying expansion of the economy and the evolution of demographics). This motivates understanding conditions on probability distributions, expressed via the Wasserstein distance, that can be used to model time-varying environments. These conditions can then be used in conjunction with online stochastic optimization to adapt decisions over time. We consider an online proximal-gradient method to track the minimizers of expectations of smooth convex functions parameterised by a random variable whose probability distribution continuously evolves over time, at a rate similar to that at which the decision maker acts. We revisit the concepts of estimation and tracking error, inspired by the systems and control literature, and provide bounds for them under strong convexity, Lipschitz continuity of the gradient, and bounds on the probability distribution drift. Further, noting that computing projections onto general feasible sets might not be amenable to online implementation (due to computational constraints), we propose an exact penalty method. Doing so allows us to relax the uniform boundedness of the gradient and to establish dynamic regret bounds for the tracking and estimation errors. We further introduce a constraint-tightening approach and relate the amount of tightening to the probability of satisfying the constraints.
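To make the tracking setting concrete, here is a minimal sketch of an online proximal-gradient iteration following a drifting stochastic minimizer. Everything specific below is an illustrative assumption, not the paper's exact setup: a scalar quadratic loss 0.5*(x - xi)^2, Gaussian samples whose mean drifts (so consecutive distributions are a small Wasserstein distance apart), a box constraint whose proximal operator is a projection, and the chosen step size and drift rate.

```python
import numpy as np

# Hedged sketch: online proximal-gradient tracking of a time-varying minimizer.
# Loss at time t: f_t(x) = E[0.5 * (x - xi)^2], xi ~ N(theta_t, sigma^2),
# so the minimizer is theta_t, which drifts each round (illustrative choice).

rng = np.random.default_rng(0)

def prox_box(x, lo=-1.0, hi=1.0):
    # Prox of the indicator of a box constraint = Euclidean projection.
    return np.clip(x, lo, hi)

T, eta, sigma, drift = 500, 0.2, 0.1, 0.002
theta = 0.0            # mean of the current distribution (true minimizer)
x = 0.9                # decision variable
errors = []
for t in range(T):
    xi = theta + sigma * rng.standard_normal()  # one sample from the current distribution
    grad = x - xi                               # stochastic gradient of 0.5*(x - xi)^2
    x = prox_box(x - eta * grad)                # proximal-gradient step
    errors.append(abs(x - theta))               # tracking error |x_t - theta_t|
    theta += drift                              # distribution drifts between rounds

print(f"mean tracking error over last 100 steps: {np.mean(errors[-100:]):.3f}")
```

The point the sketch illustrates is that when the per-round drift is small relative to the step size, the iterate settles into a neighborhood of the moving minimizer rather than converging to a fixed point, which is the behavior the estimation and tracking error bounds quantify.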
