
An improved convergence analysis for decentralized online stochastic non-convex optimization


Abstract: In this paper, we study decentralized online stochastic non-convex optimization over a network of nodes. Integrating a technique called gradient tracking into decentralized stochastic gradient descent, we show that the resulting algorithm, GT-DSGD, enjoys certain desirable characteristics towards minimizing a sum of smooth non-convex functions. In particular, for general smooth non-convex functions, we establish non-asymptotic characterizations of GT-DSGD and derive the conditions under which it achieves network-independent performance that matches centralized minibatch SGD. In contrast, the existing results suggest that GT-DSGD is always network-dependent and is therefore strictly worse than centralized minibatch SGD. When the global non-convex function additionally satisfies the Polyak-Łojasiewicz (PL) condition, we establish the linear convergence of GT-DSGD up to a steady-state error with appropriate constant step-sizes. Moreover, under stochastic approximation step-sizes, we establish, for the first time, the optimal global sublinear convergence rate on almost every sample path, in addition to the asymptotically optimal sublinear rate in expectation. Since strongly convex functions are a special case of the functions satisfying the PL condition, our results are not only immediately applicable but also improve the currently known best convergence rates and their dependence on problem parameters.
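To make the algorithm described in the abstract concrete, the sketch below illustrates the standard gradient-tracking update in NumPy: each node mixes its neighbors' iterates through a doubly stochastic matrix and maintains a tracker of the global gradient via a correction term. This is a minimal illustration under assumed names (stoch_grad, W, alpha are placeholders), not the authors' implementation.

```python
import numpy as np

def gt_dsgd(stoch_grad, W, x0, alpha, num_iters, rng):
    """Sketch of a GT-DSGD iteration (hypothetical helper names).

    stoch_grad(i, x, rng) -> noisy gradient of node i's local function at x
    W  : (n, n) doubly stochastic mixing matrix of the network
    x0 : (n, d) initial iterates, one row per node
    alpha : constant step-size
    """
    n, d = x0.shape
    x = x0.copy()
    # Gradient trackers are initialized with the first local stochastic gradients.
    g_prev = np.stack([stoch_grad(i, x[i], rng) for i in range(n)])
    y = g_prev.copy()
    for _ in range(num_iters):
        # Consensus on the descent step: x_i <- sum_j w_ij * (x_j - alpha * y_j)
        x = W @ (x - alpha * y)
        g_new = np.stack([stoch_grad(i, x[i], rng) for i in range(n)])
        # Gradient tracking: y_i <- sum_j w_ij * y_j + g_i(x_new) - g_i(x_old)
        y = W @ y + g_new - g_prev
        g_prev = g_new
    return x
```

In practice, W is typically built from the network topology (for example, Metropolis weights), and the step-size alpha would be chosen according to the conditions derived in the paper; the sketch only shows the structure of the update.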
