TY - JOUR
T1 - Accelerated Dual Averaging Methods for Decentralized Constrained Optimization
AU - Liu, Changxin
AU - Shi, Yang
AU - Li, Huiping
AU - Du, Wenli
N1 - Publisher Copyright:
© 1963-2012 IEEE.
PY - 2023/4/1
Y1 - 2023/4/1
N2 - In this article, we study decentralized convex constrained optimization problems in networks. We focus on the dual-averaging-based algorithmic framework, which is well documented to be superior in handling constraints and complex communication environments simultaneously. Two new decentralized dual averaging (DDA) algorithms are proposed. In the first one, a second-order dynamic average consensus protocol is tailored for DDA-type algorithms, which equips each agent with a provably more accurate estimate of the global dual variable than conventional schemes. We rigorously prove that the proposed algorithm attains O(1/t) convergence for general convex and smooth problems, for which existing DDA methods were only known to converge at O(1/√t) prior to our work. In the second one, we use an extrapolation technique to accelerate the convergence of DDA. Compared to existing accelerated algorithms, where typically two different variables are exchanged among agents at each iteration, the proposed algorithm only seeks consensus on local gradients. The extrapolation is then performed based on two sequences of primal variables, which are determined by the accumulated gradients at two consecutive time instants, respectively. The algorithm is proved to converge at a rate of O(1/t^2 + 1/(t(1-β)^2)), where β denotes the second largest singular value of the mixing matrix. We remark that the condition on the algorithmic parameter that guarantees convergence does not rely on the spectrum of the mixing matrix, making it easy to satisfy in practice. Finally, numerical results are presented to demonstrate the efficiency of the proposed methods.
AB - In this article, we study decentralized convex constrained optimization problems in networks. We focus on the dual-averaging-based algorithmic framework, which is well documented to be superior in handling constraints and complex communication environments simultaneously. Two new decentralized dual averaging (DDA) algorithms are proposed. In the first one, a second-order dynamic average consensus protocol is tailored for DDA-type algorithms, which equips each agent with a provably more accurate estimate of the global dual variable than conventional schemes. We rigorously prove that the proposed algorithm attains O(1/t) convergence for general convex and smooth problems, for which existing DDA methods were only known to converge at O(1/√t) prior to our work. In the second one, we use an extrapolation technique to accelerate the convergence of DDA. Compared to existing accelerated algorithms, where typically two different variables are exchanged among agents at each iteration, the proposed algorithm only seeks consensus on local gradients. The extrapolation is then performed based on two sequences of primal variables, which are determined by the accumulated gradients at two consecutive time instants, respectively. The algorithm is proved to converge at a rate of O(1/t^2 + 1/(t(1-β)^2)), where β denotes the second largest singular value of the mixing matrix. We remark that the condition on the algorithmic parameter that guarantees convergence does not rely on the spectrum of the mixing matrix, making it easy to satisfy in practice. Finally, numerical results are presented to demonstrate the efficiency of the proposed methods.
KW - Acceleration
KW - constrained optimization
KW - decentralized optimization
KW - dual averaging
KW - multiagent system
UR - http://www.scopus.com/inward/record.url?scp=85132529951&partnerID=8YFLogxK
U2 - 10.1109/TAC.2022.3173062
DO - 10.1109/TAC.2022.3173062
M3 - Article
AN - SCOPUS:85132529951
SN - 0018-9286
VL - 68
SP - 2125
EP - 2139
JO - IEEE Transactions on Automatic Control
JF - IEEE Transactions on Automatic Control
IS - 4
ER -