TY - JOUR
T1 - A unified combination scheme for online learning and distributed optimization over networks
AU - Jin, Danqi
AU - Chen, Yitong
AU - Chen, Jie
AU - Huang, Gongping
N1 - Publisher Copyright:
© 2025 Elsevier Inc.
PY - 2025/4
Y1 - 2025/4
AB - Both convex and affine combinations are highly effective for distributed adaptive networks, enabling these networks to create new diffusion strategies that combine the strengths of candidate diffusion strategies. However, existing schemes are typically designed for mean-square error costs and linear models, and all nodes in a network are constrained to use the same scheme. To overcome these limitations, we propose a novel unified combination scheme that accommodates possibly nonlinear models and general convex cost functions. This scheme also unifies convex and affine combination schemes, allowing nodes within the same network to make different choices. Our unified scheme is flexible enough to accommodate an arbitrary number of candidate algorithms, and allows the criteria for deriving each candidate algorithm, as well as for the combination layer, to be set independently and flexibly. To further enhance performance, we introduce a weight-transfer trick among the candidate strategies. Finally, simulation results validate the effectiveness of the proposed scheme and provide guidance on the selection of its step-size parameter.
KW - Diffusion strategy
KW - Distributed optimization
KW - General cost function
KW - Online learning
KW - Unified combination scheme
UR - http://www.scopus.com/inward/record.url?scp=85214286313&partnerID=8YFLogxK
U2 - 10.1016/j.dsp.2024.104970
DO - 10.1016/j.dsp.2024.104970
M3 - Article
AN - SCOPUS:85214286313
SN - 1051-2004
VL - 159
JO - Digital Signal Processing: A Review Journal
JF - Digital Signal Processing: A Review Journal
M1 - 104970
ER -