Proximal Multitask Learning over Distributed Networks with Jointly Sparse Structure

Danqi Jin, Jie Chen, Cedric Richard, Jingdong Chen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution › peer-review

6 Scopus citations

Abstract

Modeling relations between locally optimal parameter vectors in multitask networks has attracted much attention in recent years. This work considers a distributed optimization problem for parameter vectors with a jointly sparse structure across nodes, that is, the parameter vectors share the same support set. By introducing an ℓ∞,1-norm penalty at each node and using a proximal gradient method to minimize the regularized cost, we devise a proximal multitask diffusion LMS algorithm that promotes joint sparsity to enhance estimation performance. A stability analysis is provided, and simulation results are presented to illustrate the performance of the algorithm.
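To make the mechanism described in the abstract concrete, the following is a minimal, illustrative sketch (not the paper's exact formulation) of how an ℓ∞,1-norm proximal operator can be combined with an adapt-then-combine diffusion LMS iteration. The group structure (one group per coefficient index across a node's neighbourhood), the step size mu, the regularization weight lam, the combination matrix A, and all function names are assumptions introduced here for illustration only.

```python
import numpy as np

def project_l1_ball(v, radius):
    """Euclidean projection of v onto the l1-ball of the given radius."""
    if np.sum(np.abs(v)) <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                      # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > (css - radius))[0][-1]
    theta = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

def prox_linf(v, lam):
    """Proximal operator of lam * ||v||_inf (Moreau decomposition:
    subtract the projection of v onto the l1-ball of radius lam)."""
    return v - project_l1_ball(v, lam)

def prox_linf1(W, lam):
    """Proximal operator of the l_{inf,1} penalty lam * sum_m max_k |W[k, m]|.
    W stacks the estimates of a neighbourhood (rows: nodes, columns: taps);
    the prox is separable and applied column by column."""
    out = np.empty_like(W)
    for m in range(W.shape[1]):
        out[:, m] = prox_linf(W[:, m], lam)
    return out

def atc_prox_diffusion_lms_step(w, x, d, neighbors, A, mu, lam):
    """One illustrative adapt / proximal / combine iteration (hypothetical setup).
    w: (N, M) current estimates, x: (N, M) regressors, d: (N,) desired outputs,
    neighbors: list of index arrays (each including the node itself),
    A: (N, N) combination matrix with columns summing to one."""
    psi = np.empty_like(w)
    for k in range(w.shape[0]):
        e = d[k] - x[k] @ w[k]                        # instantaneous error
        psi[k] = w[k] + mu * e * x[k]                 # LMS adaptation step
    phi = np.empty_like(w)
    for k, nb in enumerate(neighbors):
        stacked = prox_linf1(psi[nb], mu * lam)       # joint-sparsity proximal step
        phi[k] = stacked[list(nb).index(k)]           # keep node k's own row
    return A.T @ phi                                  # neighbourhood combination
```

The ℓ∞ proximal operator is evaluated through the Moreau decomposition, i.e., by subtracting the projection onto an ℓ1-ball, which is the standard closed-form route; the ℓ∞,1 prox then decouples into one such problem per coefficient index.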

Original language: English
Title of host publication: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 5900-5904
Number of pages: 5
ISBN (Electronic): 9781509066315
DOIs
State: Published - May 2020
Event: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020 - Barcelona, Spain
Duration: 4 May 2020 – 8 May 2020

Publication series

Name: ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings
Volume: 2020-May
ISSN (Print): 1520-6149

Conference

Conference: 2020 IEEE International Conference on Acoustics, Speech, and Signal Processing, ICASSP 2020
Country/Territory: Spain
City: Barcelona
Period: 4/05/20 – 8/05/20

Keywords

  • diffusion strategy
  • Distributed optimization
  • joint sparsity
  • proximal operator
  • ℓ∞,1-norm regularization
