Abstract
In a distributed network, it is important yet challenging to estimate model parameters from noisy streaming data. To cope with this problem, diffusion strategies have been intensively studied over the past decade; they attempt to estimate the model parameters cooperatively by allowing a local exchange of data among neighboring nodes in a self-organized manner. Within the literature on diffusion strategies, the earliest such efforts can be traced back to the so-called single-task problem, in which all the nodes in a distributed network estimate the same parameter. In real applications, however, different nodes may have to estimate different but related parameters. By taking these relations into consideration, the more general and flexible multitask problem has recently been proposed. The keys to solving multitask problems include properly modelling and formulating the real problem, as well as designing effective diffusion strategies that achieve a useful solution. In recent years, a great number of multitask diffusion strategies have been proposed. In this paper, we discuss the basic principles underlying these multitask diffusion strategies, covering both unsupervised learning and supervised learning, which are the two dominant routes for designing multitask diffusion strategies. Specifically, on the one hand, for multitask diffusion strategies based on unsupervised learning, we introduce some details on the selection of both static combination matrices and adaptive combination matrices, along with a novel scheme for combining the two kinds of combination matrices. The differences among them are further highlighted to make them clear to the interested reader. On the other hand, for multitask diffusion strategies based on supervised learning, we discuss the so-called projected gradient descent method and several regularization-based algorithms, and introduce several typical references on these algorithms. In addition, to make the above algorithms clear, several examples are given at the end of the description of each algorithm. This paper mainly focuses on the principles and structures of each algorithm; due to space limits, we do not present much detail on the derivation of every individual algorithm. The interested reader is therefore encouraged to follow the respective references listed in the paper.
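For readers unfamiliar with diffusion strategies, the following is a minimal sketch of the standard adapt-then-combine (ATC) diffusion LMS recursion that underlies many of the strategies reviewed in the paper. The notation is assumed rather than taken from this paper: $d_k(i)$ denotes the desired signal at node $k$ and time $i$, $u_{k,i}$ the regression row vector, $\mu_k$ the step size, $\mathcal{N}_k$ the neighborhood of node $k$, and $a_{\ell k}$ the combination weights.

```latex
\begin{aligned}
\psi_{k,i} &= w_{k,i-1} + \mu_k\, u_{k,i}^{\mathsf{T}}\bigl(d_k(i) - u_{k,i} w_{k,i-1}\bigr)
  && \text{(adaptation at node } k\text{)}\\
w_{k,i} &= \sum_{\ell \in \mathcal{N}_k} a_{\ell k}\, \psi_{\ell,i}
  && \text{(combination over } \mathcal{N}_k\text{)}
\end{aligned}
\qquad a_{\ell k} \ge 0,\quad \sum_{\ell \in \mathcal{N}_k} a_{\ell k} = 1
```

The weights $a_{\ell k}$ form a left-stochastic combination matrix $A$. In the multitask setting described in the abstract, the entries of $A$ may be fixed in advance (static combination matrices) or learned online from the data (adaptive combination matrices), which is the distinction made by the unsupervised designs surveyed in the paper; the supervised designs instead modify the adaptation step, e.g., via projections or regularization across neighboring tasks.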
| Translated title of the contribution | A Review of Multitask Diffusion Strategies in Distributed Networks |
| --- | --- |
| Original language | Traditional Chinese |
| Pages (from-to) | 1901-1918 |
| Number of pages | 18 |
| Journal | Journal of Signal Processing |
| Volume | 39 |
| Issue | 11 |
| DOI | |
| Publication status | Published - Nov 2023 |
Keywords
- adaptive filtering
- diffusion strategy
- multitask
- parameter estimation
- supervised learning
- unsupervised learning