Research on RBM Networks Training Based on Improved Parallel Tempering Algorithm

Fei Li, Xiao Guang Gao, Kai Fang Wan

Research output: Contribution to journal › Article › peer-review

4 Scopus citations

Abstract

Currently, most algorithms for training restricted Boltzmann machines (RBMs) are based on multi-step Gibbs sampling. When such a sampling algorithm is used to estimate the gradient, the sampled gradient is only an approximation of the true gradient, and the large error between them seriously degrades network training. This article focuses on these problems. First, the numerical error and the direction error between the sampled gradient and the true gradient, together with their influence on network training performance, are analyzed theoretically from the perspective of Markov sampling. A gradient-fixing model is then established to adjust both the magnitude and the direction of the sampled gradient. On this basis, an improved parallel-tempering learning algorithm, GFPT (gradient-fixing parallel tempering), is proposed. Finally, comparative experiments between GFPT and existing algorithms demonstrate that GFPT greatly reduces the error between the sampled gradient and the true gradient and improves the training precision of RBM networks.
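The abstract describes the approach only at a high level. As a point of reference, the sketch below shows one common way to organize parallel-tempering Gibbs sampling for a binary RBM, the sampling scheme GFPT builds on. It is a minimal illustration under assumed notation (W, a, b for weights and biases, betas for inverse temperatures); all function names are hypothetical, and the paper's gradient-fixing step, which is the contribution specific to GFPT, is not reproduced here because the abstract does not specify it.

```python
# Minimal sketch of parallel-tempering (PT) Gibbs sampling for a binary RBM.
# Illustrative only; not the authors' GFPT algorithm.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_h(v, W, b, beta):
    # P(h = 1 | v) at inverse temperature beta
    p = sigmoid(beta * (v @ W + b))
    return (rng.random(p.shape) < p).astype(float)

def sample_v(h, W, a, beta):
    # P(v = 1 | h) at inverse temperature beta
    p = sigmoid(beta * (h @ W.T + a))
    return (rng.random(p.shape) < p).astype(float)

def energy(v, h, W, a, b):
    # Standard binary RBM energy E(v, h) = -a^T v - b^T h - v^T W h
    return -(v @ a + h @ b + v @ W @ h)

def pt_step(v_chains, h_chains, W, a, b, betas, gibbs_steps=1):
    """One PT update: Gibbs transitions in every tempered chain, then
    Metropolis swap attempts between neighbouring temperatures. The
    beta = 1 chain supplies the negative-phase sample for the gradient."""
    for k, beta in enumerate(betas):
        v = v_chains[k]
        for _ in range(gibbs_steps):
            h = sample_h(v, W, b, beta)
            v = sample_v(h, W, a, beta)
        v_chains[k], h_chains[k] = v, h
    for k in range(len(betas) - 1):
        e_k = energy(v_chains[k], h_chains[k], W, a, b)
        e_next = energy(v_chains[k + 1], h_chains[k + 1], W, a, b)
        # Swap accepted with probability min(1, exp((beta_k - beta_{k+1}) * (E_k - E_{k+1})))
        if np.log(rng.random()) < (betas[k] - betas[k + 1]) * (e_k - e_next):
            v_chains[k], v_chains[k + 1] = v_chains[k + 1], v_chains[k]
            h_chains[k], h_chains[k + 1] = h_chains[k + 1], h_chains[k]
    return v_chains, h_chains

# Example: 4 tempered chains for an RBM with 6 visible and 3 hidden units.
n_v, n_h = 6, 3
W = 0.01 * rng.standard_normal((n_v, n_h))
a, b = np.zeros(n_v), np.zeros(n_h)
betas = np.linspace(1.0, 0.5, 4)   # beta = 1 is the target (model) chain
v_chains = [rng.integers(0, 2, n_v).astype(float) for _ in betas]
h_chains = [np.zeros(n_h) for _ in betas]
v_chains, h_chains = pt_step(v_chains, h_chains, W, a, b, betas)
# The beta = 1 sample (index 0) would enter the negative phase of the
# log-likelihood gradient, dL/dW ~ <v h^T>_data - <v h^T>_model.
```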

Original language: English
Pages (from-to): 753-764
Number of pages: 12
Journal: Zidonghua Xuebao/Acta Automatica Sinica
Volume: 43
Issue number: 5
State: Published - May 2017

Keywords

  • Deep learning
  • GFPT (Gradient fixing parallel tempering)
  • Markov theory
  • Parallel tempering
  • Restricted Boltzmann machine (RBM)
  • Sampling algorithm
