TY - JOUR
T1 - Fast-Convergent Fully Connected Deep Learning Model Using Constrained Nodes Input
AU - Ding, Chen
AU - Li, Ying
AU - Zhang, Lei
AU - Zhang, Jinyang
AU - Yang, Lu
AU - Wei, Wei
AU - Xia, Yong
AU - Zhang, Yanning
N1 - Publisher Copyright:
© 2018, Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2019/6/15
Y1 - 2019/6/15
N2 - Recently, deep learning models have exhibited promising performance in various applications. However, most of them converge slowly due to gradient vanishing. To address this problem, we propose a fast-convergent fully connected deep learning network in this study. By constraining the input values of nodes in the fully connected layers, the proposed method effectively mitigates the gradient vanishing problem in the training phase and thus greatly reduces the number of training iterations required to reach convergence, while the drop in generalization performance is negligible. Experimental results validate the effectiveness of the proposed method.
AB - Recently, deep learning models have exhibited promising performance in various applications. However, most of them converge slowly due to gradient vanishing. To address this problem, we propose a fast-convergent fully connected deep learning network in this study. By constraining the input values of nodes in the fully connected layers, the proposed method effectively mitigates the gradient vanishing problem in the training phase and thus greatly reduces the number of training iterations required to reach convergence, while the drop in generalization performance is negligible. Experimental results validate the effectiveness of the proposed method.
KW - Constrained input value of nodes
KW - Deep learning model
KW - Fast convergent method
UR - http://www.scopus.com/inward/record.url?scp=85048367833&partnerID=8YFLogxK
U2 - 10.1007/s11063-018-9872-y
DO - 10.1007/s11063-018-9872-y
M3 - Article
AN - SCOPUS:85048367833
SN - 1370-4621
VL - 49
SP - 995
EP - 1005
JO - Neural Processing Letters
JF - Neural Processing Letters
IS - 3
ER -