An Inverse-Free and Scalable Sparse Bayesian Extreme Learning Machine for Classification Problems

Jiahua Luo, Chi Man Vong, Zhenbao Liu, Chuangquan Chen

Research output: Contribution to journal › Article › peer-review

2 Citations (Scopus)

Abstract

The Sparse Bayesian Extreme Learning Machine (SBELM) constructs an extremely sparse probabilistic model with low computational cost and high generalization. However, its hyperparameter (ARD prior) update rule requires the diagonal elements of the inverted covariance matrix over the full training dataset, which raises two issues. First, inverting the Hessian matrix may suffer from ill-conditioning in some cases, which prevents SBELM from converging. Second, inverting the large covariance matrix to update the ARD priors requires O(L^3) computational memory (L: number of hidden nodes), which may cause memory overflow. To address these issues, an inverse-free SBELM called QN-SBELM is proposed in this paper, which integrates the gradient-based Quasi-Newton (QN) method into SBELM to approximate the inverse covariance matrix. It has O(L^2) computational complexity and is simultaneously scalable to large problems. QN-SBELM was evaluated on benchmark datasets of different sizes. Experimental results verify that QN-SBELM achieves more accurate results than SBELM with a sparser model, provides more stable solutions, and extends well to large-scale problems.
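The abstract does not specify which Quasi-Newton scheme QN-SBELM uses, but the core idea of replacing an explicit O(L^3) matrix inversion with O(L^2) rank-two updates can be illustrated with a generic BFGS inverse-Hessian update. The sketch below is an assumption-laden toy example, not the paper's algorithm: it approximates the inverse of a symmetric positive-definite matrix `A` (standing in for the covariance/Hessian) by running BFGS with exact line searches on the quadratic f(w) = ½ wᵀAw − bᵀw, whose Hessian is `A`. All names (`bfgs_inverse_update`, `A`, `b`) are illustrative.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Rank-two BFGS update of the inverse-Hessian approximation H.

    Uses only matrix-vector and outer products, so each update costs
    O(L^2); the explicit O(L^3) matrix inversion is never performed.
    """
    sy = s @ y
    Hy = H @ y
    return (H
            + ((sy + y @ Hy) / sy**2) * np.outer(s, s)
            - (np.outer(Hy, s) + np.outer(s, Hy)) / sy)

# Toy quadratic f(w) = 0.5 w^T A w - b^T w with Hessian A (SPD).
rng = np.random.default_rng(0)
L = 5
M = rng.standard_normal((L, L))
A = M @ M.T + L * np.eye(L)          # well-conditioned SPD matrix
b = rng.standard_normal(L)

w = np.zeros(L)
H = np.eye(L)                         # initial inverse-Hessian guess
grad = A @ w - b
for _ in range(L):                    # on a quadratic, BFGS with exact
    if np.linalg.norm(grad) < 1e-10:  # line search finishes in <= L steps
        break
    d = -H @ grad                     # quasi-Newton search direction
    alpha = -(grad @ d) / (d @ A @ d) # exact line search for a quadratic
    s = alpha * d
    w = w + s
    grad_new = A @ w - b
    H = bfgs_inverse_update(H, s, grad_new - grad)
    grad = grad_new

# After L updates, H approximates A^{-1} without ever inverting A.
print(np.allclose(H @ A, np.eye(L), atol=1e-6))
```

In the SBELM setting, the same pattern would let the ARD-prior update read approximate diagonal entries from the iteratively built inverse, sidestepping both the O(L^3) memory cost and the ill-conditioning of a direct inversion.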

Original language: English
Article number: 9458951
Pages (from-to): 87543-87551
Number of pages: 9
Journal: IEEE Access
Volume: 9
DOI
Publication status: Published - 2021
