TY - JOUR
T1 - An expanded sparse Bayesian learning method for polynomial chaos expansion
AU - Zhou, Yicheng
AU - Lu, Zhenzhou
AU - Cheng, Kai
AU - Shi, Yan
N1 - Publisher Copyright:
© 2019 Elsevier Ltd
PY - 2019/8/1
Y1 - 2019/8/1
N2 - Polynomial chaos expansion (PCE) has proven to be a powerful tool for building surrogate models for uncertainty quantification in various engineering fields. The computational cost of the full PCE is unaffordable because the number of expansion coefficients suffers from the “curse of dimensionality”. In this paper, an expanded sparse Bayesian learning method for sparse PCE is proposed. Firstly, the basis polynomials of the full PCE are partitioned into significant terms and complementary non-significant terms. Parameterized priors with distinct variances are assigned to the candidate significant terms, so that the dimensionality of the parameter space equals the assumed sparsity level of the PCE. Secondly, an approximate Kashyap information criterion (KIC) rule, which balances model simplicity against goodness of fit, is derived for model selection. Finally, an automatic search algorithm is proposed that selects the significant terms by minimizing the KIC objective function and using the variance contribution of each term to the model output. To assess the performance of the proposed method, a detailed comparison with several well-established techniques is carried out. The results show that the proposed method identifies the most significant PC contributions with superior efficiency and accuracy.
KW - Kashyap information criterion
KW - Polynomial chaos expansion
KW - Sparse Bayesian learning
UR - http://www.scopus.com/inward/record.url?scp=85063761597&partnerID=8YFLogxK
U2 - 10.1016/j.ymssp.2019.03.032
DO - 10.1016/j.ymssp.2019.03.032
M3 - Article
AN - SCOPUS:85063761597
SN - 0888-3270
VL - 128
SP - 153
EP - 171
JO - Mechanical Systems and Signal Processing
JF - Mechanical Systems and Signal Processing
ER -