TY - JOUR
T1 - An explainable ensemble feedforward method with Gaussian convolutional filter
AU - Li, Jingchen
AU - Shi, Haobin
AU - Hwang, Kao Shing
N1 - Publisher Copyright:
© 2021 Elsevier B.V.
PY - 2021/8/5
Y1 - 2021/8/5
N2 - The emerging deep learning technologies are leading to a new wave of artificial intelligence, but in some critical applications such as medical image processing, deep learning is inapplicable due to its lack of interpretability, which is essential for such applications. This work develops an explainable feedforward model with Gaussian kernels, in which a Gaussian mixture model is leveraged to extract representative features. To keep the error within an allowable range, we calculate the lower bound on the number of samples via the Chebyshev inequality. In the training process, we discuss both deterministic and stochastic feature representations, and investigate their performance as well as that of the ensemble model. Additionally, we use Shapley additive explanations to analyze the experimental results. The proposed method is interpretable, so it can replace deep neural networks by working with shallow machine learning techniques such as the Support Vector Machine and Random Forest. We compare our method with baseline methods on the Brain Tumor and Mitosis datasets. The experimental results show that our method outperforms RAM (Recurrent Attention Model), VGG19 (Visual Geometry Group 19), LeNet-5, and the Explainable Prediction Framework while retaining strong interpretability.
AB - The emerging deep learning technologies are leading to a new wave of artificial intelligence, but in some critical applications such as medical image processing, deep learning is inapplicable due to its lack of interpretability, which is essential for such applications. This work develops an explainable feedforward model with Gaussian kernels, in which a Gaussian mixture model is leveraged to extract representative features. To keep the error within an allowable range, we calculate the lower bound on the number of samples via the Chebyshev inequality. In the training process, we discuss both deterministic and stochastic feature representations, and investigate their performance as well as that of the ensemble model. Additionally, we use Shapley additive explanations to analyze the experimental results. The proposed method is interpretable, so it can replace deep neural networks by working with shallow machine learning techniques such as the Support Vector Machine and Random Forest. We compare our method with baseline methods on the Brain Tumor and Mitosis datasets. The experimental results show that our method outperforms RAM (Recurrent Attention Model), VGG19 (Visual Geometry Group 19), LeNet-5, and the Explainable Prediction Framework while retaining strong interpretability.
KW - Explainable artificial intelligence
KW - Medical image processing
KW - Shapley additive explanation
UR - http://www.scopus.com/inward/record.url?scp=85105576859&partnerID=8YFLogxK
U2 - 10.1016/j.knosys.2021.107103
DO - 10.1016/j.knosys.2021.107103
M3 - Article
AN - SCOPUS:85105576859
SN - 0950-7051
VL - 225
JO - Knowledge-Based Systems
JF - Knowledge-Based Systems
M1 - 107103
ER -