An explainable ensemble feedforward method with Gaussian convolutional filter

Jingchen Li, Haobin Shi, Kao Shing Hwang

Research output: Contribution to journal › Article › peer-review

29 Scopus citations

Abstract

The emerging deep learning technologies are leading a new wave of artificial intelligence, but in critical applications such as medical image processing, deep learning is often inapplicable because it lacks the interpretability such applications require. This work develops an explainable feedforward model with Gaussian kernels, in which a Gaussian mixture model is leveraged to extract representative features. To keep the error within an allowable range, we calculate a lower bound on the number of samples through the Chebyshev inequality. In the training process, we discuss both deterministic and stochastic feature representations, and investigate their performance as well as that of the ensemble model. Additionally, we use Shapley additive explanations to analyze the experimental results. Because the proposed method is interpretable, it can replace deep neural networks by working with shallow machine learning techniques such as the Support Vector Machine and Random Forest. We compare our method with baseline methods on the Brain Tumor and Mitosis datasets. The experimental results show that our method outperforms RAM (Recurrent Attention Model), VGG19 (Visual Geometry Group 19), LeNet-5, and the Explainable Prediction Framework while retaining strong interpretability.
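
The abstract does not reproduce the sample-size bound itself; the following is a standard derivation of the kind of Chebyshev-based lower bound it describes, where the error tolerance ε, the confidence level δ, and the variance σ² are illustrative symbols rather than the paper's notation.

```latex
% Chebyshev's inequality applied to the sample mean of n i.i.d. samples:
P\!\left(\lvert \bar{X}_n - \mu \rvert \ge \varepsilon\right)
  \le \frac{\sigma^2}{n\,\varepsilon^2}
% Requiring this failure probability to be at most \delta yields the bound
\frac{\sigma^2}{n\,\varepsilon^2} \le \delta
\quad\Longrightarrow\quad
n \ge \frac{\sigma^2}{\delta\,\varepsilon^2}
```

For instance, with σ² = 1, ε = 0.1, and δ = 0.05, the bound requires at least 1/(0.05 × 0.01) = 2000 samples.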
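
As a rough illustration of the pipeline the abstract outlines (Gaussian convolutional filtering, Gaussian-mixture feature extraction, then a shallow classifier), here is a minimal sketch. The function name, the kernel width `sigma`, the number of mixture components, and the use of per-image intensity mixtures are all assumptions made for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC

def gaussian_conv_features(images, sigma=1.0, n_components=4):
    """Illustrative feature extractor: smooth each image with a Gaussian
    convolutional filter, then summarize its pixel-intensity distribution
    with a Gaussian mixture model, using the sorted component means and
    variances as features. `sigma` and `n_components` are assumed values,
    not taken from the paper."""
    features = []
    for img in images:
        smoothed = gaussian_filter(img.astype(float), sigma=sigma)
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(smoothed.reshape(-1, 1))
        # Sort components by mean so the feature vector is order-invariant.
        order = np.argsort(gmm.means_.ravel())
        feats = np.concatenate([gmm.means_.ravel()[order],
                                gmm.covariances_.ravel()[order]])
        features.append(feats)
    return np.vstack(features)

# Usage with random stand-in data (real inputs would be medical images).
rng = np.random.default_rng(0)
X_imgs = rng.random((20, 32, 32))
y = rng.integers(0, 2, size=20)
clf = SVC(kernel="rbf").fit(gaussian_conv_features(X_imgs), y)
```

Features of this form are individually nameable Gaussian statistics, which is what makes shallow classifiers and Shapley-value analysis directly applicable.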

Original language: English
Article number: 107103
Journal: Knowledge-Based Systems
Volume: 225
DOIs
State: Published - 5 Aug 2021

Keywords

  • Explainable artificial intelligence
  • Medical image processing
  • Shapley additive explanation
