FENet: A Feature Explanation Network with a Hierarchical Interpretable Architecture for Intelligent Decision-Making

Chenfeng Wang, Xiaoguang Gao, Xinyu Li, Bo Li, Kaifang Wan

Research output: Contribution to journal › Article › peer-review

1 Citation (Scopus)

Abstract

As an increasing number of intelligent decision-making problems for vehicles are addressed with deep learning (DL) methods, the interpretability of intelligent decision-making strongly determines the degree of human trust in both the technology and its implementation process; it is therefore urgently necessary to improve the interpretability of DL. To enhance the interpretability of DL-based intelligent decision-making, we propose a feature explanation network (FENet), which consists of a correlation clustering module, a feature extraction module, and a decision module. First, to determine the source of each feature before the feature extraction module, the input variables are clustered according to their correlations. We design a dynamic strategy and a Dset strategy; the former comprises four specific implementations corresponding to different parameter selections and noise-processing methods. Then, in the feature extraction module, any DL model can be employed to extract features from each correlation cluster separately. Finally, a Bayesian network, a probabilistic graphical model, is constructed from the extracted features, and the intelligent decision is realized through Bayesian inference. In the experiments, we first design five indicators to quantify interpretability and verify the effectiveness of each module in FENet through an ablation study. Compared with other DL models, FENet demonstrates significant improvements in interpretability while maintaining comparable decision accuracy. Moreover, when applied to the practical problem of UAV intrusion detection, FENet provides an explainable decision process and an interpretation of its final results. These findings indicate that the proposed network architecture can be effectively combined with various DL-based algorithms for decision-making problems to enhance interpretability.
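
The abstract outlines a three-stage pipeline: cluster the input variables by correlation, extract features from each cluster separately, and then make a probabilistic decision from those features. The following is a minimal, hypothetical sketch of that flow, assuming hierarchical clustering on a correlation-derived distance, PCA as a stand-in for the paper's DL feature extractors, and a Gaussian naive Bayes classifier in place of the Bayesian-network decision module; the function names `correlation_clusters` and `extract_cluster_features` are illustrative and not taken from the paper.

```python
# Hypothetical sketch of a FENet-style pipeline, NOT the authors' implementation:
# (1) cluster input variables by correlation, (2) extract features per cluster,
# (3) feed the per-cluster features into a probabilistic decision module.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB


def correlation_clusters(X, threshold=0.5):
    """Group input variables whose absolute pairwise correlation exceeds `threshold`."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    dist = 1.0 - corr                       # convert correlation to a distance
    np.fill_diagonal(dist, 0.0)
    Z = linkage(squareform(dist, checks=False), method="average")
    labels = fcluster(Z, t=1.0 - threshold, criterion="distance")
    return [np.flatnonzero(labels == c) for c in np.unique(labels)]


def extract_cluster_features(X, clusters, n_components=1):
    """Stand-in feature extractor: one PCA component per correlation cluster.
    In FENet, any DL model could be used here for each cluster instead."""
    feats = []
    for idx in clusters:
        comp = min(n_components, len(idx))
        feats.append(PCA(n_components=comp).fit_transform(X[:, idx]))
    return np.hstack(feats)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))
    X[:, 4:] += X[:, :4]                    # induce correlated variable groups
    y = (X[:, 0] + X[:, 5] > 0).astype(int)

    clusters = correlation_clusters(X)
    F = extract_cluster_features(X, clusters)
    decision = GaussianNB().fit(F, y)       # simple probabilistic decision module
    print("clusters:", [c.tolist() for c in clusters])
    print("train accuracy:", decision.score(F, y))
```

Each feature in `F` is traceable to one correlation cluster of input variables, which is the property the paper's hierarchical architecture uses to make the decision process explainable.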

Original language: English
Pages (from-to): 1-19
Number of pages: 19
Journal: IEEE Transactions on Intelligent Vehicles
DOI
Publication status: Accepted/In press - 2023
