TY - JOUR
T1 - FENet
T2 - A Feature Explanation Network with a Hierarchical Interpretable Architecture for Intelligent Decision-Making
AU - Wang, Chenfeng
AU - Gao, Xiaoguang
AU - Li, Xinyu
AU - Li, Bo
AU - Wan, Kaifang
N1 - Publisher Copyright:
IEEE
PY - 2023
Y1 - 2023
N2 - As an increasing number of intelligent vehicle decision-making problems are addressed with deep learning (DL) methods, the interpretability of intelligent decision-making strongly determines the degree of human trust in the technology and its implementation; it is therefore urgently necessary to improve the interpretability of DL. To enhance the interpretability of DL-based intelligent decision-making processes, we propose a feature explanation network (FENet), which consists of a correlation clustering module, a feature extraction module, and a decision module. First, to determine the source of each feature before the feature extraction module, input variables are clustered according to correlation. We design a dynamic strategy and a Dset strategy; the former comprises four specific implementations corresponding to different parameter selections and noise-processing methods. Then, in the feature extraction module, any DL model can be employed to extract features from each correlation cluster separately. Finally, a Bayesian network, a probabilistic graphical model, is constructed from the features, and the intelligent decision is realized by Bayesian inference. In the experiments, we first design five indicators to quantify interpretability and verify the effectiveness of each module in FENet through an ablation study. Compared with other DL models, FENet demonstrates significant improvements in interpretability while maintaining comparable decision accuracy. Moreover, when applied to the practical problem of UAV intrusion detection, FENet provides an explainable process and an interpretation of the final results. These findings indicate that the proposed network architecture can be effectively combined with various DL-based algorithms for decision-making problems to enhance interpretability.
KW - Bayesian network
KW - Computational modeling
KW - Correlation
KW - Correlation clustering
KW - Data models
KW - Decision making
KW - Deep learning
KW - Feature extraction
KW - Intelligent decision-making
KW - Interpretability
KW - Neural networks
UR - http://www.scopus.com/inward/record.url?scp=85174820385&partnerID=8YFLogxK
U2 - 10.1109/TIV.2023.3325553
DO - 10.1109/TIV.2023.3325553
M3 - Article
AN - SCOPUS:85174820385
SN - 2379-8858
SP - 1
EP - 19
JO - IEEE Transactions on Intelligent Vehicles
JF - IEEE Transactions on Intelligent Vehicles
ER -