Explaining Neural Networks: Hierarchical Backpropagated Ensemble Learning

Research output: Contribution to journal › Article › peer-review

Abstract

Deep models, characterized by complex structures and end-to-end optimization, have proved effective in providing decision support based on real-world data. However, the lack of transparency in their decision-making process, and the difficulty of interpreting the role of individual neurons, limit their practical applicability in many critical and sensitive domains. Inspired by the parallels between neural networks and ensemble models, whose performance is achieved through the collaboration of multiple weak learners, this article presents a novel perspective that reframes neural networks as hierarchical ensembles. We propose the hierarchical backpropagated ensemble (HBE) model, in which each neuron functions both as a base learner and as part of an ensemble of preceding neurons. This framework applies ensemble learning techniques to neural networks, allowing each neuron to focus on a specific subtask while progressively constructing a network that meets the global objective. Experimental results on real-world data show that this hierarchical structure enhances the effectiveness of traditional ensemble models, and that the ensemble-based explanations offer improved initialization and dynamically adjustable network structures, leading to more efficient training.
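The ensemble reading of a network can be illustrated with a minimal sketch. This is not the paper's HBE implementation; it only shows the analogy the abstract draws: each hidden neuron acts as a weak base learner, the output neuron acts as a stacker that combines them, and backpropagation jointly trains the whole hierarchy against one global objective. All variable names, the toy task, and the hyperparameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: XOR-like labels that no single linear weak learner can fit alone,
# so the "ensemble" of hidden neurons must divide the work.
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = ((X[:, 0] * X[:, 1]) > 0).astype(float).reshape(-1, 1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce(p, t, eps=1e-9):
    # Binary cross-entropy: the single global objective shared by all learners.
    return float(-np.mean(t * np.log(p + eps) + (1 - t) * np.log(1 - p + eps)))

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)   # 8 base learners (hidden neurons)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)   # stacker (output neuron)

init_loss = bce(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), y)

lr = 0.5
for _ in range(3000):
    H = sigmoid(X @ W1 + b1)          # base-learner predictions
    p = sigmoid(H @ W2 + b2)          # stacked ensemble output
    g = (p - y) / len(X)              # grad of BCE w.r.t. output pre-activation
    gH = (g @ W2.T) * H * (1 - H)     # credit backpropagated to each base learner
    W2 -= lr * (H.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gH); b1 -= lr * gH.sum(0)

final_loss = bce(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), y)
accuracy = float(((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) > 0.5) == y).mean())
```

Under this view, the backward pass plays the role that reweighting or residual fitting plays in boosting: the global error signal tells each base learner which subtask to specialize on.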

Keywords

  • backpropagation
  • boosting
  • ensemble learning
  • machine learning
  • neural network
  • stacking
