Fusion of heterogeneous attention mechanisms in multi-view convolutional neural network for text classification

Yunji Liang, Huihui Li, Bin Guo, Zhiwen Yu, Xiaolong Zheng, Sagar Samtani, Daniel D. Zeng

Research output: Contribution to journal › Article › peer-review

66 Citations (Scopus)

Abstract

The rapid proliferation of user-generated content has given rise to large volumes of text corpora. Increasingly, scholars, researchers, and organizations employ text classification to mine novel insights for high-impact applications. Despite their prevalence, conventional text classification methods rely on labor-intensive, task-specific feature engineering, omit long-term relationships, and are not suitable for rapidly evolving domains. While a growing body of deep learning and attention mechanism literature aims to address these issues, extant methods often represent text as a single view and omit multiple sets of features at varying levels of granularity. Recognizing that these issues often result in performance degradation, we propose a novel Spatial View Attention Convolutional Neural Network (SVA-CNN). SVA-CNN leverages a carefully designed combination of multi-view representation learning, heterogeneous attention mechanisms, and CNN-based operations to automatically extract and weight fine-grained representations at multiple granularities. Rigorously evaluating SVA-CNN against prevailing text classification methods on five large-scale benchmark datasets indicates its ability to outperform extant deep learning-based classification methods in both performance and training time for document classification, sentiment analysis, and thematic identification applications. To facilitate model reproducibility and extensions, SVA-CNN's source code is also available via GitHub.
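The abstract describes a pipeline of per-view attention weighting followed by CNN-based feature extraction and fusion across views. The following is a minimal NumPy sketch of that general idea only, not the authors' SVA-CNN implementation: the two "views" (word-level and character-level embeddings), all dimensions, and the random weights are hypothetical, and training is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def view_attention(view, query):
    # view: (seq_len, dim); query: (dim,) learned attention vector (random here)
    scores = softmax(view @ query)        # one weight per token
    return view * scores[:, None]         # re-weighted token embeddings

def conv1d_max_pool(x, kernels):
    # x: (seq_len, dim); kernels: (n_filters, width, dim)
    n = x.shape[0]
    f, k, _ = kernels.shape
    feats = np.array([
        [(x[i:i + k] * kernels[j]).sum() for i in range(n - k + 1)]
        for j in range(f)
    ])                                    # (n_filters, n_windows)
    return feats.max(axis=1)              # max-over-time pooling

# Two hypothetical views of the same 10-token text, e.g. word-level
# and character-level embeddings (dimensions chosen arbitrarily).
word_view = rng.normal(size=(10, 8))
char_view = rng.normal(size=(10, 8))

features = []
for view in (word_view, char_view):
    attended = view_attention(view, rng.normal(size=8))
    features.append(conv1d_max_pool(attended, rng.normal(size=(4, 3, 8))))

# Fuse per-view features into one vector for a downstream classifier.
fused = np.concatenate(features)
print(fused.shape)  # (8,)
```

Each view contributes 4 pooled filter responses, so fusing two views yields an 8-dimensional feature vector; a real system would feed this into a trained classification layer.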

Original language: English
Pages (from-to): 295-312
Number of pages: 18
Journal: Information Sciences
Volume: 548
DOI
Publication status: Published - 16 Feb 2021
