A cognitive brain model for multimodal sentiment analysis based on attention neural networks

Yuanqing Li, Ke Zhang, Jingyu Wang, Xinbo Gao

Research output: Contribution to journal › Article › peer-review

45 Citations (Scopus)

Abstract

Multimodal sentiment analysis is one of the most attractive interdisciplinary research topics in artificial intelligence (AI). Unlike other classification problems, multimodal sentiment analysis of humans is a much finer-grained classification problem. However, most current works accept all modalities together as input and produce final results in a single pass after fusion and decision processes. Few models divide their architectures into multiple fusion modules with different fusion strategies to better adapt to different tasks. Additionally, most recent multimodal sentiment analysis methods focus heavily on binary classification, while the accuracy of multi-classification remains difficult to improve. Inspired by the emotional processing procedure in cognitive science, our method improves both binary and multi-classification performance by dividing the complicated problem into smaller sub-problems that are easier to handle. In this paper, we propose a Hierarchical Attention-BiLSTM (Bidirectional Long Short-Term Memory) model based on the Cognitive Brain limbic system (HALCB). HALCB splits multimodal sentiment analysis into two modules responsible for two tasks: binary classification and multi-classification. The former module divides the input items into two categories by recognizing their polarity and then routes them to the latter module separately; a hashing algorithm is used in this module to improve retrieval accuracy and speed. Correspondingly, the latter module contains a positive sub-net dedicated to positive inputs and a negative sub-net dedicated to negative inputs. The binary-classification module and each of the two sub-nets in the multi-classification module possess a distinct fusion strategy and decision layer matched to its respective function. Finally, we add a random forest at the last stage to collect the outputs from all modules and fuse them at the decision level.
Experiments are conducted on three datasets, comparing results with baselines on both binary and multi-classification tasks. Our experimental results surpass state-of-the-art multimodal sentiment analysis methods on both tasks by a large margin.
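The hierarchical routing described in the abstract — a binary polarity stage that dispatches each input to a polarity-specific fine-grained sub-net, followed by decision-level fusion — can be sketched as below. This is a minimal illustrative skeleton, not the paper's actual networks: the polarity classifier, sub-nets, and fusion function here are hypothetical stand-ins (the paper uses Attention-BiLSTM modules with a hashing step and a random forest for fusion).

```python
import numpy as np

class HierarchicalSentimentPipeline:
    """Sketch of a HALCB-style hierarchical pipeline (illustrative only).

    Stage 1: a binary polarity classifier splits inputs into positive/negative.
    Stage 2: a dedicated sub-net per polarity performs fine-grained
    multi-classification; a final fusion step combines the stage outputs.
    """

    def __init__(self, polarity_clf, pos_subnet, neg_subnet, fusion):
        self.polarity_clf = polarity_clf  # binary: returns 1 (positive) / 0 (negative)
        self.pos_subnet = pos_subnet      # fine-grained model for positive inputs
        self.neg_subnet = neg_subnet      # fine-grained model for negative inputs
        self.fusion = fusion              # decision-level fusion of both stages

    def predict(self, X):
        labels = np.empty(len(X), dtype=int)
        for i, x in enumerate(X):
            p = self.polarity_clf(x)      # stage 1: route by polarity
            subnet = self.pos_subnet if p == 1 else self.neg_subnet
            labels[i] = self.fusion(p, subnet(x))  # stage 2 + fusion
        return labels

# Toy stand-ins: polarity from the sign of the feature mean; sub-nets
# bucket intensity into 4 hypothetical classes (0..3).
polarity = lambda x: int(x.mean() >= 0)
pos_net = lambda x: 3 if x.mean() > 0.5 else 2    # strongly / weakly positive
neg_net = lambda x: 0 if x.mean() < -0.5 else 1   # strongly / weakly negative
fusion = lambda p, fine: fine                     # trivial fusion for the sketch

model = HierarchicalSentimentPipeline(polarity, pos_net, neg_net, fusion)
X = np.array([[0.9, 0.8], [0.1, 0.2], [-0.9, -0.7], [-0.1, -0.2]])
print(model.predict(X))  # → [3 2 0 1]
```

The design point the sketch captures is that each branch can carry its own fusion strategy and decision layer, so the hard multi-classification problem is decomposed into two easier polarity-conditioned problems.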

Original language: English
Pages (from-to): 159-173
Number of pages: 15
Journal: Neurocomputing
Volume: 430
DOI
Publication status: Published - 21 Mar 2021
