ModalNet: an aspect-level sentiment classification model by exploring multimodal data with fusion discriminant attentional network

Zhe Zhang, Zhu Wang, Xiaona Li, Nannan Liu, Bin Guo, Zhiwen Yu

Research output: Contribution to journal › Article › peer-review

44 Citations (Scopus)

Abstract

Aspect-level sentiment classification aims to identify the sentiment polarity of each aspect of a sentence. In the past, such analysis tasks mainly relied on text data. Nowadays, due to the popularization of smart devices and Internet services, people are generating more abundant data, including text, images, videos, etc. Multimodal data from the same post (e.g., a tweet) usually exhibit certain correlations. For example, image data might have an auxiliary effect on the text data, and reasonable processing of such multimodal data can help obtain much richer information for sentiment analysis. To this end, we propose an aspect-level sentiment classification model that explores multimodal data with a fusion discriminant attentional network. Specifically, we first leverage two memory networks to mine the intra-modality information of text and image, and then design a discriminant matrix to supervise the fusion of inter-modality information. Experimental results demonstrate the effectiveness of the proposed model.
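The abstract only outlines the architecture, so the following is a loose illustrative sketch, not the authors' actual design: every name, the memory-slot attention, and the scalar gating scheme are assumptions. It shows the general shape of aspect-guided attention over two per-modality memories, followed by a fusion step weighted by a cross-modal interaction ("discriminant") matrix.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modality_attention(memory, query):
    """Attend over memory slots (n, d) with an aspect query (d,)."""
    scores = softmax(memory @ query)   # (n,) attention weights
    return scores @ memory             # (d,) attended modality summary

rng = np.random.default_rng(0)
d = 8
text_mem = rng.normal(size=(5, d))   # hypothetical text memory slots
img_mem = rng.normal(size=(3, d))    # hypothetical image-region features
aspect = rng.normal(size=d)          # hypothetical aspect embedding

v_text = modality_attention(text_mem, aspect)
v_img = modality_attention(img_mem, aspect)

# Illustrative stand-in for the "discriminant matrix": pairwise
# cross-modal interactions, reduced to a scalar gate that weights
# how much each modality contributes to the fused representation.
D = np.outer(v_text, v_img)              # (d, d) interaction matrix
g = 1.0 / (1.0 + np.exp(-D.mean()))      # gate in (0, 1)
fused = g * v_text + (1.0 - g) * v_img   # (d,) fused representation
```

In the paper the fusion is learned and supervised; here the gate is a fixed sigmoid over the interaction matrix purely to make the data flow concrete.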

Original language: English
Pages (from-to): 1957-1974
Number of pages: 18
Journal: World Wide Web
Volume: 24
Issue number: 6
DOI
Publication status: Published - Nov 2021
