ModalNet: an aspect-level sentiment classification model by exploring multimodal data with fusion discriminant attentional network

Zhe Zhang, Zhu Wang, Xiaona Li, Nannan Liu, Bin Guo, Zhiwen Yu

Research output: Contribution to journal › Article › peer-review

44 Scopus citations

Abstract

Aspect-level sentiment classification aims to identify the sentiment polarity of each aspect of a sentence. In the past, such analysis tasks relied mainly on text data. Nowadays, due to the popularization of smart devices and Internet services, people are generating more abundant data, including text, images, and video. Multimodal data from the same post (e.g., a tweet) usually have a certain correlation. For example, image data may have an auxiliary effect on the text data, and reasonable processing of such multimodal data can help obtain much richer information for sentiment analysis. To this end, we propose an aspect-level sentiment classification model that explores multimodal data with a fusion discriminant attentional network. Specifically, we first leverage two memory networks to mine the intra-modality information of text and image, and then design a discriminant matrix to supervise the fusion of inter-modality information. Experimental results demonstrate the effectiveness of the proposed model.
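
As a rough illustration of the pipeline described in the abstract, the following PyTorch-style sketch pairs an aspect-conditioned attention memory per modality with a simple learned gate for inter-modality fusion. The module names, feature dimensions, and the gating mechanism are assumptions made for illustration; the paper's actual memory-network and discriminant-matrix formulations are not reproduced here.

    # Minimal sketch of aspect-level multimodal fusion (illustrative only).
    # All names, dimensions, and the gate-based fusion below are assumptions;
    # they stand in for ModalNet's memory networks and discriminant matrix.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class AspectMemory(nn.Module):
        """Single-hop attention memory: attends modality features w.r.t. an aspect query."""
        def __init__(self, dim):
            super().__init__()
            self.attn = nn.Linear(dim * 2, 1)

        def forward(self, features, aspect):
            # features: (batch, n, dim), aspect: (batch, dim)
            n = features.size(1)
            query = aspect.unsqueeze(1).expand(-1, n, -1)
            scores = self.attn(torch.cat([features, query], dim=-1))  # (batch, n, 1)
            weights = F.softmax(scores, dim=1)
            return (weights * features).sum(dim=1)                    # (batch, dim)

    class MultimodalAspectClassifier(nn.Module):
        def __init__(self, dim=256, num_classes=3):
            super().__init__()
            self.text_memory = AspectMemory(dim)        # intra-modality: text
            self.image_memory = AspectMemory(dim)       # intra-modality: image
            self.fusion_gate = nn.Linear(dim * 2, dim)  # assumed stand-in for the discriminant fusion
            self.classifier = nn.Linear(dim, num_classes)

        def forward(self, text_feats, image_feats, aspect):
            t = self.text_memory(text_feats, aspect)
            v = self.image_memory(image_feats, aspect)
            # Gate how much the image channel contributes relative to the text channel.
            gate = torch.sigmoid(self.fusion_gate(torch.cat([t, v], dim=-1)))
            fused = gate * t + (1 - gate) * v
            return self.classifier(fused)

    # Toy usage: batch of 2 posts, 10 text tokens and 5 image regions, 256-dim features.
    model = MultimodalAspectClassifier()
    logits = model(torch.randn(2, 10, 256), torch.randn(2, 5, 256), torch.randn(2, 256))
    print(logits.shape)  # torch.Size([2, 3])

The gate here simply weighs the two attended representations against each other; it is a plausible placeholder for the supervised inter-modality fusion the paper describes, not the published design.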

Original language: English
Pages (from-to): 1957-1974
Number of pages: 18
Journal: World Wide Web
Volume: 24
Issue number: 6
DOIs
State: Published - Nov 2021

Keywords

  • Aspect-level sentiment classification
  • Discriminant attention network
  • Feature fusion
  • Multimodal data
