ModalNet: an aspect-level sentiment classification model by exploring multimodal data with fusion discriminant attentional network

Abstract
Aspect-level sentiment classification aims to identify the sentiment polarity toward each aspect of a sentence. In the past, such analysis tasks relied mainly on text data. Nowadays, with the popularization of smart devices and Internet services, people are generating richer data, including text, images, and video. Multimodal data from the same post (e.g., a tweet) usually exhibit certain correlations. For example, image data may have an auxiliary effect on the text data, and reasonable processing of such multimodal data can yield much richer information for sentiment analysis. To this end, we propose an aspect-level sentiment classification model that explores multimodal data with a fusion discriminant attentional network. Specifically, we first leverage two memory networks to mine the intra-modality information of text and image, and then design a discriminant matrix to supervise the fusion of inter-modality information. Experimental results demonstrate the effectiveness of the proposed model.
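The abstract's pipeline (aspect-guided attention within each modality via memory networks, then inter-modality fusion) can be sketched roughly as below. This is a minimal illustrative sketch, not the paper's implementation: the dimensions, variable names, single-hop attention, and the sigmoid gate (a simple stand-in for the discriminant-matrix supervision, whose details are not given in the abstract) are all assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_attention(query, memory):
    """One hop of memory-network attention: score each memory slot
    against the aspect query, return the attention-weighted sum."""
    scores = memory @ query                 # (slots,)
    weights = softmax(scores)               # attention over slots
    return weights @ memory                 # (dim,) pooled representation

# Hypothetical setup: 4 text tokens and 3 image regions, both already
# projected to a shared 8-dimensional space (projection omitted here).
rng = np.random.default_rng(0)
aspect = rng.normal(size=8)                 # aspect embedding (the query)
text_mem = rng.normal(size=(4, 8))          # text token features
image_mem = rng.normal(size=(3, 8))         # image region features

# Intra-modality mining: one memory network per modality.
text_repr = memory_attention(aspect, text_mem)
image_repr = memory_attention(aspect, image_mem)

# Inter-modality fusion: a scalar gate decides how much the image
# channel contributes (stand-in for discriminant supervision).
gate = 1.0 / (1.0 + np.exp(-(text_repr @ image_repr) / np.sqrt(8)))
fused = text_repr + gate * image_repr       # (8,) joint representation
```

In a full model, `fused` would feed a softmax classifier over sentiment polarities, and the discriminant matrix would be trained to steer the fusion weights.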
| Original language | English |
|---|---|
| Pages (from-to) | 1957-1974 |
| Number of pages | 18 |
| Journal | World Wide Web |
| Volume | 24 |
| Issue number | 6 |
| DOIs | |
| State | Published - Nov 2021 |
Keywords
- Aspect-level sentiment classification
- Discriminant attention network
- Feature fusion
- Multimodal data