TY - JOUR
T1 - ModalNet
T2 - an aspect-level sentiment classification model by exploring multimodal data with fusion discriminant attentional network
AU - Zhang, Zhe
AU - Wang, Zhu
AU - Li, Xiaona
AU - Liu, Nannan
AU - Guo, Bin
AU - Yu, Zhiwen
N1 - Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.
PY - 2021/11
Y1 - 2021/11
N2 - Aspect-level sentiment classification aims to identify the sentiment polarity over each aspect of a sentence. In the past, such analysis tasks mainly relied on text data. Nowadays, due to the popularization of smart devices and Internet services, people are generating more abundant data, including text, images, and video. Multimodal data from the same post (e.g., a tweet) usually have a certain correlation. For example, image data might have an auxiliary effect on the text data, and reasonable processing of such multimodal data can help obtain much richer information for sentiment analysis. To this end, we propose an aspect-level sentiment classification model that explores multimodal data with a fusion discriminant attentional network. Specifically, we first leverage two memory networks to mine the intra-modality information of text and image, and then design a discriminant matrix to supervise the fusion of inter-modality information. Experimental results demonstrate the effectiveness of the proposed model.
AB - Aspect-level sentiment classification aims to identify the sentiment polarity over each aspect of a sentence. In the past, such analysis tasks mainly relied on text data. Nowadays, due to the popularization of smart devices and Internet services, people are generating more abundant data, including text, images, and video. Multimodal data from the same post (e.g., a tweet) usually have a certain correlation. For example, image data might have an auxiliary effect on the text data, and reasonable processing of such multimodal data can help obtain much richer information for sentiment analysis. To this end, we propose an aspect-level sentiment classification model that explores multimodal data with a fusion discriminant attentional network. Specifically, we first leverage two memory networks to mine the intra-modality information of text and image, and then design a discriminant matrix to supervise the fusion of inter-modality information. Experimental results demonstrate the effectiveness of the proposed model.
KW - Aspect-level sentiment classification
KW - Discriminant attention network
KW - Feature fusion
KW - Multimodal data
UR - http://www.scopus.com/inward/record.url?scp=85115188436&partnerID=8YFLogxK
U2 - 10.1007/s11280-021-00955-7
DO - 10.1007/s11280-021-00955-7
M3 - Article
AN - SCOPUS:85115188436
SN - 1386-145X
VL - 24
SP - 1957
EP - 1974
JO - World Wide Web
JF - World Wide Web
IS - 6
ER -