Visual question answering model based on visual relationship detection

Yuling Xi, Yanning Zhang, Songtao Ding, Shaohua Wan

Research output: Contribution to journal › Article › peer-review

87 Citations (Scopus)

Abstract

Visual question answering (VQA) is a learning task spanning two major fields, computer vision and natural language processing, and the development of deep learning has driven much of the progress in this area. Although research on question answering models has advanced considerably, VQA accuracy remains low, mainly because current model structures are relatively simple, their attention mechanisms deviate from human attention, and they lack higher-level logical reasoning ability. In response to these problems, we propose a VQA model based on multi-object visual relationship detection. First, appearance features are used in place of the image features of the original objects, and the appearance model is extended using the principle of word-vector similarity. The appearance features and relationship predicates are then mapped into the word-vector space and represented as fixed-length vectors. Finally, the element-wise concatenation of the image feature and the question vector is fed into a classifier to generate the output answer. Our method is benchmarked on the DAQUAR data set and evaluated by accuracy, WUPS@0.9, and WUPS@0.0.
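The final fusion step described in the abstract (concatenating the image feature with the question vector and passing the result to an answer classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the 300-dimensional embedding size, the random weights, and the number of candidate answers are all assumptions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions (not from the paper): appearance/relation embeddings
# and the question encoding share a 300-d word-vector space.
d_img, d_q, n_answers = 300, 300, 10

image_feature = rng.standard_normal(d_img)    # pooled appearance + relation embedding
question_vector = rng.standard_normal(d_q)    # encoded question

# Element-wise concatenation of image feature and question vector.
fused = np.concatenate([image_feature, question_vector])

# Illustrative linear classifier with a softmax over candidate answers;
# in practice the weights would be learned end to end.
W = rng.standard_normal((n_answers, d_img + d_q)) * 0.01
logits = W @ fused
probs = np.exp(logits - logits.max())
probs /= probs.sum()
answer = int(np.argmax(probs))
```

The softmax output is a distribution over a fixed answer vocabulary, which matches the abstract's framing of answer generation as classification.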

Original language: English
Article number: 115648
Journal: Signal Processing: Image Communication
Volume: 80
DOI
Publication status: Published - Feb 2020
