Unsupervised feature learning assisted visual sentiment analysis

Zuhe Li, Yangyu Fan, Fengqin Wang, Weihua Liu

Research output: Contribution to journal › Article › peer-review

Abstract

Visual sentiment analysis, which aims to understand the emotion and sentiment conveyed by visual content, has attracted increasing attention. In this paper, we propose a hybrid approach to visual sentiment concept classification built on an unsupervised feature learning architecture called the convolutional autoencoder. We first extract a representative set of unlabeled patches from the image dataset and discover useful features of these patches with sparse autoencoders. We then use a convolutional neural network (CNN) to obtain feature activations on full images for sentiment concept classification, and we fine-tune the network with a progressive strategy in order to filter out noisy samples in the weakly labeled training data. In parallel, we classify visual sentiment concepts in a traditional manner using low-level visual features. Finally, the classification results obtained with unsupervised feature learning and those obtained with traditional features are combined by a fusion algorithm to make the final prediction. Extensive experiments on benchmark datasets show that the proposed approach outperforms previous methods in visual sentiment analysis.
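As a rough illustration of the pipeline the abstract describes, the sketch below trains a tied-weight sparse autoencoder on random image patches, convolves the learned filters over full images to produce pooled feature vectors, and fuses two classifiers' scores by weighted averaging. All function names, hyperparameters (patch_size, n_hidden, rho, beta, alpha), and the weighted-average fusion rule are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)


def extract_patches(images, patch_size=8, n_patches=10000):
    """Sample random square patches from a list of grayscale images."""
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(img.shape[0] - patch_size + 1)
        x = rng.integers(img.shape[1] - patch_size + 1)
        patches.append(img[y:y + patch_size, x:x + patch_size].ravel())
    return np.stack(patches)


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def train_sparse_autoencoder(X, n_hidden=64, lr=0.5, rho=0.05,
                             beta=3.0, weight_decay=1e-4, epochs=200):
    """Tied-weight sparse autoencoder trained with batch gradient descent.

    Minimizes reconstruction error plus a KL-divergence sparsity penalty
    that pushes the mean hidden activation toward the target rho.
    """
    n, d = X.shape
    W = rng.normal(0.0, 0.01, size=(d, n_hidden))
    b1 = np.zeros(n_hidden)
    b2 = np.zeros(d)
    for _ in range(epochs):
        H = sigmoid(X @ W + b1)              # encoder activations
        X_hat = H @ W.T + b2                 # linear decoder (tied weights)
        delta_out = (X_hat - X) / n          # gradient of 0.5*MSE w.r.t. X_hat
        rho_hat = np.clip(H.mean(axis=0), 1e-6, 1 - 1e-6)
        # gradient of beta * KL(rho || rho_hat) w.r.t. each hidden activation
        sparse_grad = (beta / n) * (-rho / rho_hat + (1 - rho) / (1 - rho_hat))
        delta_hid = (delta_out @ W + sparse_grad) * H * (1 - H)
        grad_W = X.T @ delta_hid + delta_out.T @ H + weight_decay * W
        W -= lr * grad_W
        b1 -= lr * delta_hid.sum(axis=0)
        b2 -= lr * delta_out.sum(axis=0)
    return W, b1


def convolve_and_pool(img, W, b1, patch_size=8, stride=4):
    """Apply the learned filters at every grid location and mean-pool,
    yielding one fixed-length feature vector for a full image."""
    activations = []
    for y in range(0, img.shape[0] - patch_size + 1, stride):
        for x in range(0, img.shape[1] - patch_size + 1, stride):
            p = img[y:y + patch_size, x:x + patch_size].ravel()
            activations.append(sigmoid(p @ W + b1))
    return np.mean(activations, axis=0)


def fuse_scores(p_learned, p_lowlevel, alpha=0.6):
    """Hypothetical weighted late fusion of the two classifiers' scores."""
    return alpha * p_learned + (1 - alpha) * p_lowlevel
```

In the paper, the learned features feed a CNN that is progressively fine-tuned on weakly labeled data; the simple mean pooling and fixed fusion weight above stand in for those steps only as placeholders.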

Original language: English
Pages (from-to): 119-130
Number of pages: 12
Journal: International Journal of Multimedia and Ubiquitous Engineering
Volume: 11
Issue number: 10
DOI
Publication status: Published - 2016
