Abstract
Visual sentiment analysis, which aims to understand the emotion and sentiment conveyed in visual content, has attracted increasing attention. In this paper, we propose a hybrid approach for visual sentiment concept classification built on an unsupervised feature learning architecture, the convolutional autoencoder. We first extract a representative set of unlabeled patches from the image dataset and learn useful features from these patches with sparse autoencoders. We then use a convolutional neural network (CNN) to obtain feature activations on full images for sentiment concept classification. We also fine-tune the network with a progressive strategy to filter out noisy samples from the weakly labeled training data. In parallel, we classify visual sentiment concepts with low-level visual features in the traditional manner. Finally, the classification results from unsupervised feature learning and those from traditional features are combined by a fusion algorithm to produce the final prediction. Extensive experiments on benchmark datasets show that the proposed approach achieves better performance in visual sentiment analysis than its predecessors.
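The sketch below illustrates, in simplified form, the two core ideas described in the abstract: learning patch-level features with a sparse autoencoder, and reusing the learned encoder weights as convolution filters to obtain feature activations on full images. It is not the authors' exact pipeline; the patch size, hidden dimension, sparsity weight, stride, and synthetic data are all illustrative assumptions, and the progressive fine-tuning and fusion steps are omitted.

```python
# Minimal sketch (assumed hyperparameters, synthetic data) of:
# (1) sparse-autoencoder feature learning on unlabeled patches, and
# (2) convolving full images with the learned filters to get feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F

PATCH = 8              # assumed patch size in pixels
HIDDEN = 64            # assumed number of learned features
SPARSITY_WEIGHT = 1e-3 # assumed weight of the L1 sparsity penalty

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(PATCH * PATCH * 3, HIDDEN)
        self.decoder = nn.Linear(HIDDEN, PATCH * PATCH * 3)

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))  # hidden activations
        return self.decoder(h), h

def train_on_patches(patches, epochs=10):
    """Learn features from unlabeled patches (reconstruction loss + sparsity)."""
    model = SparseAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):
        recon, hidden = model(patches)
        loss = F.mse_loss(recon, patches) + SPARSITY_WEIGHT * hidden.abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

def feature_activations(model, images):
    """Use the encoder weights as convolution filters on full images."""
    filters = model.encoder.weight.detach().view(HIDDEN, 3, PATCH, PATCH)
    return torch.sigmoid(F.conv2d(images, filters, stride=4))  # assumed stride

if __name__ == "__main__":
    # Synthetic stand-ins for the unlabeled patch set and a batch of full images.
    patches = torch.rand(1024, PATCH * PATCH * 3)
    images = torch.rand(4, 3, 64, 64)
    sae = train_on_patches(patches)
    maps = feature_activations(sae, images)
    print(maps.shape)  # (4, HIDDEN, H', W') feature maps fed to the classifier
```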
| Field | Value |
|---|---|
| Original language | English |
| Pages (from-to) | 119-130 |
| Number of pages | 12 |
| Journal | International Journal of Multimedia and Ubiquitous Engineering |
| Volume | 11 |
| Issue | 10 |
| DOI | |
| Publication status | Published - 2016 |