TY - GEN
T1 - Heterogeneous image features integration via multi-modal semi-supervised learning model
AU - Cai, Xiao
AU - Nie, Feiping
AU - Cai, Weidong
AU - Huang, Heng
PY - 2013
Y1 - 2013
AB - Automatic image categorization has become increasingly important with the development of the Internet and the growth in the size of image databases. Although image categorization can be formulated as a typical multi-class classification problem, real-world images raise two major challenges. On one hand, although using more labeled training data may improve prediction performance, obtaining image labels is a time-consuming and often biased process. On the other hand, more and more visual descriptors have been proposed to describe the objects and scenes appearing in images, and different features capture different aspects of the visual characteristics. Therefore, how to integrate heterogeneous visual features for semi-supervised learning is crucial for categorizing large-scale image data. In this paper, we propose a novel approach to integrate heterogeneous features by performing multi-modal semi-supervised classification on unlabeled as well as unsegmented images. Considering each type of feature as one modality and taking advantage of the large amount of unlabeled data, our new adaptive multi-modal semi-supervised classification (AMMSS) algorithm learns a commonly shared class indicator matrix and the weights for different modalities (image features) simultaneously.
KW - Heterogeneous Data Integration
KW - Multi-Modal Feature Integration
KW - Semi-Supervised Learning
UR - http://www.scopus.com/inward/record.url?scp=84898782680&partnerID=8YFLogxK
U2 - 10.1109/ICCV.2013.218
DO - 10.1109/ICCV.2013.218
M3 - Conference contribution
AN - SCOPUS:84898782680
SN - 9781479928392
T3 - Proceedings of the IEEE International Conference on Computer Vision
SP - 1737
EP - 1744
BT - Proceedings - 2013 IEEE International Conference on Computer Vision, ICCV 2013
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 2013 14th IEEE International Conference on Computer Vision, ICCV 2013
Y2 - 1 December 2013 through 8 December 2013
ER -