Abstract
To better understand, search, and classify image and video content, many visual feature descriptors have been proposed to capture elementary visual characteristics such as shape, color, and texture. How to integrate these heterogeneous visual features and identify the important ones for a specific vision task has become an increasingly critical problem. In this paper, we propose a novel Sparse Multimodal Learning (SMML) approach that integrates such heterogeneous features using joint structured sparsity regularizations to learn feature importance for vision tasks from both group-wise and individual points of view. A new optimization algorithm is also introduced to solve the non-smooth objective, with rigorously proved global convergence. We applied our SMML method to five broadly used object categorization and scene understanding image data sets, for both single-label and multi-label image classification tasks. For each data set we integrate six different types of popularly used image features. Compared to existing scene and object categorization methods using either a single modality or multiple modalities of features, our approach consistently achieves better performance.
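The abstract does not spell out the exact regularizers. A common instantiation of "group-wise plus individual" structured sparsity, sketched below as an assumption rather than the paper's exact formulation, combines a group penalty over per-modality row blocks of the weight matrix (an ℓ2,1-style norm, which drives whole modalities to zero) with an element-wise ℓ1 penalty (which selects individual features). The function names, block layout, and the block soft-thresholding step used by proximal-gradient solvers are all illustrative.

```python
import numpy as np

def joint_sparsity_penalty(W, groups, lam_group=1.0, lam_ind=1.0):
    """Group-wise + individual sparsity penalty (illustrative).

    W      : (d, k) weight matrix; rows are features, columns are classes/tasks.
    groups : list of index arrays, one per modality (row blocks of W).
    Group term: sum over modalities and columns of the l2 norm of each block,
    so an entire modality can be zeroed out jointly.
    Individual term: plain l1 norm over all entries.
    """
    group_term = sum(
        np.linalg.norm(W[g, c])
        for g in groups
        for c in range(W.shape[1])
    )
    ind_term = np.abs(W).sum()
    return lam_group * group_term + lam_ind * ind_term

def prox_group(W, groups, t):
    """Block soft-thresholding: the proximal operator of the group term.

    Shrinks each per-modality, per-column block toward zero and zeroes
    blocks whose l2 norm is below the threshold t.
    """
    W = W.copy()
    for g in groups:
        for c in range(W.shape[1]):
            nrm = np.linalg.norm(W[g, c])
            scale = max(0.0, 1.0 - t / nrm) if nrm > 0 else 0.0
            W[g, c] = scale * W[g, c]
    return W
```

With two modalities of two features each, a weight matrix whose second modality block is all zero pays no group penalty for that block, which is exactly the modality-selection effect the group term is meant to produce.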
Original language | English |
---|---|
Article number | 6619242 |
Pages (from-to) | 3097-3102 |
Number of pages | 6 |
Journal | Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition |
DOIs | |
State | Published - 2013 |
Externally published | Yes |
Event | 26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2013 - Portland, OR, United States. Duration: 23 Jun 2013 → 28 Jun 2013 |
Keywords
- Data Integration
- Structured Sparsity
- Visual Features Fusion