TY - GEN
T1 - Joint sparse learning for classification ensemble
AU - Li, Lin
AU - Tong, Fei
AU - Stolkin, Rustam
AU - Hu, Jinwen
AU - Yang, Feng
N1 - Publisher Copyright:
© 2017 IEEE.
PY - 2017/8/4
Y1 - 2017/8/4
N2 - Ensemble methods use multiple classifiers to achieve better decisions than could be achieved using any of the constituent classifiers alone. However, both theoretical and experimental evidence has shown that very large ensembles are not necessarily superior, and small ensembles can often achieve better results. In this paper, we show how to combine a set of weak classifiers into a robust ensemble by using a joint sparse representation method, which assigns a sparse coefficient vector to the decision of each classifier. The sparse vector contains many zero entries, so the final ensemble employs only a small number of classifiers, those corresponding to the non-zero entries. Training data are partitioned into several sub-groups to generate sub-underdetermined systems. The joint sparse method enables these sub-groups to share their information about individual classifiers, yielding an improved overall classification. Partitioning the training dataset into sub-groups makes the proposed joint sparse ensemble method parallelizable and thus suitable for large-scale problems. In contrast, previous work on sparse approaches to ensemble learning was limited to datasets smaller than the number of classifiers. Two different strategies for generating the sub-underdetermined systems are described, and experiments show both to be effective when tested with two different data manipulation methods. Experiments compare the performance of the joint sparse ensemble learning method with five other state-of-the-art methods from the literature, each designed to train small and efficient ensembles. Results suggest that joint sparse ensemble learning outperforms the other algorithms on most datasets.
AB - Ensemble methods use multiple classifiers to achieve better decisions than could be achieved using any of the constituent classifiers alone. However, both theoretical and experimental evidence has shown that very large ensembles are not necessarily superior, and small ensembles can often achieve better results. In this paper, we show how to combine a set of weak classifiers into a robust ensemble by using a joint sparse representation method, which assigns a sparse coefficient vector to the decision of each classifier. The sparse vector contains many zero entries, so the final ensemble employs only a small number of classifiers, those corresponding to the non-zero entries. Training data are partitioned into several sub-groups to generate sub-underdetermined systems. The joint sparse method enables these sub-groups to share their information about individual classifiers, yielding an improved overall classification. Partitioning the training dataset into sub-groups makes the proposed joint sparse ensemble method parallelizable and thus suitable for large-scale problems. In contrast, previous work on sparse approaches to ensemble learning was limited to datasets smaller than the number of classifiers. Two different strategies for generating the sub-underdetermined systems are described, and experiments show both to be effective when tested with two different data manipulation methods. Experiments compare the performance of the joint sparse ensemble learning method with five other state-of-the-art methods from the literature, each designed to train small and efficient ensembles. Results suggest that joint sparse ensemble learning outperforms the other algorithms on most datasets.
UR - http://www.scopus.com/inward/record.url?scp=85029901108&partnerID=8YFLogxK
U2 - 10.1109/ICCA.2017.8003205
DO - 10.1109/ICCA.2017.8003205
M3 - Conference contribution
AN - SCOPUS:85029901108
T3 - IEEE International Conference on Control and Automation, ICCA
SP - 1043
EP - 1048
BT - 2017 13th IEEE International Conference on Control and Automation, ICCA 2017
PB - IEEE Computer Society
T2 - 13th IEEE International Conference on Control and Automation, ICCA 2017
Y2 - 3 July 2017 through 6 July 2017
ER -