Reweighted-Boosting: A Gradient-Based Boosting Optimization Framework

Guanxiong He, Zheng Wang, Liaoyuan Tang, Weizhong Yu, Feiping Nie, Xuelong Li

Research output: Contribution to journal › Article › peer-review

Abstract

Boosting is a well-established ensemble learning approach that enhances overall performance by combining multiple weak learners in a linear combination. It operates on the principle of using new learners to compensate for the shortcomings of previous ones, and it is known for its low computational resource requirements and its ability to mitigate the risk of overfitting. However, from the perspective of convex optimization, classical boosting methods often converge to local rather than global optima when minimizing the target loss, owing to their greedy strategy. In this article, we address this issue and propose a novel optimization framework for the boosting paradigm. Our framework refines the ensemble model by further minimizing the loss function through the reallocation of base learner weights, yielding a more robust and powerful learner. Experiments on a variety of real-world and synthetic datasets confirm that our Reweighted-Boosting model consistently outperforms its counterparts. It also increases the classification margin on the data, making it a valuable enhancement to the original boosting algorithms.
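As a concrete illustration of the reweighting idea described above (a minimal sketch, not the authors' exact procedure): starting from a trained AdaBoost ensemble, the base learner weights can be re-optimized jointly by gradient descent on the exponential loss instead of being fixed by the greedy stagewise rule. The scikit-learn usage, the learning rate, and the iteration count below are illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier

# Train a standard AdaBoost ensemble, then reallocate its learner weights.
X, y = make_classification(n_samples=500, random_state=0)
y_pm = 2 * y - 1                      # labels mapped to {-1, +1}

ada = AdaBoostClassifier(n_estimators=20, random_state=0).fit(X, y)
H = np.array([2 * h.predict(X) - 1 for h in ada.estimators_]).T   # (n, T) base predictions
alpha = np.array(ada.estimator_weights_, dtype=float)             # greedy boosting weights

# Gradient descent on the exponential loss L(alpha) = mean(exp(-y * H @ alpha)).
lr = 0.05
for _ in range(200):
    margin = y_pm * (H @ alpha)
    grad = -(H * (y_pm * np.exp(-margin))[:, None]).mean(axis=0)  # dL/d alpha
    alpha -= lr * grad

reweighted_pred = np.sign(H @ alpha)
print("original accuracy:  ", ada.score(X, y))
print("reweighted accuracy:", (reweighted_pred == y_pm).mean())
```

With the base learners frozen, minimizing the exponential loss over the weight vector alone is a convex problem, so plain gradient descent converges to its global optimum; the paper's framework may employ a different loss, optimizer, or stochastic variant.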

Keywords

  • AdaBoost
  • boosting
  • ensemble learning
  • gradient boosting
  • stochastic gradient descent
