- The Boosting Approach to Machine Learning: An Overview, R. E. Schapire, 2001.
Schapire is one of the inventors of AdaBoost. This article starts with the pseudocode of AdaBoost, which is helpful for understanding the basic procedure of boosting algorithms.
Boosting is a machine learning meta-algorithm for supervised learning, based on a question posed by Kearns: can a set of weak learners create a single strong learner? (From Wikipedia)
Boosting Algorithms
Most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution over the training examples and adding them to a final strong classifier. When a weak learner is added, it is typically weighted in a way that is related to its accuracy. After a weak learner is added, the examples are reweighted: examples that are misclassified gain weight and examples that are classified correctly lose weight (some boosting algorithms actually decrease the weight of repeatedly misclassified examples, e.g., Boost by Majority and BrownBoost). Thus, future weak learners focus more on the examples that previous weak learners misclassified.
AdaBoost
The pseudocode of AdaBoost is as follows.
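A minimal Python sketch of the procedure might look like the following; the one-feature threshold stumps used as the weak learner, and the helper names `adaboost` and `predict`, are assumptions of this sketch rather than anything prescribed by [1]:

```python
import numpy as np

def adaboost(X, y, n_rounds=50):
    """Minimal AdaBoost sketch. Labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)              # initial distribution D_1 over examples
    learners = []                        # list of (feature, threshold, sign, alpha)
    for t in range(n_rounds):
        # Weak learner: the one-feature threshold stump with lowest weighted error.
        best = None
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for s in (1.0, -1.0):
                    pred = s * np.where(X[:, j] <= thr, 1.0, -1.0)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        err, j, thr, s = best
        if err >= 0.5:                   # no better than chance: stop early
            break
        err = max(err, 1e-12)            # guard against log(1/0) when err == 0
        alpha = 0.5 * np.log((1.0 - err) / err)   # learner weight from its error
        pred = s * np.where(X[:, j] <= thr, 1.0, -1.0)
        # Reweight: misclassified examples gain weight, correct ones lose weight.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()                     # renormalize so w stays a distribution
        learners.append((j, thr, s, alpha))
    return learners

def predict(learners, X):
    """Final strong classifier: sign of the alpha-weighted vote of weak learners."""
    votes = np.zeros(len(X))
    for j, thr, s, alpha in learners:
        votes += alpha * s * np.where(X[:, j] <= thr, 1.0, -1.0)
    return np.sign(votes)
```

On a toy dataset with `X` an (n, d) float array and `y` in {-1, +1}, `predict(adaboost(X, y), X)` returns the weighted-majority vote described by the bullets below.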
As we can see from this algorithm:
- The weight distribution over the training examples changes in each iteration, and the multiplicative update is determined by alpha.
- The choice of alpha is not arbitrary; instead, it is based on the error of the weak learner, as shown in the formula after this list. Refer to [1] for details.
- The aggregation of weak learners uses alpha to weight each learner.
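Concretely, writing epsilon_t for the weighted error of the weak learner h_t chosen in round t, the standard AdaBoost choices (as in [1]) are:

```latex
\alpha_t = \frac{1}{2}\ln\frac{1-\epsilon_t}{\epsilon_t},
\qquad
D_{t+1}(i) = \frac{D_t(i)\,\exp\!\bigl(-\alpha_t\, y_i\, h_t(x_i)\bigr)}{Z_t},
\qquad
H(x) = \operatorname{sign}\!\Bigl(\sum_{t=1}^{T}\alpha_t\, h_t(x)\Bigr)
```

where Z_t is the normalization factor that keeps D_{t+1} a distribution. Note that alpha_t grows as the error epsilon_t shrinks, so more accurate weak learners get larger votes in the final classifier H.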