The basic idea of boosting as functional gradient descent, with a regression tree learned at each stage/step, is known as gradient boosting and was introduced in a Stanford paper (a minimal code sketch follows the reference below):
- Jerome Friedman. Greedy Function Approximation: A Gradient Boosting Machine. The Annals of Statistics, 2001.
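
To make the idea concrete, here is a minimal sketch of the gradient boosting loop for squared-error loss, where the negative gradient is simply the residual y - F(x). It assumes scikit-learn's DecisionTreeRegressor as the base learner and NumPy arrays for the data; the function names and parameters are illustrative, not Friedman's original notation or any library's API.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_gradient_boosting(X, y, n_stages=100, learning_rate=0.1, max_depth=3):
    """Gradient boosting for squared-error loss (sketch); X, y are NumPy arrays."""
    f0 = float(np.mean(y))                 # F_0: best constant model
    prediction = np.full(len(y), f0)
    trees = []
    for _ in range(n_stages):
        residual = y - prediction          # negative gradient of squared error
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X, residual)              # each stage/step is a small regression tree
        prediction += learning_rate * tree.predict(X)
        trees.append(tree)
    return f0, trees

def predict_gradient_boosting(X, f0, trees, learning_rate=0.1):
    """Sum the constant start model and the shrunken tree predictions."""
    prediction = np.full(X.shape[0], f0)
    for tree in trees:
        prediction += learning_rate * tree.predict(X)
    return prediction
```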
The same author wrote a note extending gradient boosting to a stochastic version, stochastic gradient boosting (a sketch of the modification follows the reference):
- Jerome Friedman. Stochastic Gradient Boosting. 1999.
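
The stochastic variant changes essentially one step: each stage fits its tree on a random subsample of the training data drawn without replacement. A hedged sketch of that modification, building on the loop above; the subsample fraction and names are illustrative.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_stochastic_gradient_boosting(X, y, n_stages=100, learning_rate=0.1,
                                     max_depth=3, subsample=0.5, seed=0):
    """Stochastic gradient boosting sketch: subsample rows before fitting each tree."""
    rng = np.random.default_rng(seed)
    f0 = float(np.mean(y))
    prediction = np.full(len(y), f0)
    trees = []
    n_rows = len(y)
    n_sampled = max(1, int(subsample * n_rows))
    for _ in range(n_stages):
        residual = y - prediction
        idx = rng.choice(n_rows, size=n_sampled, replace=False)  # fresh subsample per stage
        tree = DecisionTreeRegressor(max_depth=max_depth)
        tree.fit(X[idx], residual[idx])                # the tree sees only the subsample
        prediction += learning_rate * tree.predict(X)  # but the update applies to all rows
        trees.append(tree)
    return f0, trees
```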
(Stochastic) gradient boosting uses regression/classification trees as base learners, so the training procedure has to learn a tree at every stage. If you are interested in distributed learning of trees using MapReduce, you might want to refer to a recent Google paper:
- PLANET: Massively Parallel Learning of Tree Ensembles with MapReduce. VLDB 2009
A recent Yahoo paper describes implementations of stochastic gradient boosted decision trees using MPI and Hadoop:
- Stochastic Gradient Boosted Distributed Decision Trees. CIKM 2009