BrownBoost is a boosting algorithm that may be robust to noisy datasets.
BrownBoost is an adaptive version of the boost by majority algorithm.
As is the case for all boosting algorithms, BrownBoost is used in conjunction with other machine learning methods.
BrownBoost was introduced by Yoav Freund in 2001.[1]
AdaBoost performs well on a variety of datasets; however, it can be shown that AdaBoost does not perform well on noisy data sets.[2]
This is a result of AdaBoost's focus on examples that are repeatedly misclassified.
In contrast, BrownBoost effectively "gives up" on examples that are repeatedly misclassified.
The core assumption of BrownBoost is that noisy examples will be repeatedly mislabeled by the weak hypotheses and non-noisy examples will be correctly labeled frequently enough to not be "given up on."
In turn, if the final classifier is learned from the non-noisy examples, the generalization error of the final classifier may be much better than if learned from noisy and non-noisy examples.
Thus, if the training set is noisy (say 10% of all examples are assumed to be mislabeled), the booster can be told to accept a 10% error rate.
BrownBoost uses a non-convex potential loss function, so it does not fit into the AdaBoost framework.
The non-convex optimization provides a method to avoid overfitting noisy data sets.
However, in contrast to boosting algorithms that analytically minimize a convex loss function (e.g. AdaBoost and LogitBoost), BrownBoost solves a system of two equations and two unknowns using standard numerical methods.
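To make the non-convexity concrete, here is a minimal numerical sketch comparing AdaBoost's convex exponential loss with BrownBoost's potential as a function of an example's margin. The erf-based potential form 1 − erf((r + s)/√c) and the specific parameter values are assumptions taken from Freund's formulation for illustration, not definitions given in this text.

```python
import numpy as np
from scipy.special import erf

# Assumed BrownBoost potential (per Freund 2001): 1 - erf((r + s) / sqrt(c)),
# where r is an example's margin, s the remaining time, c the noise parameter.
def brownboost_potential(r, s, c):
    return 1.0 - erf((r + s) / np.sqrt(c))

def adaboost_loss(r):
    # AdaBoost's convex surrogate of the 0-1 loss.
    return np.exp(-r)

margins = np.linspace(-4.0, 4.0, 9)
c, s = 2.0, 0.5          # illustrative values only
for r in margins:
    print(f"margin {r:+.1f}:  exp-loss {adaboost_loss(r):8.3f}  "
          f"BrownBoost potential {brownboost_potential(r, s, c):6.3f}")
# The exponential loss blows up for badly misclassified examples (r << 0),
# while the erf-based potential flattens out, so such examples stop dominating
# the objective -- the "giving up" behaviour described above.
```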
The only parameter of BrownBoost (c in the algorithm) is the "time" the algorithm runs.
The theory of BrownBoost states that each hypothesis takes a variable amount of time (t in the algorithm) which is directly related to the weight given to the hypothesis, α.
The time parameter in BrownBoost is analogous to the number of iterations T in AdaBoost.
A larger value of c means that BrownBoost will treat the data as if it were less noisy and therefore will give up on fewer examples.
Conversely, a smaller value of c means that BrownBoost will treat the data as more noisy and give up on more examples.
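As a rough illustration of that effect, the sketch below evaluates the example weighting exp(−(r + s)²/c) assumed from Freund's formulation (it is not stated in the text above) for an example with a large negative margin, showing how a smaller c drives its weight toward zero, i.e. the example is given up on, while a larger c keeps it in play.

```python
import math

def brownboost_weight(r, s, c):
    # Assumed BrownBoost example weight (per Freund 2001): exp(-(r + s)^2 / c).
    return math.exp(-((r + s) ** 2) / c)

s = 0.5                      # illustrative remaining time
bad_margin = -3.0            # an example the weak hypotheses keep getting wrong
for c in (0.5, 2.0, 8.0):
    w = brownboost_weight(bad_margin, s, c)
    print(f"c = {c:4.1f}  ->  weight of a badly misclassified example: {w:.2e}")
# Small c: the weight is essentially zero, so the example is ignored ("given up on").
# Large c: the weight stays appreciable, so BrownBoost keeps trying to fit it.
```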
During each iteration of the algorithm, a hypothesis is selected with some advantage over random guessing.
The weight of this hypothesis, α, and the amount of time passed, t, during the iteration are simultaneously solved in a system of two non-linear equations (1. the hypothesis is uncorrelated with the example weights, and 2. the potential is held constant) with two unknowns (the weight of the hypothesis α and the time passed t).
This can be solved by bisection (as implemented in the JBoost software package) or Newton's method (as described in the original paper by Freund).
Once these equations are solved, the margins of each example (r_i(x_j) in the algorithm) and the amount of time remaining, s, are updated appropriately.
This process is repeated until there is no time remaining.
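To illustrate that inner solve, here is a minimal sketch of one such two-equation solve in Python, using scipy.optimize.fsolve as a stand-in for the bisection and Newton routines mentioned above. The exponential weight and erf potential forms, the starting guess, and the synthetic data are assumptions for illustration, not the canonical implementation.

```python
import numpy as np
from scipy.special import erf
from scipy.optimize import fsolve

def solve_alpha_t(r, a, s, c):
    """Solve one BrownBoost iteration's 2x2 nonlinear system for (alpha, t).

    r : current margins r_i(x_j);  a : h(x_j) * y_j in {-1, +1};
    s : time remaining;            c : BrownBoost's noise/time parameter.
    The residuals encode the two constraints named in the text, using the
    exp/erf weight and potential forms assumed from Freund's paper."""
    def residuals(v):
        alpha, t = v
        z_new = r + alpha * a + s - t          # margin-plus-time after the update
        z_old = r + s
        uncorrelated = np.sum(np.exp(-z_new**2 / c) * a)      # 1. zero edge under new weights
        potential_kept = np.sum(erf(z_new / np.sqrt(c))
                                - erf(z_old / np.sqrt(c)))     # 2. potential unchanged
        return [uncorrelated, potential_kept]

    alpha, t = fsolve(residuals, x0=[0.1, 0.1])   # stand-in for bisection/Newton
    return alpha, t

# Tiny synthetic state: six margins and whether the current weak hypothesis
# classifies each example correctly (+1) or not (-1).
r = np.array([0.5, 1.0, -0.2, 0.3, -1.5, 0.8])
a = np.array([+1, +1, +1, -1, -1, +1])
alpha, t = solve_alpha_t(r, a, s=1.0, c=2.0)
print(f"hypothesis weight alpha = {alpha:.4f}, time consumed t = {t:.4f}")
# The margins and remaining time would then be updated,
#   r <- r + alpha * a,   s <- s - t,
# and the loop repeats until no time remains.
```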
Because the potential is held constant at each iteration, the final training error is expected to be near 1 − erf(√c); however, the final potential function is not the 0–1 loss function.
For the final error to be exactly 1 − erf(√c), the variance of the loss function must decrease linearly w.r.t. time to form the 0–1 loss function at the end of boosting iterations.
This is not yet discussed in the literature and is not in the definition of the algorithm below.
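As a concrete tie-back to the earlier 10% example, the following short sketch inverts the relation final error ≈ 1 − erf(√c) to pick c for a tolerated error rate; the relation is only an approximation from the potential argument above, so this is a heuristic for choosing c rather than an exact rule.

```python
from scipy.special import erfinv

# Assumed relation from the potential argument above: final error ≈ 1 - erf(sqrt(c)).
# Invert it to pick c for a tolerated training error rate (e.g. the 10% example above).
for target_error in (0.20, 0.10, 0.01):
    c = erfinv(1.0 - target_error) ** 2
    print(f"tolerated error {target_error:.0%}  ->  c ≈ {c:.3f}")
# A smaller tolerated error gives a larger c, i.e. BrownBoost runs "longer" and
# gives up on fewer examples, consistent with the description of c above.
```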
The final classifier is a linear combination of weak hypotheses and is evaluated in the same manner as most other boosting algorithms.
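A minimal sketch of that evaluation, assuming the usual weighted-majority-vote form sign(Σ α_i h_i(x)); the stump hypotheses and weights below are hypothetical, standing in for whatever the boosting loop produced.

```python
def brownboost_predict(x, hypotheses, alphas):
    """Evaluate the final classifier as a weighted vote of the weak hypotheses:
    sign(sum_i alpha_i * h_i(x)).  The names here are illustrative only."""
    score = sum(alpha * h(x) for h, alpha in zip(hypotheses, alphas))
    return 1 if score >= 0 else -1

# Toy usage with two hypothetical decision stumps on a 2-d point.
hypotheses = [lambda x: 1 if x[0] > 0.5 else -1,
              lambda x: 1 if x[1] > 0.0 else -1]
alphas = [0.8, 0.3]
print(brownboost_predict((0.7, -0.2), hypotheses, alphas))   # -> 1
```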
In preliminary experimental results with noisy datasets, BrownBoost achieved lower generalization error than AdaBoost; however, LogitBoost performed as well as BrownBoost.[4]
An implementation of BrownBoost can be found in the open source software JBoost.