Out-of-bag (OOB) error, also called out-of-bag estimate, is a method of measuring the prediction error of random forests, boosted decision trees, and other machine learning models utilizing bootstrap aggregating (bagging).
When this bootstrap sampling process is repeated, as when building a random forest, many bootstrap samples and corresponding OOB sets are created.
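The bootstrap/OOB split can be sketched in plain Python (a minimal illustration using only the standard library; the function name and sample size are assumptions for the example). Because each draw is made with replacement, each original point is missed with probability (1 - 1/n)^n, which approaches 1/e, so roughly 36.8% of the data lands in the OOB set:

```python
import random

def bootstrap_split(n, rng):
    """Draw one bootstrap sample of size n (with replacement).

    Returns the in-bag indices and the out-of-bag (OOB) indices:
    every original index that the bootstrap draw missed.
    """
    in_bag = [rng.randrange(n) for _ in range(n)]
    oob = sorted(set(range(n)) - set(in_bag))
    return in_bag, oob

rng = random.Random(42)
n = 10_000
in_bag, oob = bootstrap_split(n, rng)

# Each point is missed with probability (1 - 1/n)**n -> 1/e,
# so about 36.8% of the data ends up out of bag.
oob_fraction = len(oob) / n
```

Repeating this split once per tree, as a random forest does, gives every tree its own OOB set on which it can later be evaluated.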
To ensure an accurate model, the bootstrap training sample size should be close to that of the original set.
[2] The number of iterations (trees in the forest) also matters: the ensemble must contain enough trees for the OOB error estimate to stabilize around the true error.
Out-of-bag error is frequently used for error estimation within random forests, but a study by Silke Janitza and Roman Hornung found that it can overestimate the true prediction error in settings with an equal number of observations from all response classes (balanced samples), small sample sizes, a large number of predictor variables, small correlation between predictors, and weak effects.