In machine learning, the random subspace method,[1] also called attribute bagging[2] or feature bagging, is an ensemble learning method that attempts to reduce the correlation between estimators in an ensemble by training each of them on a random subset of the features instead of the entire feature set.
For this reason, random subspaces are an attractive choice for high-dimensional problems where the number of features is much larger than the number of training points, such as learning from fMRI data[3] or gene expression data.
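The idea can be illustrated with a minimal sketch: each ensemble member is trained on a small random subset of the feature columns, and their predictions are combined by majority vote. The base learner here is a simple nearest-centroid classifier, and all names (`random_subspace_ensemble`, `subspace_size`, etc.) are illustrative choices, not part of any standard API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy high-dimensional data: 40 samples, 50 features, binary labels
# driven by only the first two features.
n, p = 40, 50
X = rng.normal(size=(n, p))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

def fit_centroids(X, y):
    # Nearest-centroid base learner: store one class mean per label.
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict_centroids(model, X):
    classes = sorted(model)
    # Distance from every sample to each class centroid.
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in classes])
    return np.array(classes)[d.argmin(axis=0)]

def random_subspace_ensemble(X, y, n_estimators=25, subspace_size=5):
    # Train each base learner on a random subset of feature columns.
    ensemble = []
    for _ in range(n_estimators):
        feats = rng.choice(X.shape[1], size=subspace_size, replace=False)
        ensemble.append((feats, fit_centroids(X[:, feats], y)))
    return ensemble

def predict_ensemble(ensemble, X):
    # Majority vote over the base learners' predictions.
    votes = np.stack([predict_centroids(m, X[:, f]) for f, m in ensemble])
    return (votes.mean(axis=0) > 0.5).astype(int)

ensemble = random_subspace_ensemble(X, y)
acc = (predict_ensemble(ensemble, X) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Because each learner sees only a few of the 50 columns, the learners are decorrelated; the vote aggregates their partially independent errors, which is the mechanism the method relies on.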
To tackle high-dimensional sparse problems, a framework named Random Subspace Ensemble (RaSE)[16] was developed.
RaSE combines weak learners trained in random subspaces with a two-layer structure and an iterative process.[17]
RaSE has been shown to enjoy appealing theoretical properties and good practical performance.