BIRCH (balanced iterative reducing and clustering using hierarchies) is an unsupervised data mining algorithm used to perform hierarchical clustering over particularly large data-sets.[1]
With modifications it can also be used to accelerate k-means clustering and Gaussian mixture modeling with the expectation–maximization algorithm.[2]
An advantage of BIRCH is its ability to incrementally and dynamically cluster incoming, multi-dimensional metric data points in an attempt to produce the best quality clustering for a given set of resources (memory and time constraints).
In most cases, BIRCH only requires a single scan of the database.
Its inventors claim BIRCH to be the "first clustering algorithm proposed in the database area to handle 'noise' (data points that are not part of the underlying pattern) effectively",[1] beating DBSCAN by two months.
The BIRCH algorithm received the SIGMOD 10 year test of time award in 2006.
Previous clustering algorithms performed less effectively over very large databases and did not adequately consider the case in which a data set was too large to fit in main memory. As a result, there was a lot of overhead maintaining high clustering quality while minimizing the cost of additional IO (input/output) operations.
Furthermore, most of BIRCH's predecessors inspect all data points (or all currently existing clusters) equally for each 'clustering decision' and do not perform heuristic weighting based on the distance between these data points.
BIRCH, by contrast, makes full use of available memory to derive the finest possible sub-clusters while minimizing I/O costs.
It is also an incremental method that does not require the whole data set in advance.
The BIRCH algorithm takes as input a set of N data points, represented as real-valued vectors, and a desired number of clusters K. It operates in four phases, the second of which is optional.
In the first phase it builds a clustering feature (CF) tree out of the data points, a height-balanced tree data structure. In the second phase, the algorithm scans all the leaf entries in the initial CF tree to rebuild a smaller CF tree, while removing outliers and grouping crowded subclusters into larger ones.
This step is marked optional in the original presentation of BIRCH.
In the third phase an existing clustering algorithm is used to cluster all leaf entries; here an agglomerative hierarchical clustering algorithm is applied directly to the subclusters, which are represented by their CF vectors.
After this step a set of clusters is obtained that captures the major distribution patterns in the data.
However, there might exist minor and localized inaccuracies which can be handled by an optional step 4.
In step 4 the centroids of the clusters produced in step 3 are used as seeds, and the data points are redistributed to their closest seed to obtain a new set of clusters.
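As a usage illustration, the following minimal sketch runs the BIRCH implementation shipped with scikit-learn on toy data; the threshold, branching_factor, and n_clusters values are arbitrary example settings, not recommendations from the original paper:

```python
# Minimal sketch using scikit-learn's Birch implementation on toy data.
# The parameter values below are arbitrary examples, not recommendations.
import numpy as np
from sklearn.cluster import Birch

rng = np.random.default_rng(0)
# Three well-separated Gaussian blobs as toy input data.
X = np.vstack([
    rng.normal(loc=(0, 0), scale=0.3, size=(100, 2)),
    rng.normal(loc=(5, 5), scale=0.3, size=(100, 2)),
    rng.normal(loc=(0, 5), scale=0.3, size=(100, 2)),
])

# threshold and branching_factor control the CF-tree construction (phase 1);
# n_clusters controls the final global clustering step (phase 3).
model = Birch(threshold=0.5, branching_factor=50, n_clusters=3)
labels = model.fit_predict(X)

# Because BIRCH is incremental, further batches can be added with partial_fit.
model.partial_fit(rng.normal(loc=(5, 0), scale=0.3, size=(50, 2)))
print(np.bincount(labels))
```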
Each subcluster is summarized by a clustering feature CF = (N, LS, SS): the number of points it contains, their linear sum, and their square sum. From these, quantities such as the centroid, the radius, and inter-cluster distances can be computed; in multidimensional cases the square root in these formulas should be replaced with a suitable norm.
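For reference, one common formulation of these quantities (a restatement of the standard definitions, with SS taken as the scalar square sum) is:

\[
\mathrm{CF} = (N,\ \mathrm{LS},\ \mathrm{SS}),\qquad
\mathrm{LS} = \sum_{i=1}^{N} x_i,\qquad
\mathrm{SS} = \sum_{i=1}^{N} \lVert x_i \rVert^2 ,
\]
\[
C = \frac{\mathrm{LS}}{N},\qquad
R = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\lVert x_i - C\rVert^2}
  = \sqrt{\frac{\mathrm{SS}}{N} - \left\lVert \frac{\mathrm{LS}}{N} \right\rVert^2},
\]
\[
\mathrm{CF}_A + \mathrm{CF}_B = \bigl(N_A + N_B,\ \mathrm{LS}_A + \mathrm{LS}_B,\ \mathrm{SS}_A + \mathrm{SS}_B\bigr).
\]

The additivity in the last line is what allows the clustering feature of merged subclusters to be computed without revisiting the underlying data points.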
When the variance is obtained by subtraction as SS/N − (LS/N)², or from similar terms in the distance computations, catastrophic cancellation can occur and yield poor precision, which can in some cases even cause the result to be negative (and the square root then to become undefined).
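A small, self-contained sketch of this effect (the data values and the large offset are arbitrary, chosen only so that the two subtracted terms become nearly equal in double precision):

```python
# Illustration of catastrophic cancellation when deriving the variance from
# a BIRCH-style summary (N, linear sum LS, square sum SS).
import numpy as np

x = 1e8 + np.array([0.0, 0.3, 0.7, 1.0])  # small spread around a large offset

N = len(x)
LS = x.sum()         # linear sum
SS = (x ** 2).sum()  # square sum

var_from_cf = SS / N - (LS / N) ** 2          # may come out inaccurate, zero, or negative
var_two_pass = ((x - x.mean()) ** 2).mean()   # numerically stable reference

print("variance from (N, LS, SS):", var_from_cf)
print("two-pass variance:        ", var_two_pass)
```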
This can be avoided by storing, in each clustering feature, the count N, the mean μ, and the sum of squared deviations S instead, based on numerically more reliable online algorithms to calculate variance.
When a vector (or, for full covariances, a matrix) of squared deviations is stored, the resulting BIRCH CF-tree can also be used to accelerate Gaussian mixture modeling with the expectation–maximization algorithm, in addition to k-means clustering and hierarchical agglomerative clustering.
The mean and the sum of squared deviations can be maintained with incremental update formulas (as in the online computation of the variance) that avoid the subtraction of two similar squared values.
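A minimal sketch of such a cluster feature, assuming the (N, mean, sum-of-squared-deviations) representation described above; the class and method names are illustrative and do not come from any particular implementation:

```python
# Sketch of a cluster feature storing (N, mean, sum of squared deviations),
# updated with Welford-style formulas so that no two similar squared values
# are ever subtracted. Names are illustrative, not from a specific library.
from dataclasses import dataclass
import numpy as np


@dataclass
class ClusterFeature:
    n: int = 0
    mean: np.ndarray = None   # per-dimension mean
    s: np.ndarray = None      # per-dimension sum of squared deviations

    def insert(self, x) -> None:
        """Add a single point (Welford's online update)."""
        x = np.asarray(x, dtype=float)
        if self.n == 0:
            self.n, self.mean, self.s = 1, x.copy(), np.zeros_like(x)
            return
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.s += delta * (x - self.mean)

    def merge(self, other: "ClusterFeature") -> None:
        """Absorb another feature (parallel-variance merge formula)."""
        if other.n == 0:
            return
        if self.n == 0:
            self.n, self.mean, self.s = other.n, other.mean.copy(), other.s.copy()
            return
        n = self.n + other.n
        delta = other.mean - self.mean
        self.mean += delta * (other.n / n)
        self.s += other.s + delta * delta * (self.n * other.n / n)
        self.n = n

    def variance(self):
        """Per-dimension variance; no cancellation-prone subtraction involved."""
        return self.s / self.n if self.n > 0 else None
```

Merging two such features when subclusters are combined uses only counts, sums, and one squared difference of means, so no two nearly equal squared quantities are subtracted.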