Hierarchical Dirichlet process

In statistics and machine learning, the hierarchical Dirichlet process (HDP) is a nonparametric Bayesian approach to clustering grouped data.

It uses a Dirichlet process for each group of data, with the Dirichlet processes for all groups sharing a base distribution which is itself drawn from a Dirichlet process. This allows the groups to share statistical strength via sharing of clusters. The fact that the base distribution is drawn from a Dirichlet process is important, because draws from a Dirichlet process are atomic probability measures, so the atoms of the shared base distribution appear in all of the group-level Dirichlet processes; since each atom corresponds to a cluster, clusters are shared across all groups.

It was developed by Yee Whye Teh, Michael I. Jordan, Matthew J. Beal and David Blei and published in the Journal of the American Statistical Association in 2006,[1] as a formalization and generalization of the infinite hidden Markov model published in 2002.

The HDP is a model for grouped data, meaning that the data items come in multiple distinct groups.

For example, in a topic model words are organized into documents, with each document formed by a bag (group) of words (data items).

Indexing the groups by $j = 1, \dots, J$, suppose each group consists of data items $x_{j1}, \dots, x_{jn_j}$.

The HDP is parameterized by a base distribution $H$ that governs the a priori distribution over data items, and a number of concentration parameters that govern the a priori number of clusters and amount of sharing across groups.

The $j$th group is associated with a random probability measure $G_j$ which has distribution given by a Dirichlet process:

$$G_j \mid G_0 \sim \operatorname{DP}(\alpha_j, G_0)$$

where $\alpha_j$ is the concentration parameter associated with the group, and $G_0$ is the base distribution shared across all groups. In turn, the common base distribution is itself Dirichlet process distributed:

$$G_0 \sim \operatorname{DP}(\alpha_0, H)$$

with concentration parameter $\alpha_0$ and base distribution $H$. Finally, to relate the Dirichlet processes to the observed data, each data item $x_{ji}$ is associated with a latent parameter $\theta_{ji}$:

$$\theta_{ji} \mid G_j \sim G_j$$
$$x_{ji} \mid \theta_{ji} \sim F(\theta_{ji})$$

The first line states that each parameter has a prior distribution given by $G_j$, while the second line states that each data item has a distribution $F(\theta_{ji})$ parameterized by its associated latent parameter.
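As a concrete illustration of this generative specification, the sketch below simulates data from an HDP mixture using the Chinese restaurant franchise representation, in which $G_0$ and the $G_j$ are integrated out. The Gaussian choices for $H$ and $F$, the concentration values, and all variable names are illustrative assumptions rather than part of the model definition.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the article):
# H = Normal(0, 10) for cluster parameters, F(theta) = Normal(theta, 1) for data.
J, n_per_group = 3, 50              # number of groups and items per group
alpha0, alpha_j = 1.0, 1.0          # top-level and group-level concentration parameters

dishes = []                         # shared atoms theta*_k, drawn from H as needed
dish_table_counts = []              # m_k: number of tables (across all groups) serving dish k

data = []
for j in range(J):
    table_sizes, table_dish = [], []    # tables of group j's "restaurant"
    xs = []
    for i in range(n_per_group):
        # Seat the customer: existing table with probability proportional to its size,
        # new table with probability proportional to alpha_j.
        weights = np.array(table_sizes + [alpha_j], dtype=float)
        t = rng.choice(len(weights), p=weights / weights.sum())
        if t == len(table_sizes):
            # New table: choose its dish from the shared menu
            # (existing dish with prob. proportional to m_k, new dish with prob. proportional to alpha0).
            dweights = np.array(dish_table_counts + [alpha0], dtype=float)
            k = rng.choice(len(dweights), p=dweights / dweights.sum())
            if k == len(dishes):
                dishes.append(rng.normal(0.0, 10.0))    # new atom theta*_k drawn from H
                dish_table_counts.append(0)
            dish_table_counts[k] += 1
            table_sizes.append(0)
            table_dish.append(k)
        table_sizes[t] += 1
        xs.append(rng.normal(dishes[table_dish[t]], 1.0))   # x_ji ~ F(theta_ji)
    data.append(xs)

print("number of clusters shared across all groups:", len(dishes))
```

Running this produces groups that reuse the same small set of cluster parameters with group-specific frequencies, which is exactly the sharing behaviour the hierarchical construction is designed to achieve.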

The resulting model above is called an HDP mixture model, with the HDP referring to the hierarchically linked set of Dirichlet processes, and the mixture model referring to the way the Dirichlet processes are related to the data items.

To understand how the HDP implements a clustering model, and how clusters become shared across groups, recall that draws from a Dirichlet process are atomic probability measures with probability one.

This means that the common base distribution $G_0$ has a form which can be written as:

$$G_0 = \sum_{k=1}^{\infty} \pi_{0k}\, \delta_{\theta^*_k}$$

where there are an infinite number of atoms $\theta^*_k$, $k = 1, 2, \dots$, assuming that the overall base distribution $H$ has infinite support. Each atom is associated with a mass $\pi_{0k}$, and the masses sum to one since $G_0$ is a probability measure. Since $G_0$ is itself the base distribution for the group-specific Dirichlet processes, each $G_j$ has atoms given by the atoms of $G_0$, and can itself be written in the form:

$$G_j = \sum_{k=1}^{\infty} \pi_{jk}\, \delta_{\theta^*_k}$$

Thus the set of atoms is shared across all groups, with each group having its own group-specific atom masses. Relating this representation back to the observed data, we see that each data item is described by a mixture model:

$$x_{ji} \mid G_j \sim \sum_{k=1}^{\infty} \pi_{jk}\, F(\theta^*_k)$$

where the atoms $\theta^*_k$ play the role of the mixture component parameters, while the masses $\pi_{jk}$ play the role of the mixing proportions.

In conclusion, each group of data is modeled using a mixture model, with mixture components shared across all groups but mixing proportions being group-specific.
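To make the shared-atoms picture concrete, here is a minimal simulation sketch that truncates the stick-breaking construction of $G_0$ to a finite number of atoms $K$; conditional on that truncation, $G_0$ has finite support, so each group's weights are exactly Dirichlet distributed with parameter $\alpha_j \pi_0$. The Gaussian choices for $H$ and $F$ and all constants are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

K, J, n = 20, 3, 200        # truncation level, number of groups, items per group (illustrative)
alpha0, alpha_j = 1.0, 1.0  # concentration parameters (illustrative)

# Truncated stick-breaking for G_0: atoms from H = Normal(0, 10), weights pi_0.
betas = rng.beta(1.0, alpha0, size=K)
sticks = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
pi0 = betas * sticks
pi0 /= pi0.sum()                        # renormalise the truncated weights
atoms = rng.normal(0.0, 10.0, size=K)   # theta*_k, shared by every group

for j in range(J):
    # Given the truncated (finite-support) G_0, G_j ~ DP(alpha_j, G_0) has weights
    # pi_j ~ Dirichlet(alpha_j * pi_0) over the very same atoms.
    pi_j = rng.dirichlet(alpha_j * pi0)
    z = rng.choice(K, size=n, p=pi_j)    # cluster assignments for group j
    x = rng.normal(atoms[z], 1.0)        # x_ji ~ F(theta*_z)
    print(f"group {j}: {len(np.unique(z))} active clusters out of {K} shared atoms")
```

The atoms are drawn once and reused by every group; only the mixing proportions $\pi_j$ differ across groups, mirroring the representation above.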

The HDP mixture model is a natural nonparametric generalization of Latent Dirichlet allocation, where the number of topics can be unbounded and learnt from data.[1] Here each group is a document consisting of a bag of words, each cluster is a topic, and each document is a mixture of topics.
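As a usage sketch of this topic-modelling application, the snippet below fits the online variational HDP implementation available in the gensim library to a toy corpus. The corpus, the parameter defaults, and the choice of gensim are illustrative assumptions and not part of the original formulation.

```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel

# Toy corpus: each document is a bag of words (one group of data items).
docs = [
    ["bayesian", "nonparametric", "dirichlet", "process", "cluster"],
    ["topic", "model", "document", "word", "dirichlet"],
    ["hidden", "markov", "model", "state", "sequence"],
]

dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

# Fit an HDP topic model; the number of topics is not fixed in advance.
hdp = HdpModel(corpus=corpus, id2word=dictionary)

for topic in hdp.print_topics(num_topics=5, num_words=5):
    print(topic)
```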

The HDP is also a core component of the infinite hidden Markov model,[3] which is a nonparametric generalization of the hidden Markov model allowing the number of states to be unbounded and learnt from data.[1][4]
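A minimal sketch of the idea behind the infinite hidden Markov model (the HDP-HMM) is given below, again using a finite truncation: a shared top-level weight vector over states is drawn by stick-breaking, and each state's transition distribution is a Dirichlet process draw with that shared vector as its base measure. All constants and the Gaussian emission model are assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

K, T = 15, 300                 # truncation level and sequence length (illustrative)
gamma, alpha = 2.0, 4.0        # top-level and per-state concentrations (illustrative)

# Shared top-level weights over states, via truncated stick-breaking.
b = rng.beta(1.0, gamma, size=K)
beta = b * np.concatenate(([1.0], np.cumprod(1.0 - b[:-1])))
beta /= beta.sum()

# Each state's transition distribution is a DP draw with base measure beta;
# under the truncation, row k is Dirichlet(alpha * beta) over the shared states.
trans = np.vstack([rng.dirichlet(alpha * beta) for _ in range(K)])
means = rng.normal(0.0, 5.0, size=K)      # Gaussian emission parameters drawn from H

state = rng.choice(K, p=beta)
states, obs = [], []
for t in range(T):
    states.append(state)
    obs.append(rng.normal(means[state], 1.0))
    state = rng.choice(K, p=trans[state])

print("states actually visited:", len(set(states)))
```

Because every row of the transition matrix shares the same base weights, only a data-dependent number of states is ever visited, which is how the number of states is effectively learnt from data.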

The HDP can be generalized in a number of directions. The Dirichlet processes can be replaced by Pitman-Yor processes and gamma processes, resulting in the hierarchical Pitman-Yor process and the hierarchical gamma process, and the hierarchy can be made deeper, with multiple levels of groups arranged in a hierarchy. Such an arrangement has been exploited in the sequence memoizer, a Bayesian nonparametric model for sequences which has a multi-level hierarchy of Pitman-Yor processes.

In addition, the Bayesian Multi-Domain Learning (BMDL) model derives domain-dependent latent representations of overdispersed count data based on hierarchical negative binomial factorization, enabling accurate cancer subtyping even when the number of samples for a specific cancer type is small.