In computational statistics, reversible-jump Markov chain Monte Carlo (RJMCMC) is an extension of standard Markov chain Monte Carlo (MCMC) methodology, introduced by Peter Green, which allows simulation (the drawing of samples) from the posterior distribution on spaces of varying dimension.[1] Thus, the simulation is possible even when the number of parameters in the model is not known.
The "jump" refers to the switching from one parameter space to another during the running of the chain.
RJMCMC is useful for comparing models of different dimension to see which one fits the data best. It is also useful for predicting new data points, because there is no need to choose and fix a single model: RJMCMC can predict new values under all the models at the same time.
Models that suit the data best will be chosen more frequently than the poorer ones.
Let $n_m \in N_m = \{1, 2, \ldots, I\}$ be a model indicator and $M = \bigcup_{n_m = 1}^{I} \mathbb{R}^{d_m}$ the parameter space, whose number of dimensions $d_m$ depends on the model $n_m$. The model indicator need not be finite. The stationary distribution is the joint posterior distribution of $(M, N_m)$, which takes the values $(m, n_m)$.
A proposal $m'$ can be constructed with a mapping $g_{1mm'}$ of $m$ and $u$, where $u$ is drawn from a random component $U$ with density $q$ on $\mathbb{R}^{d_{mm'}}$. The move to state $(m', n_{m'})$ can thus be formulated as

$(m', n_{m'}) = (g_{1mm'}(m, u), n_{m'}).$

The function $g_{mm'} \colon (m, u) \mapsto (m', u')$ must be one-to-one and differentiable, and have non-zero support, so that there exists an inverse function $g_{mm'}^{-1} = g_{m'm}$ that is differentiable.
Therefore, $(m, u)$ and $(m', u')$ must be of equal dimension, which is the case if the dimension criterion

$d_m + d_{mm'} = d_{m'} + d_{m'm}$

is met, where $d_{mm'}$ is the dimension of $u$. This is known as dimension matching.
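Dimension matching can be illustrated with a minimal sketch (the function names and numbers here are hypothetical, not from the source): a "birth" move from a one-parameter model to a two-parameter model, using a single auxiliary variable $u$, so that the criterion holds as $1 + 1 = 2 + 0$.

```python
# Hypothetical birth move between a 1-parameter model (d_m = 1) and a
# 2-parameter model (d_{m'} = 2). Dimension matching holds with one
# auxiliary variable u on the forward move (d_{mm'} = 1) and none on
# the reverse move (d_{m'm} = 0): 1 + 1 = 2 + 0.

def g_up(theta, u):
    # One-to-one, differentiable map (theta, u) -> (theta1, theta2);
    # its Jacobian determinant is 1.
    return theta, theta + u

def g_down(theta1, theta2):
    # Differentiable inverse g^{-1}: recovers (theta, u).
    return theta1, theta2 - theta1

theta, u = 0.5, 0.25
theta1, theta2 = g_up(theta, u)
assert g_down(theta1, theta2) == (theta, u)  # round trip recovers (theta, u)
```

Because the forward and reverse maps form an exact inverse pair of matching total dimension, the acceptance probability below is well defined for this move.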
If $\mathbb{R}^{d_m} \subset \mathbb{R}^{d_{m'}}$, then the dimension matching condition can be reduced to

$d_m + d_{mm'} = d_{m'}$

with

$(m, u) = g_{m'm}(m').$

The acceptance probability will be given by

$a(m, m') = \min\left(1, \frac{p_{m'm}\, p_{m'}\, f_{m'}(m')}{p_{mm'}\, q_{mm'}(m, u)\, p_m\, f_m(m)} \left| \det \frac{\partial g_{mm'}(m, u)}{\partial (m, u)} \right| \right),$

where $|\cdot|$ denotes the absolute value and $p_m f_m$ is the joint posterior probability

$p_m f_m = c^{-1}\, p(y \mid m, n_m)\, p(m \mid n_m)\, p(n_m),$

where $c$ is the normalising constant.
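In simple settings the acceptance probability collapses to a likelihood ratio. The sketch below (a toy example with hypothetical models and priors, not from the source) jumps between model 1, $y \sim N(0, 1)$ with no free parameter, and model 2, $y \sim N(\mu, 1)$: the dimension-matching map is the identity, so the Jacobian determinant is 1, and drawing $u$ from the prior on $\mu$ makes the prior and proposal densities cancel.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data with a clearly nonzero mean, so model 2 should dominate.
y = rng.normal(1.5, 1.0, size=50)

def log_norm(x, mu, sd):
    # Sum of log N(x | mu, sd^2) densities.
    return np.sum(-0.5 * np.log(2 * np.pi * sd**2) - 0.5 * ((x - mu) / sd) ** 2)

def log_lik(k, mu):
    # Model 1: y ~ N(0, 1) (no free parameter); model 2: y ~ N(mu, 1).
    return log_norm(y, 0.0 if k == 1 else mu, 1.0)

PRIOR_SD = 5.0          # prior on mu under model 2: N(0, PRIOR_SD^2)
k, mu = 1, 0.0          # start in the smaller model
counts = {1: 0, 2: 0}   # visit counts estimate posterior model probabilities

for _ in range(20000):
    if rng.uniform() < 0.5:            # attempt a trans-dimensional jump
        if k == 1:
            # Birth 1 -> 2: draw u from the prior on mu and set mu' = u.
            # The map is the identity (|det J| = 1), and the proposal
            # density cancels the prior on mu, leaving a likelihood ratio.
            u = rng.normal(0.0, PRIOR_SD)
            if np.log(rng.uniform()) < log_lik(2, u) - log_lik(1, mu):
                k, mu = 2, u
        else:
            # Death 2 -> 1: the reverse move, with the same cancellation.
            if np.log(rng.uniform()) < log_lik(1, 0.0) - log_lik(2, mu):
                k, mu = 1, 0.0
    elif k == 2:                       # within-model random-walk MH update
        prop = mu + rng.normal(0.0, 0.3)
        log_a = (log_lik(2, prop) - log_lik(2, mu)
                 + log_norm(prop, 0.0, PRIOR_SD)
                 - log_norm(mu, 0.0, PRIOR_SD))
        if np.log(rng.uniform()) < log_a:
            mu = prop
    counts[k] += 1

print(counts)
```

Since the data mean is far from zero, the chain should spend nearly all of its time in model 2, illustrating the point above that better-fitting models are visited more frequently.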
There is an experimental RJMCMC tool available for the open-source BUGS package.
The Gen probabilistic programming system automates the acceptance probability computation for user-defined reversible jump MCMC kernels as part of its Involution MCMC feature.