Leimkuhler–Matthews method

In mathematics, the Leimkuhler–Matthews method (or LM method in its original paper [1]) is an algorithm for finding discretized solutions to the Brownian dynamics

    dX(t) = -\nabla V(X(t))\,dt + \sigma\,dW(t),

where \sigma > 0 is a constant, V(x) is an energy (potential) function, and W(t) denotes a standard Wiener process.

This stochastic differential equation has solutions (denoted X(t) at time t) that are distributed according to \pi(x) \propto \exp(-2V(x)/\sigma^2) in the limit of large time, making solving these dynamics relevant in sampling-focused applications such as classical molecular dynamics and machine learning.

Given a time step \Delta t > 0, the Leimkuhler–Matthews update scheme is compactly written as

    X_{t+\Delta t} = X_t - \nabla V(X_t)\,\Delta t + \frac{\sigma\sqrt{\Delta t}}{2}\,(R_t + R_{t+\Delta t}),

with initial condition X_0 := X(0), where R_t is a vector of independent standard normal random numbers redrawn at each step, so that \mathbb{E}[R_t R_s^T] = \delta_{ts} I (with \delta_{ts} the Kronecker delta).
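As a minimal sketch, the update can be implemented in a few lines of NumPy. The names here (leimkuhler_matthews_step, grad_v) and the quadratic test potential are illustrative choices, not taken from the original paper; the key detail is that the noise vector drawn at one step is reused at the next.

```python
import numpy as np

def leimkuhler_matthews_step(x, r_prev, grad_v, sigma, dt, rng):
    """One LM update: x <- x - grad_v(x)*dt + (sigma*sqrt(dt)/2)*(r_prev + r_next),
    where r_prev is the noise vector drawn during the previous step."""
    r_next = rng.standard_normal(x.shape)  # fresh N(0, I) draw
    x_new = x - grad_v(x) * dt + 0.5 * sigma * np.sqrt(dt) * (r_prev + r_next)
    return x_new, r_next  # r_next becomes r_prev at the next step

# Illustrative example: quadratic potential V(x) = |x|^2 / 2, so grad_v(x) = x.
rng = np.random.default_rng(0)
x = np.zeros(3)
r = rng.standard_normal(x.shape)  # initial noise draw R_0
for _ in range(1000):
    x, r = leimkuhler_matthews_step(x, r, lambda y: y,
                                    sigma=np.sqrt(2.0), dt=0.1, rng=rng)
```

Note that each step makes exactly one evaluation of grad_v and one fresh normal draw, the same cost as an Euler–Maruyama step.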

Despite being of equal cost to the Euler–Maruyama scheme (in terms of the number of evaluations of the function \nabla V(x) per update), solutions have been shown [2] to have a superconvergence property: under some assumptions, the error in computed expectations satisfies

    |\mathbb{E}[f(X_t)] - \mathbb{E}[f(X(t))]| \le c_1\,\Delta t\,e^{-\lambda t} + c_2\,\Delta t^2

for constants c_1, c_2, \lambda > 0 not depending on \Delta t. Thus as t gets large we obtain an effective second order, with O(\Delta t^2) error in computed expectations.

For small time step \Delta t, this can give significant improvements over the Euler–Maruyama scheme at no extra cost.

The obvious method for comparison is the Euler–Maruyama scheme, as it has the same cost, requiring one evaluation of \nabla V(x) per update. Its update is of the form

    X^{EM}_{t+\Delta t} = X^{EM}_t - \nabla V(X^{EM}_t)\,\Delta t + \sigma\sqrt{\Delta t}\,R_t,

with error (given some assumptions [3]) of the form |\mathbb{E}[f(X^{EM}_t)] - \mathbb{E}[f(X(t))]| \le C\,\Delta t for a constant C > 0. Compared to the definition above, the only difference between the schemes is the one-step averaged noise term (R_t + R_{t+\Delta t})/2 in place of R_t, making the LM scheme simple to implement.
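The difference between the two schemes can be illustrated on an Ornstein–Uhlenbeck test problem, V(x) = x^2/2 with \sigma^2 = 2, whose exact stationary variance is 1. This sketch (variable names and the test problem are our own, not from the references) runs many independent chains of each scheme with the same step size and compares the resulting variances; the only difference between the two update lines is the averaged noise term.

```python
import numpy as np

sigma, dt, n_steps, n_chains = np.sqrt(2.0), 0.2, 200, 100_000
grad_v = lambda x: x  # V(x) = x^2/2: Ornstein-Uhlenbeck test problem

rng = np.random.default_rng(1)
x_em = np.zeros(n_chains)  # Euler-Maruyama chains
x_lm = np.zeros(n_chains)  # Leimkuhler-Matthews chains
r = rng.standard_normal(n_chains)  # R_0, reused by the LM scheme
for _ in range(n_steps):
    r_next = rng.standard_normal(n_chains)
    # Euler-Maruyama: noise term sigma*sqrt(dt)*R_t
    x_em = x_em - grad_v(x_em) * dt + sigma * np.sqrt(dt) * r_next
    # Leimkuhler-Matthews: averaged noise (R_t + R_{t+dt})/2, same cost
    x_lm = x_lm - grad_v(x_lm) * dt + 0.5 * sigma * np.sqrt(dt) * (r + r_next)
    r = r_next

var_em, var_lm = x_em.var(), x_lm.var()  # exact stationary variance is 1
```

For this linear problem the Euler–Maruyama chains equilibrate to variance \sigma^2/(2 - \Delta t) \approx 1.11, visibly biased, while the LM chains reproduce the exact stationary variance.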

For sufficiently small time step \Delta t (and sufficiently large time t), it is clear that the LM scheme gives a smaller error than Euler–Maruyama.

While there are many algorithms that can give reduced error compared to the Euler scheme (see e.g. Milstein, Runge-Kutta or Heun's method) these almost always come at an efficiency cost, requiring more computation in exchange for reducing the error.

However, the Leimkuhler–Matthews scheme can give significantly reduced error with minimal change to the standard Euler scheme.

The trade-off comes from the (relatively) limited scope of the stochastic differential equation it solves: \sigma must be a scalar constant, and the drift function must be the gradient of a potential, i.e. of the form -\nabla V(x).

The LM scheme is also not Markovian, as each update requires more than just the state at time t: the noise vector R_{t+\Delta t} is reused at the following step.

However, we can recast the scheme as a Markov process by extending the state space with a momentum vector p_t, so that the overall state at time t is (X_t, p_t). Initializing the momentum to be a vector of standard normal random numbers, we have

    X_t' = X_t - \nabla V(X_t)\,\Delta t + \frac{\sigma\sqrt{\Delta t}}{2}\,p_t,
    p_{t+\Delta t} = R_{t+\Delta t},
    X_{t+\Delta t} = X_t' + \frac{\sigma\sqrt{\Delta t}}{2}\,p_{t+\Delta t},

where the middle step completely redraws the momentum so that each component is an independent standard normal random number. Substituting p_t = R_t recovers the original update.

The algorithm has application in any area where the weak (i.e. average) properties of solutions to Brownian dynamics are required.

This applies to any molecular simulation problem (such as classical molecular dynamics), but can also apply to statistical sampling problems due to the properties of solutions at large times.

Thus we can generate independent samples according to a required probability density \bar\pi(x) by using

    V(x) = -\frac{\sigma^2}{2}\,\log\bar\pi(x)

and running the LM algorithm until t is large. Such strategies can be efficient in (for instance) Bayesian inference problems.
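As a sketch of this sampling strategy, the following targets an equal mixture of two unit-variance Gaussians (a toy stand-in for a multimodal posterior; the target, step size, and run length are our own illustrative choices). Taking \sigma = \sqrt{2} makes the stationary density proportional to \exp(-V(x)), so we set V(x) = -\log\bar\pi(x).

```python
import numpy as np

# Target: equal mixture of N(-2, 1) and N(2, 1). With sigma = sqrt(2) the
# stationary density is proportional to exp(-V(x)), so V(x) = -log(pi(x)).
a, sigma, dt = 2.0, np.sqrt(2.0), 0.05

def grad_v(x):
    """-d/dx log pi(x) for the two-Gaussian mixture target."""
    e1 = np.exp(-0.5 * (x - a) ** 2)
    e2 = np.exp(-0.5 * (x + a) ** 2)
    return ((x - a) * e1 + (x + a) * e2) / (e1 + e2)

rng = np.random.default_rng(3)
x = rng.standard_normal(50_000)  # many chains at once, started from N(0, 1)
r = rng.standard_normal(x.shape)
for _ in range(400):  # run to large t so the chains reach the target
    r_next = rng.standard_normal(x.shape)
    x = x - grad_v(x) * dt + 0.5 * sigma * np.sqrt(dt) * (r + r_next)
    r = r_next
# the mixture has mean 0 and variance 1 + a**2 = 5
```

The final ensemble of chain states then approximates draws from the bimodal target, with the LM averaged-noise term keeping the discretization bias small.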

Figure: A comparison between the Euler–Maruyama and Leimkuhler–Matthews schemes. The distribution of computed solutions at a fixed time for each method, using the same discretization and initializing from a normal distribution. The target distribution (in black) is the sum of two Gaussian distributions with distinct centers. While the Euler–Maruyama scheme results in a visible discretization error in the sampled distribution, the Leimkuhler–Matthews scheme performs significantly better for no extra cost.