Metadynamics

Metadynamics (MTD) was first suggested by Alessandro Laio and Michele Parrinello in 2002[1] and is usually applied within molecular dynamics simulations.

MTD closely resembles a number of newer methods such as adaptively biased molecular dynamics,[2] adaptive reaction coordinate forces,[3] and local elevation umbrella sampling.[4]

More recently, both the original and the well-tempered metadynamics[5] were derived in the context of importance sampling and shown to be a special case of the adaptive biasing potential setting.[7]

The technique builds on a large number of related methods, including (in chronological order) the deflation,[8] tunneling,[9] tabu search,[10] local elevation,[11] conformational flooding,[12] Engkvist–Karlström[13] and Adaptive Biasing Force methods.[14]

Metadynamics has been informally described as "filling the free energy wells with computational sand".

During the evolution of the simulation, more and more Gaussians sum up, increasingly discouraging the system from revisiting its previous steps, until the system has explored the full energy landscape. At this point the modified free energy becomes a constant as a function of the collective variables, which is why the collective variables start to fluctuate heavily.

The energy landscape can then be recovered as the negative of the sum of all the Gaussians.[16]

Metadynamics has the advantage, over methods like adaptive umbrella sampling, of not requiring an initial estimate of the energy landscape to explore.
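The fill-and-recover procedure described above can be sketched for a one-dimensional toy model; the double-well potential, kernel height and width, and deposition stride below are illustrative choices, not prescribed values:

```python
import math
import random

# Toy 1D system: double-well potential U(x) = (x^2 - 1)^2 with a barrier at x = 0.
def force(x):                       # -dU/dx
    return -4.0 * x * (x * x - 1.0)

# Deposited Gaussian hills: centres, with fixed height W and width SIGMA.
hills = []
W, SIGMA = 0.1, 0.2

def bias(x):                        # V_bias(x): sum of deposited hills
    return sum(W * math.exp(-(x - c) ** 2 / (2 * SIGMA ** 2)) for c in hills)

def bias_force(x):                  # -dV_bias/dx
    return sum(W * (x - c) / SIGMA ** 2 * math.exp(-(x - c) ** 2 / (2 * SIGMA ** 2))
               for c in hills)

# Overdamped Langevin dynamics; a hill is deposited every 50 steps.
random.seed(1)
x, dt, kT, x_max = -1.0, 5e-3, 0.5, -1.0
for step in range(20000):
    x += (force(x) + bias_force(x)) * dt + math.sqrt(2 * kT * dt) * random.gauss(0, 1)
    if step % 50 == 0:
        hills.append(x)             # "filling the well with computational sand"
    x_max = max(x_max, x)

# Free-energy estimate: F(x) is approximately -V_bias(x), up to a constant.
f_est = [-bias(x0 / 10) for x0 in range(-15, 16)]
```

As the hills accumulate, the walker is pushed over the barrier and eventually visits both wells, after which the negated bias traces the underlying double-well shape.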

Typically, it requires several trials to find a good set of collective variables (CVs), but several automatic procedures have been proposed: essential coordinates,[17] Sketch-Map,[18] and non-linear data-driven collective variables.[19]

Independent metadynamics simulations (replicas) can be coupled together to improve usability and parallel performance.

Some of these coupled schemes[23] are similar to the parallel tempering method and use replica exchanges to improve sampling.

In practice, metadynamics is limited to a modest number of CVs. This limitation comes from the bias potential, which is constructed by adding Gaussian functions (kernels): the number of kernels needed to cover the CV space at constant accuracy grows exponentially with its dimensionality, so the MTD simulation length has to increase exponentially with the number of CVs to maintain the same accuracy of the bias potential.

Also, for fast evaluation, the bias potential is typically approximated with a regular grid.[27]

The memory required to store the grid likewise increases exponentially with the number of dimensions (CVs).
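As a back-of-the-envelope illustration of this scaling (the 100 bins per CV and 8 bytes per grid value are arbitrary assumptions):

```python
# Memory for a regular bias-potential grid: bins_per_cv ** n_cvs values,
# stored as 8-byte floats (illustrative assumptions, not fixed by MTD).
BINS_PER_CV = 100
BYTES_PER_VALUE = 8

def grid_bytes(n_cvs):
    """Bytes needed to store the bias grid for n_cvs collective variables."""
    return BYTES_PER_VALUE * BINS_PER_CV ** n_cvs

# 1 CV -> 800 bytes; 2 CVs -> 80 kB; 4 CVs -> 800 MB; 6 CVs -> 8 TB.
sizes = {n: grid_bytes(n) for n in (1, 2, 4, 6)}
```

Under these assumptions the grid becomes impractical beyond a handful of CVs, which is the motivation for grid-free representations of the bias.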

A memory-efficient alternative for representing the bias has been proposed,[28] based on two machine learning algorithms: the nearest-neighbor density estimator (NNDE) and the artificial neural network (ANN).

The ANN provides a memory-efficient representation of high-dimensional functions, in which the derivatives (the biasing forces) are computed efficiently with the backpropagation algorithm.[30]

In 2015, White, Dama, and Voth introduced experiment-directed metadynamics, a method that allows molecular dynamics simulations to be shaped to match a desired free energy surface.

This technique guides the simulation towards conformations that align with experimental data, enhancing our understanding of complex molecular systems and their behavior.

The on-the-fly probability enhanced sampling (OPES) method[37] has only a few robust parameters, converges faster than metadynamics, and has a straightforward reweighting scheme.[38]

In 2024, a replica-exchange variant of OPES was developed, named OneOPES,[39] designed to exploit a thermal gradient and multiple CVs to sample large biochemical systems with many degrees of freedom.

This variant aims to address the challenge of describing such systems, where the numerous degrees of freedom are often difficult to capture with only a few CVs.

The form of the potential energy function (e.g. two local minima separated by a high-energy barrier) can prevent ergodic sampling with molecular dynamics or Monte Carlo methods.

In the long-time limit, the accumulated bias potential converges to the free energy with opposite sign (up to an irrelevant constant C):

V(s, t → ∞) = −F(s) + C

The bias potential becomes a sum of the kernel functions centred at the instantaneous collective-variable values s(t′):

V(s, t) = Σ_{t′ = τ, 2τ, …; t′ < t} W K(s − s(t′))

Typically, the kernel is a multi-dimensional Gaussian function whose covariance matrix has non-zero elements only on the diagonal:

K(Δs) = exp(−Σ_i Δs_i² / (2σ_i²))

The parameters W (the kernel height), σ_i (the kernel widths), and τ (the deposition period) control how quickly the landscape is filled and how accurately it is resolved.
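A direct evaluation of such a Gaussian-kernel bias can be sketched as follows; the kernel height, per-CV widths, and centre list are arbitrary example values:

```python
import math

# V(s) = sum over deposited centres c of W * exp(-sum_i (s_i - c_i)^2 / (2 sigma_i^2)),
# i.e. Gaussian kernels with a diagonal covariance matrix.
W = 1.2                  # kernel height (example value)
SIGMA = (0.3, 0.5)       # per-CV kernel widths (example values)

def bias(s, centres):
    """Evaluate the deposited bias at CV point s (a tuple, one entry per CV)."""
    total = 0.0
    for c in centres:
        expo = sum((s[i] - c[i]) ** 2 / (2 * SIGMA[i] ** 2) for i in range(len(s)))
        total += W * math.exp(-expo)
    return total

# Two kernels deposited near the origin of a 2-CV space.
centres = [(0.0, 0.0), (0.1, -0.2)]
```

Each deposited kernel raises the bias most strongly at its own centre, so the bias is largest in the regions the system has already visited.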

The finite size of the kernel makes the bias potential fluctuate around a mean value.

A converged free energy can be obtained by averaging the bias potential.
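The averaging step can be sketched as follows; the snapshot values are synthetic stand-ins for bias grids saved at several late times of a run:

```python
# Late-time snapshots of the bias on a small CV grid (synthetic example data).
snapshots = [
    [1.00, 0.52, 0.11],
    [1.04, 0.48, 0.09],
    [0.96, 0.50, 0.10],
]

# Average the snapshots pointwise to damp the kernel-induced fluctuations.
n = len(snapshots)
avg_bias = [sum(col) / n for col in zip(*snapshots)]

# F(s) is estimated as minus the averaged bias, up to an additive constant.
free_energy = [-v for v in avg_bias]
```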

The PLUMED library has a flexible object-oriented design[48][49] and can be interfaced with several MD programs (AMBER, GROMACS, LAMMPS, NAMD, Quantum ESPRESSO, DL_POLY_4, CP2K, and OpenMM).[50][51]

Other MTD implementations exist in the Collective Variables Module[52] (for LAMMPS, NAMD, and GROMACS), ORAC, CP2K,[53] EDM,[54] and Desmond.