Stochastic simulation

[1] Realizations of these random variables are generated and inserted into a model of the system.

Outputs of the model are recorded, and then the process is repeated with a new set of random values.

In the end, the distribution of the outputs shows the most probable estimates as well as the ranges of values the outputs are more or less likely to fall within.
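
As a rough illustration of this workflow, the following Python sketch draws random inputs, evaluates a simple model, and summarizes the output distribution; the model function and the input distributions are hypothetical, chosen only for the example.

import random
import statistics

def model(x, y):
    # Hypothetical system model: any deterministic function of the random inputs.
    return x * y + 1.0

# Generate realizations of the random inputs, run the model, record the outputs.
outputs = []
for _ in range(10_000):
    x = random.gauss(0.0, 1.0)      # input assumed normally distributed
    y = random.uniform(0.5, 1.5)    # input assumed uniformly distributed
    outputs.append(model(x, y))

# The empirical distribution of the outputs indicates the most probable values
# and the ranges the output is likely to fall within.
print("mean:", statistics.mean(outputs))
print("stdev:", statistics.stdev(outputs))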

Next, the cumulative sum of the array is taken, and the final cell contains the number R, where R is the total event rate.
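
A minimal sketch of this selection step, assuming the per-event rates have already been collected into a list (the rate values are hypothetical):

import random
from itertools import accumulate

rates = [0.3, 1.2, 0.5, 2.0]          # hypothetical per-event rates
cumulative = list(accumulate(rates))   # cumulative sum of the rate array
R = cumulative[-1]                     # final cell: the total event rate

# Choose the next event: draw z uniformly from [0, R) and take the first index
# whose cumulative rate exceeds z.
z = random.random() * R
event = next(i for i, c in enumerate(cumulative) if z < c)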

A Bernoulli-distributed random variable X with parameter p can be simulated from the uniform distribution provided by a random number generator (RNG) by having X = 1 when the generated number is less than p and X = 0 otherwise.

[6] A binomially distributed random variable Y with parameters n and p is obtained as the sum of n independent, identically distributed Bernoulli random variables X1, X2, ..., Xn.[4] Example: a coin is tossed three times; the number of heads is then binomially distributed with n = 3 and p = 1/2.
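
A short sketch of both constructions, using the uniform output of Python's random module; the coin-toss example corresponds to n = 3 and p = 1/2.

import random

def bernoulli(p):
    # X = 1 if the uniform RNG output falls below p, otherwise X = 0.
    return 1 if random.random() < p else 0

def binomial(n, p):
    # Y is the sum of n independent Bernoulli(p) random variables.
    return sum(bernoulli(p) for _ in range(n))

# Example: three coin tosses; the number of heads is Binomial(3, 0.5).
heads = binomial(3, 0.5)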

[2][8] The probability distribution for Poisson processes with constant rate λ per time interval is given by the following equation: P(k events in an interval) = λ^k e^(−λ) / k!
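
The sketch below evaluates this probability mass function and draws Poisson-distributed counts by counting exponentially distributed inter-arrival times within one unit interval, which is one standard sampling construction shown here only as an illustration.

import math
import random

def poisson_pmf(k, lam):
    # P(k events in an interval) = lam**k * exp(-lam) / k!
    return lam ** k * math.exp(-lam) / math.factorial(k)

def poisson_sample(lam):
    # Count exponential(lam) inter-arrival times until they spill past one unit interval.
    t, k = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t > 1.0:
            return k
        k += 1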

Gillespie’s Stochastic Simulation Algorithm (SSA) is essentially an exact procedure for numerically simulating the time evolution of a well-stirred chemically reacting system by taking proper account of the randomness inherent in such a system.

[10] It is rigorously based on the same microphysical premise that underlies the chemical master equation and gives a more realistic representation of a system’s evolution than the deterministic reaction rate equation (RRE) represented mathematically by ODEs.

[10] As with the chemical master equation, the SSA converges, in the limit of large numbers of reactants, to the same solution as the law of mass action.
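
A minimal sketch of the direct method for a hypothetical birth-death system (0 → X with rate k_birth, X → 0 with propensity k_death·X); the species, rate constants, and propensity functions are illustrative only, not taken from the source.

import random

def ssa(x0, t_end, k_birth=1.0, k_death=0.1):
    """Gillespie direct method for the birth-death process 0 -> X, X -> 0."""
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        # Propensities of the two reactions in the current state.
        a = [k_birth, k_death * x]
        a0 = sum(a)
        if a0 == 0.0:
            break
        # The time to the next reaction is exponentially distributed with rate a0.
        t += random.expovariate(a0)
        # Choose which reaction fires, with probability proportional to its propensity.
        if random.random() * a0 < a[0]:
            x += 1          # birth
        else:
            x -= 1          # death
        trajectory.append((t, x))
    return trajectory

path = ssa(x0=10, t_end=100.0)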

These methods sort the cumulative array to reduce the average search depth of the algorithm.

The former runs a presimulation to estimate the firing frequency of reactions, whereas the latter sorts the cumulative array on-the-fly.

This method performs a binary search on the cumulative array, reducing the worst-case time complexity of reaction sampling to O(log M), where M is the number of reactions.
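
A sketch of this idea using Python's bisect module on the same kind of cumulative array as above (the propensity values are again hypothetical):

import random
from bisect import bisect_right
from itertools import accumulate

rates = [0.3, 1.2, 0.5, 2.0]          # hypothetical reaction propensities
cumulative = list(accumulate(rates))
R = cumulative[-1]

# bisect_right performs the search in O(log M) instead of a linear scan.
z = random.random() * R
reaction = bisect_right(cumulative, z)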

Since the SSA method keeps track of each individual transition, it can be impractical for certain applications because of its high computational cost.

Gillespie proposed an approximation procedure, the tau-leaping method, which decreases the computational time with minimal loss of accuracy.

[13] Instead of taking incremental steps in time, keeping track of X(t) at each time step as in the SSA method, the tau-leaping method leaps from one subinterval to the next, approximating how many transitions take place during a given subinterval.

The tau-leaping method thus has the advantage of simulating many transitions in one leap while not losing significant accuracy, resulting in a speed up in computational time.
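
A sketch of tau-leaping for the same hypothetical birth-death system as above, drawing the number of firings of each reaction in a subinterval of length tau from a Poisson distribution with mean a_j·tau; the Poisson sampler uses Knuth's multiplication method, which is adequate for the small means in this example.

import math
import random

def poisson(mu):
    # Knuth's method: count uniforms whose running product stays above exp(-mu).
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def tau_leap(x0, t_end, tau=0.5, k_birth=1.0, k_death=0.1):
    """Tau-leaping for the birth-death process 0 -> X, X -> 0."""
    t, x = 0.0, x0
    trajectory = [(t, x)]
    while t < t_end:
        a = [k_birth, k_death * x]
        # Approximate the number of firings of each reaction during [t, t + tau)
        # by independent Poisson draws with means a_j * tau.
        births = poisson(a[0] * tau)
        deaths = poisson(a[1] * tau)
        x = max(x + births - deaths, 0)   # clamp to avoid negative populations
        t += tau
        trajectory.append((t, x))
    return trajectory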

The model with the replaced transition rates can thus be solved, for instance, with the conventional SSA.

[16] Examples of continuous systems are the predator/prey model[17] or cart-pole balancing.[18]

The random variable X is said to be normally distributed with parameters μ and σ, abbreviated by X ∈ N(μ, σ²), if the density of the random variable is given by the formula[4]

f(x) = 1/(σ√(2π)) · exp(−(x − μ)² / (2σ²)), x ∈ ℝ.
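
One standard way to draw such samples from uniform random numbers is the Box-Muller transform, sketched below; it is shown only as an illustration, and Python's random.gauss provides the same functionality directly.

import math
import random

def normal(mu, sigma):
    # Box-Muller transform: two independent uniforms yield a standard normal draw.
    u1, u2 = random.random(), random.random()
    z = math.sqrt(-2.0 * math.log(1.0 - u1)) * math.cos(2.0 * math.pi * u2)
    return mu + sigma * z

sample = normal(0.0, 1.0)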

The exponential distribution is popular, for example, in queuing theory when we want to model the time we have to wait until a certain event takes place.
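
For example, an exponentially distributed waiting time with rate λ can be drawn from a uniform random number by the inverse transform T = −ln(U)/λ, as in the sketch below (random.expovariate does the same thing).

import math
import random

def exponential_wait(lam):
    # Inverse transform sampling: T = -ln(1 - U) / lam for U ~ Uniform(0, 1).
    u = random.random()
    return -math.log(1.0 - u) / lam

wait = exponential_wait(lam=2.0)   # mean waiting time of 0.5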

For large values of n, the t-distribution does not differ significantly from the standard normal distribution.

If we need to answer all of the questions, or if we do not know what purposes the model is going to be used for, it is convenient to apply a combined continuous/discrete methodology.

[21] The use of this technique enables the capturing of noise due to small copy numbers, while being much faster to simulate than the conventional Gillespie algorithm.

Furthermore, the use of the deterministic continuum description enables the simulation of arbitrarily large systems.

The main idea is that if we need to know the average value of some random variable whose distribution cannot be stated analytically, but it is possible to take samples from that distribution, we can estimate the average by taking the samples independently and averaging them.

If there are sufficient samples, then the law of large numbers says the average must be close to the true value.

The central limit theorem says that the average has an approximately Gaussian distribution around the true value.

[22] As a simple example, suppose we need to measure the area of a shape with a complicated, irregular outline.
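
A hit-or-miss Monte Carlo sketch of this idea: throw uniform random points into a bounding box and multiply the box area by the fraction that lands inside the shape. The unit circle below is only a stand-in for the irregular shape, so the estimate approaches π.

import random

def inside_shape(x, y):
    # Stand-in for the irregular shape: a unit circle centred at the origin.
    return x * x + y * y <= 1.0

def estimate_area(n_samples=100_000):
    # The bounding box [-1, 1] x [-1, 1] has area 4; the estimate is
    # (box area) * (fraction of random points that fall inside the shape).
    hits = sum(
        inside_shape(random.uniform(-1, 1), random.uniform(-1, 1))
        for _ in range(n_samples)
    )
    return 4.0 * hits / n_samples

print(estimate_area())   # close to pi = 3.14159...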

The problem is that a computer is a highly deterministic machine: behind each process there is always an algorithm, a deterministic computation changing inputs into outputs; therefore it is not easy to generate uniformly spread random numbers over a defined interval or set.

[24] Methods for obtaining random numbers have existed for a long time and are used in many different fields (such as gaming).