Stochastic control

The system designer assumes, in a Bayesian probability-driven fashion, that random noise with known probability distribution affects the evolution and observation of the state variables.

Here the model is linear, the objective function is the expected value of a quadratic form, and the disturbances are purely additive.

This property applies to all centralized systems with linear equations of evolution, a quadratic cost function, and noise entering the model only additively. The quadratic assumption allows the optimal control laws, which follow the certainty-equivalence property, to be linear functions of the observations of the controllers.
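Certainty equivalence can be seen directly in the computation: the optimal feedback gain comes from a backward Riccati recursion in which the additive-noise statistics never appear, so the controller is designed as if the noise were absent. A minimal sketch with illustrative matrices (a discretized double integrator, chosen here for concreteness, not taken from the text):

```python
import numpy as np

# Linear dynamics x_{t+1} = A x_t + B u_t + w_t with additive noise w_t,
# quadratic cost sum of x'Qx + u'Ru. Illustrative values only.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)          # state cost weight
R = np.array([[0.5]])  # control cost weight
T = 100                # horizon length

P = Q.copy()           # terminal cost-to-go
gains = []
for _ in range(T):
    # Backward Riccati step. Note that the covariance of w_t appears
    # nowhere: the optimal gain is the same as in the noise-free problem.
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ A - A.T @ P @ B @ K
    gains.append(K)

K0 = gains[-1]  # gain applied at time 0: u_0 = -K0 x_0, linear in the state
print(K0)
```

The control law u_t = -K_t x_t is a linear function of the state, as the certainty-equivalence property predicts; the noise only raises the achieved cost, not the choice of gain.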

Any deviation from the above assumptions—a nonlinear state equation, a non-quadratic objective function, noise in the multiplicative parameters of the model, or decentralization of control—causes the certainty equivalence property not to hold.

We assume that each element of the coefficient matrices A and B is independently and identically distributed through time, so the expectation operations need not be conditioned on time.
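A sketch of why certainty equivalence fails in this setting, assuming i.i.d. random coefficient matrices $(A_t, B_t)$ with known distribution (standard dynamic-programming notation, not reproduced from the text): the backward recursion now involves second moments of the coefficients rather than only their means,

```latex
u_t = -\left(R + \mathbb{E}\!\left[B^\top P_{t+1} B\right]\right)^{-1}
      \mathbb{E}\!\left[B^\top P_{t+1} A\right] x_t ,
\qquad
P_t = Q + \mathbb{E}\!\left[A^\top P_{t+1} A\right]
      - \mathbb{E}\!\left[A^\top P_{t+1} B\right]
        \left(R + \mathbb{E}\!\left[B^\top P_{t+1} B\right]\right)^{-1}
        \mathbb{E}\!\left[B^\top P_{t+1} A\right].
```

Since in general $\mathbb{E}[A^\top P B] \neq \mathbb{E}[A]^\top P\, \mathbb{E}[B]$, replacing the random coefficients by their expected values yields a different, suboptimal gain.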

Robust model predictive control is a more conservative method that considers the worst-case scenario in the optimization procedure.

The alternative method, stochastic model predictive control (SMPC), considers soft constraints that limit the risk of violation by a probabilistic inequality.
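One common form of such a probabilistic inequality is a chance constraint: require P(x ≤ x_max) ≥ 1 − ε rather than a hard bound. For scalar Gaussian uncertainty this can be converted into a tightened deterministic bound on the mean. A minimal sketch with hypothetical numbers (the limit, noise level, and risk level below are illustrative, not from the text):

```python
import math

def inv_norm_cdf(p):
    # Standard normal quantile by bisection on the CDF (the standard
    # library exposes math.erf but no inverse, so we invert numerically).
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x_max = 5.0    # hard limit on the state (hypothetical)
sigma = 0.8    # std. dev. of the additive disturbance (hypothetical)
eps = 0.05     # allowed probability of violating the limit

# Chance constraint P(x <= x_max) >= 1 - eps becomes a deterministic
# bound on the predicted mean:  x_bar <= x_max - z_{1-eps} * sigma.
z = inv_norm_cdf(1.0 - eps)
tightened = x_max - z * sigma
print(round(z, 3), round(tightened, 3))
```

The optimizer then enforces the tightened bound on the nominal prediction, trading a small, quantified violation risk for much less conservatism than the worst-case approach.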

The maximization, say of the expected logarithm of net worth at a terminal date T, is subject to stochastic processes on the components of wealth.[10]

There is no certainty equivalence as in the older literature, because the coefficients of the control variables—that is, the returns received by the chosen shares of assets—are stochastic.
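The effect can be illustrated in the simplest discrete analogue (a hypothetical two-point return distribution, not the continuous-time model discussed here): maximizing expected log wealth over the fraction invested in a risky asset gives the Kelly fraction, which depends on the whole return distribution, not just its mean, so no certainty-equivalent rule applies.

```python
import numpy as np

# Hypothetical one-period setup: invest fraction f of wealth in an asset
# returning r_up with probability p_up and r_dn with probability p_dn.
p_up, r_up = 0.6, 0.5
p_dn, r_dn = 0.4, -0.4

def expected_log_growth(f):
    # E[log(1 + f*r)]: the objective when maximizing expected log wealth.
    return p_up * np.log(1.0 + f * r_up) + p_dn * np.log(1.0 + f * r_dn)

# Grid search for the maximizing fraction.
fs = np.linspace(0.0, 0.99, 1000)
f_star = fs[int(np.argmax([expected_log_growth(f) for f in fs]))]

# Closed-form Kelly fraction for a two-point distribution:
# f* = p_up / (-r_dn) - p_dn / r_up.
f_kelly = p_up / (-r_dn) - p_dn / r_up
print(round(f_star, 2), round(f_kelly, 2))
```

Note that the optimal fraction is driven by both outcomes and their probabilities; a controller that replaced the random return by its expected value would choose a different, generally riskier, allocation.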