Control (optimal control theory)

In optimal control theory, a control is a variable chosen by the controller or agent to manipulate state variables, much as a physical control valve regulates a flow.

Unlike a state variable, a control does not have a predetermined equation of motion.[1]

The goal of optimal control theory is to find a sequence of controls (within an admissible set) that achieves an optimal path for the state variables with respect to a loss function.

In contrast, a control that gives the optimal solution over the remaining period as a function of the state variable at the beginning of that period is called a closed-loop control.[2]
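A closed-loop control can be sketched with the same kind of toy system. The example below is illustrative and not from the article: it assumes a scalar linear-quadratic problem and computes time-varying feedback gains by the backward Riccati recursion, so the control at each step is a function of the current state rather than a precomputed sequence.

```python
def lqr_gains(a, b, q, r, horizon):
    """Backward Riccati recursion for the scalar system x_{t+1} = a*x + b*u
    with stage loss q*x^2 + r*u^2 and terminal loss q*x^2.

    Returns feedback gains k_0..k_{T-1} so that u_t = -k_t * x_t is optimal."""
    p = q                      # terminal value-function coefficient P_T
    gains = []
    for _ in range(horizon):
        k = a * b * p / (r + b * b * p)                      # gain K_t
        p = q + a * a * p - (a * b * p) ** 2 / (r + b * b * p)  # P_t
        gains.append(k)
    gains.reverse()            # recursion ran backward in time
    return gains

def run_closed_loop(x0, a, b, q, r, horizon):
    """Apply the feedback law u_t = -k_t * x_t; the control depends on the state."""
    x = x0
    for k in lqr_gains(a, b, q, r, horizon):
        u = -k * x
        x = a * x + b * u
    return x

final_state = run_closed_loop(x0=2.0, a=1.0, b=1.0, q=1.0, r=1.0, horizon=10)
```

Because each u_t is recomputed from the observed state, the same feedback law corrects for disturbances, which an open-loop sequence cannot do.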