Control-Lyapunov function

The ordinary Lyapunov function is used to test whether a dynamical system is (Lyapunov) stable or (more restrictively) asymptotically stable.

Lyapunov stability means that if the system starts in a state $x \ne 0$ in some domain $D$, then the state will remain in $D$ for all time. For asymptotic stability, the state is also required to converge to $x = 0$.

A control-Lyapunov function is used to test whether a system is asymptotically stabilizable, that is, whether for any state $x$ there exists a control $u(x, t)$ such that the system can be brought to the zero state asymptotically by applying the control $u$.

The theory and application of control-Lyapunov functions were developed by Zvi Artstein and Eduardo D. Sontag in the 1980s and 1990s.

Consider an autonomous dynamical system with inputs

(1)  $\dot{x} = f(x, u)$

where $x \in \mathbb{R}^n$ is the state vector and $u \in \mathbb{R}^m$ is the control vector. Suppose our goal is to drive the system to an equilibrium $x_* \in \mathbb{R}^n$ from every initial state in some domain $D \subset \mathbb{R}^n$. Without loss of generality, suppose the equilibrium is at $x_* = 0$ (an equilibrium at $x_* \ne 0$ can be translated to the origin by a change of variables).

A control-Lyapunov function (CLF) is a continuously differentiable, positive-definite function $V : D \to \mathbb{R}$ (that is, $V(x) > 0$ for all $x \in D$ except $x = 0$, where $V(0) = 0$) such that for each $x \ne 0$ there exists a control $u$ with

$\nabla V(x) \cdot f(x, u) < 0.$

The last condition is the key one; in words, it says that for each state $x$ we can find a control $u$ that will reduce the "energy" $V$. Intuitively, if in each state we can always find a way to reduce the energy, we should eventually be able to bring the energy asymptotically to zero, that is, to bring the system to a stop.
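The decrease condition, that at every nonzero state some admissible control strictly reduces $V$, can be checked numerically on a toy system. The single-integrator model, the quadratic $V$, and the grid of candidate controls below are illustrative assumptions, not from the text:

```python
import numpy as np

# Hypothetical single-integrator example: x' = u, with candidate V(x) = x^2 / 2,
# so the derivative of V along trajectories is grad V(x) * f(x, u) = x * u.

def vdot(x, u):
    """Rate of change of V at state x under control u for this toy system."""
    return x * u

def clf_condition_holds(x, controls):
    """True if at least one candidate control strictly decreases V at state x."""
    return any(vdot(x, u) < 0 for u in controls)

controls = np.linspace(-1.0, 1.0, 21)  # candidate control grid
states = [x for x in np.linspace(-2, 2, 41) if abs(x) > 1e-9]  # exclude origin
print(all(clf_condition_holds(x, controls) for x in states))  # True
```

For this system the check succeeds everywhere away from the origin: when $x > 0$ the control $u = -1$ gives $\dot V < 0$, and when $x < 0$ the control $u = 1$ does.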

This is made rigorous by Artstein's theorem.

Some results apply only to control-affine systems, i.e., control systems of the form

(2)  $\dot{x} = f(x) + \sum_{i=1}^{m} g_i(x) u_i$

where $f : \mathbb{R}^n \to \mathbb{R}^n$ and $g_i : \mathbb{R}^n \to \mathbb{R}^n$ for $i = 1, \dots, m$.

Eduardo Sontag showed that for a given control system, there exists a continuous CLF if and only if the origin is asymptotically stabilizable.[5] It was later shown by Francis H. Clarke, Yuri Ledyaev, Eduardo Sontag, and A. I. Subbotin that every asymptotically controllable system can be stabilized by a (generally discontinuous) feedback.[6] Artstein proved that the dynamical system (2) has a differentiable control-Lyapunov function if and only if there exists a regular stabilizing feedback $u(x)$.

It is often difficult to find a control-Lyapunov function for a given system, but if one is found, then the feedback stabilization problem simplifies considerably.

For the control-affine system (2), Sontag's formula (or Sontag's universal formula) gives the feedback law directly in terms of the derivatives of the CLF.[4, Eq. 5.56] In the special case of a single-input system, writing $a(x) = \nabla V(x) \cdot f(x)$ and $b(x) = \nabla V(x) \cdot g(x)$, the formula reads

$u(x) = -\dfrac{a(x) + \sqrt{a(x)^2 + b(x)^4}}{b(x)}$ if $b(x) \ne 0$, and $u(x) = 0$ if $b(x) = 0$.
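A minimal sketch of the single-input formula in Python, assuming a hypothetical scalar system $\dot x = x + u$ with CLF $V = x^2/2$, so that $a(x) = x^2$ and $b(x) = x$:

```python
import numpy as np

def sontag_u(a, b):
    """Sontag's formula for a single-input control-affine system:
    u = -(a + sqrt(a^2 + b^4)) / b when b != 0, else u = 0."""
    if b == 0.0:
        return 0.0
    return -(a + np.sqrt(a**2 + b**4)) / b

# In closed loop, dV/dt = a + b*u = -sqrt(a^2 + b^4), strictly negative away
# from the origin. Illustrative evaluation at the state x = 1.5:
x = 1.5
a, b = x**2, x          # a(x) = grad V . f, b(x) = grad V . g for this toy system
u = sontag_u(a, b)
print(a + b * u < 0)    # True: V decreases at this state
```

Note the closed-loop identity $a + b\,u = -\sqrt{a^2 + b^4}$, which is what makes the formula "universal": it turns any CLF for the system into an explicit stabilizing feedback.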

For the general nonlinear system (1), the input $u$ can be found by solving a static non-linear programming problem for each state $x$.
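One way to sketch this per-state program numerically: minimize $\nabla V(x) \cdot f(x, u)$ over a bounded input set with a generic scalar optimizer. The dynamics, the CLF, and the input bound below are illustrative assumptions:

```python
from scipy.optimize import minimize_scalar

# Hypothetical scalar system x' = x^3 + u with CLF V(x) = x^2 / 2,
# so grad V(x) . f(x, u) = x * (x^3 + u).

def vdot(x, u):
    return x * (x**3 + u)

def best_input(x, u_max=10.0):
    """Per-state program: minimize vdot(x, u) over the bounded input set
    [-u_max, u_max]; the minimizer serves as the control at state x."""
    res = minimize_scalar(lambda u: vdot(x, u), bounds=(-u_max, u_max),
                          method="bounded")
    return res.x

x = 0.8
u = best_input(x)
print(vdot(x, u) < 0)   # True: the minimizing input decreases V at this state
```

If the minimum value is negative at every nonzero state, $V$ satisfies the CLF decrease condition there; solving this small optimization at each step is one practical (if expensive) way to realize the feedback.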

Here is a characteristic example of applying a Lyapunov candidate function to a control problem.

Consider the non-linear system, a mass-spring-damper system with spring hardening and position-dependent mass, described by

$m(1 + q^2)\ddot{q} + b\dot{q} + K_0 q + K_1 q^3 = u.$

Now, given the desired state $q_d$ and the actual state $q$, define the error $e = q_d - q$ and a Lyapunov candidate

$V = \frac{1}{2}(\dot{e} + \alpha e)^2.$

The goal is to get the time derivative to be

$\dot{V} = -\kappa V,$

which is globally exponentially stable if $V$ is globally positive definite (which it is). To fulfill the requirement, differentiate:

$\dot{V} = (\dot{e} + \alpha e)(\ddot{e} + \alpha\dot{e}),$

which upon substitution of the dynamics, $\ddot{q} = \frac{u - b\dot{q} - K_0 q - K_1 q^3}{m(1 + q^2)}$, yields the control law

$u = m(1 + q^2)\left(\ddot{q}_d + \alpha\dot{e} + \frac{\kappa}{2}(\dot{e} + \alpha e)\right) + b\dot{q} + K_0 q + K_1 q^3,$

with $\kappa$ and $\alpha$, both greater than zero, as tunable parameters. This control law guarantees global exponential stability, since substitution into the time derivative yields, as expected,

$\dot{V} = -\kappa V,$

a linear first-order differential equation with solution

$V = V(0)\, e^{-\kappa t}.$

Hence the error and error rate, remembering that $V = \frac{1}{2}(\dot{e} + \alpha e)^2$, decay exponentially to zero.

If you wish to tune a particular response from this, it is necessary to substitute back into the solution derived for $V$ and solve for $e$. This is left as an exercise for the reader, but the first few steps of the solution are:

$\dot{e} + \alpha e = \pm\sqrt{2 V(0)}\, e^{-\kappa t / 2},$

which is a linear first-order differential equation in $e$ and can then be solved using any linear differential equation methods.
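As a numerical sanity check, the control law can be simulated in closed loop and the decay $V(t) = V(0)\,e^{-\kappa t}$ verified. The parameter values, the constant setpoint, and the forward-Euler integrator below are illustrative choices, not from the derivation:

```python
import numpy as np

# Illustrative plant/controller parameters (assumptions, not from the text)
m, b, K0, K1 = 1.0, 0.5, 2.0, 1.0
alpha, kappa = 2.0, 3.0
q_d = 1.0                 # constant setpoint, so q_d' = q_d'' = 0

def control(q, qdot):
    """Control law: force r = e' + alpha*e to satisfy r' = -(kappa/2) r,
    so that V = r^2/2 obeys V' = -kappa V."""
    e, edot = q_d - q, -qdot
    r = edot + alpha * e
    qddot_des = alpha * edot + (kappa / 2.0) * r   # desired q'' (with q_d'' = 0)
    return m * (1 + q**2) * qddot_des + b * qdot + K0 * q + K1 * q**3

# Forward-Euler simulation from rest at the origin
dt, T = 1e-4, 2.0
q, qdot = 0.0, 0.0
r0 = -qdot + alpha * (q_d - q)
V0 = 0.5 * r0**2
for _ in range(int(T / dt)):
    u = control(q, qdot)
    qddot = (u - b * qdot - K0 * q - K1 * q**3) / (m * (1 + q**2))
    q, qdot = q + dt * qdot, qdot + dt * qddot
r = -qdot + alpha * (q_d - q)
V = 0.5 * r**2
print(V <= 1.1 * V0 * np.exp(-kappa * T))  # True: V decays like V(0) e^{-kappa t}
```

In this run $V(0) = 2$ and after two seconds $V \approx V(0)\,e^{-6}$, matching the predicted exponential rate; the state $q$ converges to the setpoint through the remaining first-order error dynamics.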