Danskin's theorem

In convex analysis, Danskin's theorem is a theorem which provides information about the derivatives of a function of the form

f(x) = max_{z ∈ Z} φ(x, z).

The theorem has applications in optimization, where it is sometimes used to solve minimax problems.

The original theorem given by J. M. Danskin in his 1967 monograph [1] provides a formula for the directional derivative of the maximum of a (not necessarily convex) directionally differentiable function.

An extension to more general conditions was proven in 1971 by Dimitri Bertsekas.

The following version is proven in "Nonlinear Programming" (1991).[2] Suppose

φ : R^n × Z → R

is a continuous function of two arguments, where

Z ⊂ R^m

is a compact set.

Under these conditions, Danskin's theorem provides conclusions regarding the convexity and differentiability of the function

f(x) = max_{z ∈ Z} φ(x, z).

To state these results, we define the set of maximizing points

Z_0(x) = { z̄ : φ(x, z̄) = max_{z ∈ Z} φ(x, z) }.

Danskin's theorem then provides the following results.

Convexity: f(x) is convex if φ(x, z) is convex in x for every z ∈ Z.

Directional semi-differentiability: the semi-differential of f(x) in the direction y equals max_{z ∈ Z_0(x)} D_y φ(x, z), where D_y φ(x, z) denotes the one-sided directional derivative of φ(·, z) at x in the direction y.

Derivative: f(x) is differentiable at x if Z_0(x) consists of a single element z̄ and φ(·, z̄) is differentiable at x, and in that case the gradient of f is ∇f(x) = ∇_x φ(x, z̄).
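The derivative formula can be checked numerically. Below is a minimal sketch with a hypothetical choice φ(x, z) = x·z − z² on Z = [−1, 1]: φ is strictly concave in z, so the maximizer z̄(x) is unique and Danskin's theorem says f′(x) = ∂φ/∂x evaluated at z̄(x).

```python
import numpy as np

# Hypothetical example: phi(x, z) = x*z - z**2 on the compact set Z = [-1, 1].
# phi is strictly concave in z, so the maximizer is unique for every x.
def phi(x, z):
    return x * z - z ** 2

def z_star(x):
    # argmax_z phi(x, z) over Z = [-1, 1]: unconstrained optimum x/2, clipped to Z
    return np.clip(x / 2.0, -1.0, 1.0)

def f(x):
    # f(x) = max_{z in Z} phi(x, z)
    return phi(x, z_star(x))

x = 0.7
# Danskin: f'(x) = d/dx phi(x, z) at z = z_star(x); here d/dx phi = z, so
# the predicted derivative is simply z_star(x).
danskin_grad = z_star(x)

# Finite-difference check of f'(x)
h = 1e-6
fd_grad = (f(x + h) - f(x - h)) / (2 * h)
print(abs(danskin_grad - fd_grad) < 1e-4)  # True
```

Note that the code never differentiates through the (nonsmooth) clipping in z_star: the theorem guarantees the maximizer can be treated as constant when differentiating f.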

In the statement of Danskin's theorem, it is important to conclude semi-differentiability of f and not directional differentiability, as this simple example shows. Set

Z = {−1, +1},  φ(x, z) = z x.

Then f(x) = |x|, which is semi-differentiable but does not have a (two-sided) directional derivative at x = 0.
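The example above can be verified directly: the two one-sided derivatives of |x| at 0 exist but disagree, and Danskin's semi-differential formula reproduces them, since both z = −1 and z = +1 maximize at x = 0.

```python
# f(x) = |x| written as a max over the compact set Z = {-1, +1}.
def f(x):
    return max(z * x for z in (-1.0, 1.0))

h = 1e-7
right = (f(0.0 + h) - f(0.0)) / h   # one-sided derivative from the right
left = (f(0.0) - f(0.0 - h)) / h    # one-sided derivative from the left
print(right, left)  # ~ +1.0 and -1.0: no single two-sided directional derivative

# Danskin's formula: Z_0(0) = {-1, +1}, so the semi-derivative of f at 0 in
# direction y is max_{z in Z_0(0)} z*y = |y|, matching the one-sided slopes.
for y in (1.0, -2.5):
    assert max(z * y for z in (-1.0, 1.0)) == abs(y)
```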

The 1971 Ph.D. thesis by Dimitri P. Bertsekas (Proposition A.22)[3] proves a more general result, which does not require that φ(·, z) is differentiable. Instead it assumes that φ(·, z) is an extended real-valued closed proper convex function for each z in the compact set Z, that int(dom(f)), the interior of the effective domain of f, is nonempty, and that φ is continuous on the set int(dom(f)) × Z. Then for all x in int(dom(f)), the subdifferential of f at x is given by

∂f(x) = conv { ∂φ(x, z) : z ∈ Z_0(x) },

where ∂φ(x, z) denotes the subdifferential of φ(·, z) at x.
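For a scalar piecewise-linear example the convex hull in this formula is just an interval of slopes. The sketch below uses hypothetical affine pieces φ(x, z) = slopes[z]·x + offsets[z] over the finite (hence compact) set Z = {0, 1, 2}; each φ(·, z) is convex, so f is convex, and at a kink the subdifferential is the convex hull of the slopes of the active (maximizing) pieces.

```python
# Hypothetical affine pieces: phi(x, z) = slopes[z]*x + offsets[z], z in {0, 1, 2}.
slopes = [-1.0, 0.0, 2.0]
offsets = [0.0, 0.0, -2.0]

def phi(x, z):
    return slopes[z] * x + offsets[z]

def subdifferential(x, tol=1e-9):
    # Z_0(x): indices attaining the max; conv of their gradients is, for
    # scalar x, simply the interval [min active slope, max active slope].
    values = [phi(x, z) for z in range(3)]
    m = max(values)
    active = [slopes[z] for z in range(3) if values[z] >= m - tol]
    return min(active), max(active)

print(subdifferential(0.0))  # kink: pieces z=0 and z=1 tie, interval (-1.0, 0.0)
print(subdifferential(0.5))  # unique maximizer z=1: singleton (0.0, 0.0)
print(subdifferential(1.0))  # kink: pieces z=1 and z=2 tie, interval (0.0, 2.0)
```

When the interval degenerates to a single point, f is differentiable there and the formula reduces to the gradient statement of the smooth case.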