Karush–Kuhn–Tucker conditions

Similar to the Lagrange approach, the constrained maximization (minimization) problem is rewritten as a Lagrange function whose optimal point is a saddle point: a global maximum (minimum) over the domain of the choice variables and a global minimum (maximum) over the multipliers.

Later scholars discovered that the necessary conditions for this problem had already been stated by William Karush in his master's thesis in 1939.

Consider the following nonlinear optimization problem in standard form: minimize f(x) subject to g_i(x) ≤ 0 (i = 1, …, m) and h_j(x) = 0 (j = 1, …, ℓ). Corresponding to the constrained optimization problem one can form the Lagrangian function

L(x, μ, λ) = f(x) + μ^T g(x) + λ^T h(x) = f(x) + Σ_{i=1}^m μ_i g_i(x) + Σ_{j=1}^ℓ λ_j h_j(x),

where g(x) = (g_1(x), …, g_m(x))^T and h(x) = (h_1(x), …, h_ℓ(x))^T.

Since the idea of this approach is to find a supporting hyperplane on the feasible set Γ = {x : g_i(x) ≤ 0 for all i}, the proof of the Karush–Kuhn–Tucker theorem makes use of Farkas' lemma.

[6] The system of equations and inequalities corresponding to the KKT conditions is usually not solved directly, except in the few special cases where a closed-form solution can be derived analytically.

In general, many optimization algorithms can be interpreted as methods for numerically solving the KKT system of equations and inequalities.
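For instance, for an equality-constrained quadratic program the KKT system is linear and can be solved in a single step. The following sketch (with made-up problem data, not from the article) solves min ½ x^T Q x + c^T x subject to Ax = b by assembling and solving its KKT system:

```python
import numpy as np

# Minimize 1/2 x^T Q x + c^T x  subject to  A x = b.
# Stationarity (Q x + c + A^T lam = 0) and primal feasibility (A x = b)
# together form the linear KKT system  [[Q, A^T], [A, 0]] (x, lam) = (-c, b).
Q = np.array([[2.0, 0.0], [0.0, 2.0]])   # hypothetical problem data
c = np.array([-2.0, -4.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])

n, m = Q.shape[0], A.shape[0]
K = np.block([[Q, A.T], [A, np.zeros((m, m))]])
rhs = np.concatenate([-c, b])
sol = np.linalg.solve(K, rhs)
x, lam = sol[:n], sol[n:]

# Verify stationarity and primal feasibility at the computed point.
assert np.allclose(Q @ x + c + A.T @ lam, 0)
assert np.allclose(A @ x, b)
```

For this data the solution is x = (0, 1) with multiplier λ = 2, which can be confirmed by hand from the two stationarity equations and the constraint.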

Suppose that x* is a local optimum and the optimization problem satisfies some regularity conditions (see below). Then there exist constants μ_i (i = 1, …, m) and λ_j (j = 1, …, ℓ), called KKT multipliers, such that the following four groups of conditions hold:

Stationarity (for minimizing f): ∇f(x*) + Σ_{i=1}^m μ_i ∇g_i(x*) + Σ_{j=1}^ℓ λ_j ∇h_j(x*) = 0.

Primal feasibility: g_i(x*) ≤ 0 for all i, and h_j(x*) = 0 for all j.

Dual feasibility: μ_i ≥ 0 for all i.

Complementary slackness: μ_i g_i(x*) = 0 for all i.
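As a quick numerical illustration (a toy problem of my choosing, not from the article), the four conditions can be verified at the known solution of a one-dimensional problem:

```python
import numpy as np

# Toy problem: minimize f(x) = (x - 2)^2 subject to g(x) = x - 1 <= 0.
# The constrained minimum is x* = 1 with KKT multiplier mu* = 2.
x_star, mu_star = 1.0, 2.0

grad_f = 2 * (x_star - 2)   # f'(x*) = -2
grad_g = 1.0                # g'(x*) = 1
g_val = x_star - 1          # g(x*) = 0, so the constraint is active

assert np.isclose(grad_f + mu_star * grad_g, 0)   # stationarity
assert g_val <= 0                                 # primal feasibility
assert mu_star >= 0                               # dual feasibility
assert np.isclose(mu_star * g_val, 0)             # complementary slackness
```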

(sufficiency) If there exist a solution x* to the primal problem and a solution (μ*, λ*) to the dual problem such that together they satisfy the KKT conditions, then the problem pair has strong duality, and x*, (μ*, λ*) is a solution pair to the primal and dual problems.

(necessity) If the problem pair has strong duality, then for any solution x* to the primal problem and any solution (μ*, λ*) to the dual problem, the pair x*, (μ*, λ*) must satisfy the KKT conditions.

Proof sketch: first, for x* and (μ*, λ*) to satisfy the KKT conditions is equivalent to them being a Nash equilibrium of the Lagrangian game. Fix (μ*, λ*) and vary x: equilibrium is equivalent to stationarity. Fix x* and vary (μ*, λ*): equilibrium is equivalent to primal feasibility and complementary slackness.

(sufficiency) The solution pair satisfies the KKT conditions, thus is a Nash equilibrium, and therefore closes the duality gap.

(necessity) Any solution pair must close the duality gap, thus they must constitute a Nash equilibrium (since neither side could do any better), thus they satisfy the KKT conditions.

The primal problem can be interpreted as moving a particle in the space of x, subject to two kinds of forces: the potential force −∇f, which pushes the particle toward lower values of f, and the constraint forces, which keep it inside the feasible set. At an optimum the potential force is exactly balanced by the constraint forces (stationarity). The constraint forces arising from the inequality constraints must be one-sided, pointing inwards into the feasible set (dual feasibility: μ_i ≥ 0). And if an inequality constraint is inactive, i.e. g_i(x*) < 0, then, since the particle is not on the boundary, the one-sided constraint force cannot activate, giving complementary slackness: μ_i g_i(x*) = 0.

The necessary conditions can be written compactly with Jacobian matrices of the constraint functions: stationarity becomes ∇f(x*) + Dg(x*)^T μ + Dh(x*)^T λ = 0, where Dg(x*) and Dh(x*) are the Jacobians of g = (g_1, …, g_m) and h = (h_1, …, h_ℓ) at x*.
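A small sketch of this Jacobian form, with hypothetical problem data:

```python
import numpy as np

# Hypothetical problem: minimize f(x) = x1^2 + x2^2 subject to
# h(x) = x1 + x2 - 2 = 0. The solution is x* = (1, 1) with lambda* = -2,
# since grad f(x*) = (2, 2) and Dh(x*) = (1, 1).
x = np.array([1.0, 1.0])
lam = np.array([-2.0])

grad_f = 2 * x                 # gradient of f at x*
Dh = np.array([[1.0, 1.0]])    # Jacobian of h at x*

# Stationarity in matrix form: grad f + Dh^T lambda = 0.
residual = grad_f + Dh.T @ lam
assert np.allclose(residual, 0)
```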

One can ask whether a minimizer point x* of the original, constrained optimization problem (assuming one exists) has to satisfy the above KKT conditions. This is similar to asking under what conditions the minimizer of a function in an unconstrained problem has to satisfy the condition ∇f(x*) = 0.

For the constrained case, the situation is more complicated, and one can state a variety of (increasingly complicated) "regularity" conditions under which a constrained minimizer also satisfies the KKT conditions.

Some common examples of conditions that guarantee this include the linear independence constraint qualification (LICQ, the most frequently used one: the gradients of the active inequality constraints and of the equality constraints are linearly independent at x*), the Mangasarian–Fromovitz constraint qualification (MFCQ), the constant rank constraint qualification (CRCQ), the constant positive linear dependence constraint qualification (CPLD), and, for convex problems, Slater's condition. The strict implications LICQ ⇒ MFCQ ⇒ CPLD and LICQ ⇒ CRCQ ⇒ CPLD can be shown. In practice weaker constraint qualifications are preferred since they apply to a broader selection of problems.
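LICQ can be checked numerically as a matrix-rank condition on the gradients of the active constraints. A sketch with hypothetical constraints:

```python
import numpy as np

# Hypothetical constraints: g1(x) = x1^2 + x2^2 - 1 <= 0 and g2(x) = -x2 <= 0,
# checked at x* = (1, 0), where both constraints are active.
x_star = np.array([1.0, 0.0])

grad_g1 = np.array([2 * x_star[0], 2 * x_star[1]])  # gradient of g1 at x*: (2, 0)
grad_g2 = np.array([0.0, -1.0])                     # gradient of g2 at x*

# LICQ holds iff the stacked active-constraint gradients have full row rank.
G = np.vstack([grad_g1, grad_g2])
licq_holds = np.linalg.matrix_rank(G) == G.shape[0]
```

Here the two gradients (2, 0) and (0, −1) are linearly independent, so LICQ holds at x*.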

For smooth functions, second-order sufficient conditions (SOSC) involve second derivatives, which explains the name.

If the objective function f of a maximization problem is a differentiable concave function, the inequality constraints g_i are differentiable convex functions, the equality constraints h_j are affine, and Slater's condition holds, then the necessary conditions are also sufficient for optimality. The same holds if the objective function f of a minimization problem is a differentiable convex function.

[12][13] For smooth, non-linear optimization problems, a second-order sufficient condition is given as follows.

The solution x* found in the above section is a constrained local minimum if, for the Lagrangian L(x, μ, λ) = f(x) + Σ_i μ_i g_i(x) + Σ_j λ_j h_j(x), it holds that s^T ∇²_{xx} L(x*, μ*, λ*) s > 0 for every vector s ≠ 0 satisfying ∇h_j(x*)^T s = 0 for all equality constraints and ∇g_i(x*)^T s = 0 for all active inequality constraints with μ_i > 0 (strict complementarity).
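This check reduces to positive definiteness of the Hessian of the Lagrangian restricted to the tangent space of the constraints (the reduced Hessian). A sketch with hypothetical data:

```python
import numpy as np

# Hypothetical problem: minimize f(x) = x1^2 - x2^2 subject to h(x) = x2 = 0.
# KKT gives x* = (0, 0) with lambda* = 0. The Hessian of the Lagrangian is
# indefinite, but SOSC only requires positivity on directions s tangent to
# the constraint, i.e. on the null space of grad h(x*) = (0, 1).
H = np.array([[2.0, 0.0], [0.0, -2.0]])   # Hessian of L at (x*, lambda*)
grad_h = np.array([[0.0, 1.0]])           # constraint Jacobian at x*

# Basis for the null space of grad_h (directions with grad_h @ s = 0).
Z = np.array([[1.0], [0.0]])
reduced_hessian = Z.T @ H @ Z

# SOSC holds iff the reduced Hessian is positive definite.
sosc_holds = np.all(np.linalg.eigvalsh(reduced_hessian) > 0)
```

Here the reduced Hessian is the 1×1 matrix [[2]], so SOSC holds and (0, 0) is a constrained local minimum even though the full Hessian is indefinite.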

Often in mathematical economics the KKT approach is used in theoretical models in order to obtain qualitative results.

For example,[14] consider a firm that maximizes its sales revenue subject to a minimum profit constraint.

Let Q denote the quantity of output chosen by the firm, R(Q) the revenue function, C(Q) the cost function, and G_min the minimum acceptable level of profit. The problem expressed in the previously given minimization form is

minimize −R(Q) subject to G_min − R(Q) + C(Q) ≤ 0,

and the KKT conditions for an optimum with Q > 0 are

−(1 + μ) R′(Q) + μ C′(Q) = 0 (stationarity),
G_min − R(Q) + C(Q) ≤ 0, μ ≥ 0, μ [G_min − R(Q) + C(Q)] = 0.

Since the profit constraint binds at the optimum (otherwise the firm would operate at the unconstrained revenue maximum, where R′(Q) = 0), μ is positive, and stationarity gives R′(Q) = [μ/(1 + μ)] C′(Q) < C′(Q) whenever C′(Q) > 0. The revenue-maximizing firm therefore operates at a level of output at which marginal revenue is less than marginal cost, a result that is of interest because it contrasts with the behavior of a profit-maximizing firm, which operates at a level at which they are equal.
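The result can be checked numerically with concrete (hypothetical) functional forms, for example R(Q) = 10Q − Q² and C(Q) = 2Q with G_min = 15.75, for which the optimum is Q* = 4.5 with multiplier μ* = 1:

```python
import numpy as np

# Hypothetical firm: revenue R(Q) = 10Q - Q^2, cost C(Q) = 2Q, G_min = 15.75.
# The profit constraint R(Q) - C(Q) >= G_min holds for Q in [3.5, 4.5], and
# revenue is increasing up to Q = 5, so the revenue maximizer picks Q* = 4.5.
Q, mu = 4.5, 1.0

MR = 10 - 2 * Q                     # marginal revenue R'(Q) = 1
MC = 2.0                            # marginal cost C'(Q)
profit = (10 * Q - Q**2) - 2 * Q    # R(Q) - C(Q)

# Stationarity of L = -R + mu*(G_min - R + C): -(1 + mu)*MR + mu*MC = 0.
assert np.isclose(-(1 + mu) * MR + mu * MC, 0)
# The profit constraint binds exactly (complementary slackness with mu > 0).
assert np.isclose(profit, 15.75)
# Marginal revenue is below marginal cost at the optimum.
assert MR < MC
```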

The KKT multipliers can be interpreted as shadow prices, i.e. the rates at which the optimal value changes as the constraints are relaxed. This interpretation is especially important in economics and is used, for instance, in utility maximization problems.

[Figure: Inequality constraint diagram for optimization problems]