Constrained optimization

Constraints can be either hard constraints, which set conditions for the variables that are required to be satisfied, or soft constraints, which penalize certain variable values in the objective function if, and to the extent that, the conditions on the variables are not satisfied.

A constraint optimization problem (COP) is a constraint satisfaction problem (CSP) that includes an objective function to be optimized.[1]

A general constrained minimization problem may be written as follows:[2]

\[
\begin{aligned}
\min\quad & f(\mathbf{x}) \\
\text{subject to}\quad & g_i(\mathbf{x}) = c_i \quad \text{for } i = 1, \ldots, n \quad \text{(equality constraints)} \\
& h_j(\mathbf{x}) \ge d_j \quad \text{for } j = 1, \ldots, m \quad \text{(inequality constraints)}
\end{aligned}
\]

where f(x) is the objective function that needs to be optimized subject to the constraints.

In some problems, often called constraint optimization problems, the objective function is actually the sum of cost functions, each of which penalizes the extent (if any) to which a soft constraint (a constraint which is preferred but not required to be satisfied) is violated.

Many constrained optimization algorithms can be adapted to the unconstrained case, often via the use of a penalty method.

However, search steps taken by the unconstrained method may be unacceptable for the constrained problem, leading to a lack of convergence.[3]
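For illustration, the following is a minimal sketch of a quadratic penalty method; the problem data (minimizing (x - 2)^2 + (y - 1)^2 subject to x + y = 1) and the function names are assumptions chosen for the example, not taken from the cited sources.

    # Quadratic penalty method: an illustrative sketch.
    # Assumed problem: minimize (x - 2)^2 + (y - 1)^2 subject to x + y = 1.
    from scipy.optimize import minimize  # unconstrained solver (BFGS)

    def objective(v):
        x, y = v
        return (x - 2.0)**2 + (y - 1.0)**2

    def violation(v):
        x, y = v
        return x + y - 1.0  # equality constraint written as g(x, y) = 0

    def penalized(v, mu):
        # Unconstrained surrogate: objective plus a quadratic penalty on the violation.
        return objective(v) + mu * violation(v)**2

    v = [0.0, 0.0]
    for mu in [1.0, 10.0, 100.0, 1000.0]:
        # Re-solve the surrogate with a growing penalty weight, warm-starting
        # from the previous solution.
        v = minimize(lambda w: penalized(w, mu), v, method="BFGS").x

    print(v)  # approaches the constrained optimum (1, 0) as mu grows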

For very simple problems, say a function of two variables subject to a single equality constraint, it is most practical to apply the method of substitution.[4]

The idea is to substitute the constraint into the objective function to create a composite function that incorporates the effect of the constraint.
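For example (an assumed illustration, not from the cited source), to minimize f(x, y) = x^2 + y^2 subject to x + y = 1, the constraint can be solved for y and substituted into the objective:

\[
y = 1 - x \;\Rightarrow\; h(x) = x^2 + (1 - x)^2, \qquad h'(x) = 4x - 2 = 0 \;\Rightarrow\; x = \tfrac{1}{2},\; y = \tfrac{1}{2},
\]

reducing the constrained problem to an unconstrained one in a single variable.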

If the constrained problem has only equality constraints, the method of Lagrange multipliers can be used to convert it into an unconstrained problem whose number of variables is the original number of variables plus the original number of equality constraints.
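As an assumed illustration, minimizing f(x, y) = x + y subject to x^2 + y^2 = 1 introduces one multiplier and gives an unconstrained problem in the three variables x, y, and lambda:

\[
\mathcal{L}(x, y, \lambda) = x + y + \lambda\,(x^2 + y^2 - 1), \qquad
\nabla \mathcal{L} = 0 \;\Rightarrow\; 1 + 2\lambda x = 0, \;\; 1 + 2\lambda y = 0, \;\; x^2 + y^2 = 1,
\]

whose stationary points are x = y = 1/sqrt(2) and x = y = -1/sqrt(2), the latter being the constrained minimum.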

Alternatively, if the constraints are all equality constraints and are all linear, they can be solved for some of the variables in terms of the others, and the former can be substituted out of the objective function, leaving an unconstrained problem in a smaller number of variables.
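For instance (an assumed example), with the linear equality constraints x + y + z = 1 and x - y = 0, one can solve for y and z in terms of x and substitute them out of an objective such as f(x, y, z) = x^2 + y^2 + z^2:

\[
y = x, \quad z = 1 - 2x \;\Rightarrow\; \tilde{f}(x) = x^2 + x^2 + (1 - 2x)^2 = 6x^2 - 4x + 1,
\]

an unconstrained problem in one variable, minimized at x = y = z = 1/3.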

If the objective function and all of the hard constraints are linear and some hard constraints are inequalities, then the problem is a linear programming problem.

This can be solved by the simplex method, which usually works in polynomial time in the problem size but is not guaranteed to, or by interior point methods which are guaranteed to work in polynomial time.
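The following sketch solves a small linear program with SciPy's linprog routine (recent SciPy versions use the HiGHS solvers by default); the problem data are assumptions chosen for illustration.

    # Linear program (illustrative data): maximize x + 2y
    # subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
    from scipy.optimize import linprog

    c = [-1.0, -2.0]        # linprog minimizes, so negate to maximize x + 2y
    A_ub = [[1.0, 1.0],
            [1.0, 3.0]]
    b_ub = [4.0, 6.0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, -res.fun)  # expected optimum near x = (3, 1), value 5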

If all the hard constraints are linear and some are inequalities, but the objective function is quadratic, the problem is a quadratic programming problem.

It can still be solved in polynomial time by the ellipsoid method if the objective function is convex; otherwise the problem may be NP-hard.
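A minimal sketch of the convex quadratic case, using SciPy's general-purpose SLSQP solver rather than a dedicated quadratic-programming method; the problem data are assumptions chosen for illustration.

    # Convex quadratic program (illustrative): minimize (x - 2)^2 + (y - 2)^2
    # subject to x + y <= 2, x >= 0, y >= 0.
    import numpy as np
    from scipy.optimize import minimize

    def objective(v):
        x, y = v
        return (x - 2.0)**2 + (y - 2.0)**2

    constraints = [
        # SLSQP expects inequality constraints written as fun(v) >= 0.
        {"type": "ineq", "fun": lambda v: 2.0 - v[0] - v[1]},  # x + y <= 2
    ]
    bounds = [(0.0, None), (0.0, None)]  # x >= 0, y >= 0

    res = minimize(objective, np.array([0.0, 0.0]), method="SLSQP",
                   bounds=bounds, constraints=constraints)
    print(res.x)  # expected optimum near (1, 1)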

Allowing inequality constraints, the KKT approach to nonlinear programming generalizes the method of Lagrange multipliers.
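For the general form above, and under one common sign convention (conventions differ across texts), the first-order KKT conditions at a candidate minimum x* can be stated as:

\[
\begin{aligned}
&\nabla f(\mathbf{x}^*) = \sum_{i=1}^{n} \lambda_i \nabla g_i(\mathbf{x}^*) + \sum_{j=1}^{m} \mu_j \nabla h_j(\mathbf{x}^*), \\
&g_i(\mathbf{x}^*) = c_i, \qquad h_j(\mathbf{x}^*) \ge d_j, \\
&\mu_j \ge 0, \qquad \mu_j \bigl(h_j(\mathbf{x}^*) - d_j\bigr) = 0 \quad \text{(complementary slackness)},
\end{aligned}
\]

so that a multiplier can be nonzero only when the corresponding inequality constraint is active.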

Constraint optimization can be solved by branch-and-bound algorithms.

These are backtracking algorithms storing the cost of the best solution found during execution and using it to avoid part of the search.

Assuming that cost is to be maximized, the efficiency of these algorithms depends on how the cost that can be obtained from extending a partial solution is evaluated.

Indeed, if the algorithm can backtrack from a partial solution, part of the search is skipped. This requires estimating, for each partial solution, the best cost that could be obtained by extending it; the algorithm backtracks whenever this estimate does not exceed the best cost found so far.

On the other hand, this estimated cost cannot be lower than the effective cost that can be obtained by extending the solution, as otherwise the algorithm could backtrack while a solution better than the best found so far exists.

As a result, the algorithm requires an upper bound on the cost that can be obtained from extending a partial solution, and this upper bound should be as small as possible.

One way of evaluating this upper bound for a partial solution is to consider each soft constraint separately.

For each soft constraint, the maximal possible value for any assignment to the unassigned variables is assumed; the sum of these values is an upper bound, because the soft constraints cannot take higher values.
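The sketch below (an illustrative assumption, with made-up problem data) combines both ideas: a depth-first branch-and-bound search over a small soft-constraint maximization problem, pruned with the per-constraint upper bound just described.

    # Branch and bound for a toy soft-constraint maximization problem.
    # Illustrative sketch only; the variables, domains and constraints are assumptions.
    from itertools import product

    variables = ["x", "y", "z"]
    domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}

    # Soft constraints as (scope, function); each returns a non-negative value to maximize.
    soft_constraints = [
        (("x", "y"), lambda x, y: 3 if x != y else 0),
        (("y", "z"), lambda y, z: 2 if y == z else 1),
        (("x",),     lambda x: 2 * x),
    ]

    def bound(assignment):
        """Upper bound on the value reachable from a partial assignment: every soft
        constraint contributes its maximal value over the unassigned variables."""
        total = 0
        for scope, fn in soft_constraints:
            free = [v for v in scope if v not in assignment]
            best = 0
            for values in product(*(domains[v] for v in free)):
                full = dict(assignment, **dict(zip(free, values)))
                best = max(best, fn(*(full[v] for v in scope)))
            total += best
        return total

    best_value, best_solution = float("-inf"), None

    def search(assignment, i):
        global best_value, best_solution
        if bound(assignment) <= best_value:
            return  # prune: the optimistic bound cannot beat the incumbent
        if i == len(variables):
            value = bound(assignment)  # all variables assigned: the bound is exact
            if value > best_value:
                best_value, best_solution = value, dict(assignment)
            return
        var = variables[i]
        for val in domains[var]:
            assignment[var] = val
            search(assignment, i + 1)
            del assignment[var]

    search({}, 0)
    print(best_solution, best_value)  # {'x': 1, 'y': 0, 'z': 0} with value 7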

Another method, Russian doll search, runs a branch-and-bound algorithm on a nested sequence of subproblems: each such problem is the subproblem obtained by dropping an initial sequence of variables from the original problem, together with the constraints containing them, and the subproblems are solved in order of increasing size, so that the optimum of each one is known before the next is attempted.

More precisely, the cost of soft constraints containing both assigned and unassigned variables is estimated as above (or using an arbitrary other method); the cost of soft constraints containing only unassigned variables is instead estimated using the optimal solution of the corresponding problem, which is already known at this point.
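In symbols (using assumed notation), if the variables are ordered x_1, ..., x_n, a partial solution assigns x_1, ..., x_i, and the optimum of the subproblem on the remaining variables x_{i+1}, ..., x_n is already known, the bound combines three parts:

\[
\text{bound} \;=\;
\underbrace{\sum_{C\ \text{fully assigned}} C}_{\text{exact values}}
\;+\;
\underbrace{\sum_{C\ \text{partially assigned}} \max C}_{\text{optimistic estimates}}
\;+\;
\underbrace{\operatorname{opt}(x_{i+1}, \ldots, x_n)}_{\text{known subproblem optimum}}.
\]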

There is a similarity between the Russian doll search method and dynamic programming.

However, whereas dynamic programming directly combines the results obtained on subproblems to get the result of the whole problem, Russian doll search uses them only as bounds during its search.

The bucket elimination algorithm can be adapted for constraint optimization.

Bucket elimination works with an (arbitrary) ordering of the variables. Every variable is associated with a bucket of constraints: the bucket of a variable contains all constraints in which that variable is the highest in the ordering. Proceeding from the last variable to the first, the algorithm replaces the constraints of each bucket with a single new soft constraint on the remaining variables, obtained by summing the bucket's constraints and maximizing out the bucket's variable: if x is the variable being eliminated and S_1, ..., S_k are the soft constraints in its bucket, the new soft constraint is defined by

\[
S'(\mathbf{y}) \;=\; \max_{x} \sum_{i=1}^{k} S_i(x, \mathbf{y}),
\]

where y stands for the other variables occurring in S_1, ..., S_k. The new constraint is then placed in the bucket of its highest remaining variable.
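A compact sketch of this max-sum bucket elimination on a toy problem; the variable ordering, domains, and constraints are assumptions made for illustration.

    # Bucket elimination for soft-constraint maximization (max-sum), illustrative only.
    from itertools import product

    order = ["x", "y", "z"]                       # (arbitrary) variable ordering
    domains = {"x": [0, 1], "y": [0, 1], "z": [0, 1]}

    # Soft constraints as (scope, function) pairs.
    constraints = [
        (("x", "y"), lambda x, y: 4 if x != y else 0),
        (("y", "z"), lambda y, z: 1 if y == z else 2),
    ]

    def eliminate(var, bucket):
        """Sum the bucket's constraints and maximize out `var`, returning a new
        soft constraint over the bucket's remaining variables."""
        scope = sorted({v for s, _ in bucket for v in s if v != var})
        table = {}
        for rest in product(*(domains[v] for v in scope)):
            partial = dict(zip(scope, rest))
            best = float("-inf")
            for val in domains[var]:
                partial[var] = val
                best = max(best, sum(fn(*(partial[v] for v in s)) for s, fn in bucket))
            table[rest] = best
        return tuple(scope), lambda *args, t=table: t[args]

    # Each constraint goes into the bucket of its highest variable in the ordering.
    buckets = {v: [] for v in order}
    for scope, fn in constraints:
        buckets[max(scope, key=order.index)].append((scope, fn))

    optimum = 0.0
    for var in reversed(order):                   # process from last variable to first
        if not buckets[var]:
            continue
        new_scope, new_fn = eliminate(var, buckets[var])
        if new_scope:
            buckets[max(new_scope, key=order.index)].append((new_scope, new_fn))
        else:
            optimum += new_fn()                   # constant constraint: add to optimum
    print(optimum)                                # 6.0 for this toy problem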

[Figure: graph of a strictly concave quadratic function with a unique maximum; optimization computes maxima and minima.]