Lexicographic optimization

Lexicographic optimization presumes that the decision-maker prefers even a very small increase in the most important objective, f_1, to even a very large increase in the less important objectives f_2, f_3, and so on. Similarly, the decision-maker prefers even a very small increase in f_2 to even a very large increase in f_3, f_4, and so on. In other words, the decision-maker has lexicographic preferences, ranking the possible solutions according to a lexicographic order of their objective function values.

Lexicographic optimization is sometimes called preemptive optimization,[1] since a small increase in a more important objective value preempts a much larger increase in the less important objective values.

As an example, consider a firm which puts safety above all.

It wants to maximize the safety of its workers and customers.

Subject to attaining the maximum possible safety, it wants to maximize profits.

This firm performs lexicographic optimization, where the most important objective, f_1, is safety, and the second objective, f_2, is profit.
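A minimal sketch of this preference, assuming an illustrative finite set of candidate plans (the plan names and numbers here are made up): Python's built-in tuple comparison is lexicographic, so taking the maximum with a tuple key selects exactly the plan this firm prefers.

```python
# Lexicographic preference over a finite set of candidate solutions.
# Python compares tuples lexicographically, so max() with a tuple key
# picks the lexicographically best candidate.

# Illustrative candidates: (safety, profit) pairs for a firm that
# puts safety above all, and profit second.
candidates = [
    {"name": "plan A", "safety": 0.90, "profit": 120.0},
    {"name": "plan B", "safety": 0.95, "profit": 80.0},
    {"name": "plan C", "safety": 0.95, "profit": 100.0},
]

best = max(candidates, key=lambda s: (s["safety"], s["profit"]))
print(best["name"])  # plan C: highest safety, and among those, highest profit
```

Plan A is rejected despite its higher profit, because safety preempts profit.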

As another example,[2] in project management, when analyzing PERT networks, one often wants to minimize the mean completion time, and subject to this, minimize the variance of the completion time.

A lexicographic maximization problem is often written as:

  lex max  f_1(x), f_2(x), ..., f_n(x)
  subject to  x ∈ X,

where f_1, ..., f_n are the functions to maximize, ordered from the most to the least important; x is the vector of decision variables; and X is the feasible set, that is, the set of possible values of x. A lexicographic minimization problem can be defined analogously.

There are several algorithms for solving lexicographic optimization problems.

A lexicographic optimization problem with n objectives can be solved using a sequence of n single-objective optimization problems, as follows.[1][3]: Alg.1 In the first iteration, we find the maximum feasible value of the most important objective, z_1 = max { f_1(x) : x ∈ X }. In the second iteration, we find the maximum feasible value of the second-most important objective, with the additional constraint that the most important objective must keep its maximum value of z_1; that is, z_2 = max { f_2(x) : x ∈ X, f_1(x) = z_1 }. Continuing this way, after n iterations, the solutions attaining all the values z_1, ..., z_n are exactly the lexicographically optimal solutions.

The sequential algorithm is general: it can be applied whenever a solver for the single-objective problems is available.
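As a minimal sketch of the sequential algorithm, assuming a finite feasible set and exact arithmetic (the candidate set and objective functions below are illustrative):

```python
def lexicographic_max(feasible, objectives):
    """Sequential algorithm: optimize the objectives one at a time,
    keeping each objective at its maximum before moving on.

    feasible   -- finite iterable of candidate solutions
    objectives -- functions ordered from most to least important
    """
    candidates = list(feasible)
    for f in objectives:
        z = max(f(x) for x in candidates)                  # best feasible value of f
        candidates = [x for x in candidates if f(x) == z]  # keep only its maximizers
    return candidates  # all lexicographically optimal solutions

# Illustrative 2-objective example on a small integer grid.
points = [(x, y) for x in range(4) for y in range(4) if x + y <= 4]
opt = lexicographic_max(points, [lambda p: p[0], lambda p: -abs(p[1] - 1)])
print(opt)  # [(3, 1)]: x is maximized first, then y is driven toward 1
```

With a continuous feasible set, the filtering step would instead add the constraint f(x) = z to the next single-objective problem, exactly as described above.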

Linear lexicographic optimization[2] is a special case of lexicographic optimization in which the objectives are linear, and the feasible set is described by linear inequalities. It can be written as:

  lex max  c_1 · x, c_2 · x, ..., c_n · x
  subject to  A x ≤ b,

where c_1, ..., c_n are vectors representing the linear objectives to maximize, ordered from the most to the least important; x is the vector of decision variables; and the feasible set is determined by the matrix A and the vector b.
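For intuition, a lexicographic optimum of such a problem is attained at a vertex of the feasible polytope, so a tiny two-variable instance can be solved by brute-force vertex enumeration. This is only an illustrative sketch with made-up data, not a practical algorithm for linear lexicographic programs:

```python
from itertools import combinations

# Brute-force sketch for a 2-variable linear lexicographic program:
#   lex max (c1 . x, c2 . x)  subject to  A x <= b.
# We enumerate intersections of constraint pairs, keep the feasible
# ones (the vertices), and compare them by their objective tuples.

A = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
b = [3.0, 3.0, 4.0, 0.0, 0.0]   # x <= 3, y <= 3, x + y <= 4, x >= 0, y >= 0
c1 = (1.0, 1.0)                 # most important objective: maximize x + y
c2 = (0.0, 1.0)                 # second objective: maximize y

def solve2x2(r1, r2, b1, b2):
    """Intersection of the two lines r1 . p = b1 and r2 . p = b2."""
    det = r1[0] * r2[1] - r1[1] * r2[0]
    if abs(det) < 1e-12:
        return None             # parallel constraints, no vertex
    x = (b1 * r2[1] - r1[1] * b2) / det
    y = (r1[0] * b2 - b1 * r2[0]) / det
    return (x, y)

def feasible(p):
    return all(row[0] * p[0] + row[1] * p[1] <= bi + 1e-9
               for row, bi in zip(A, b))

vertices = [v for i, j in combinations(range(len(A)), 2)
            if (v := solve2x2(A[i], A[j], b[i], b[j])) and feasible(v)]

dot = lambda c, p: c[0] * p[0] + c[1] * p[1]
best = max(vertices, key=lambda p: (dot(c1, p), dot(c2, p)))
print(best)  # (1.0, 3.0): x + y = 4 is maximal, and among those, y = 3
```

The vertices (3, 1) and (1, 3) tie on the first objective with x + y = 4; the second objective breaks the tie in favor of (1, 3).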

Isermann[2] extended the theory of linear programming duality to lexicographic linear programs, and developed a lexicographic simplex algorithm.

In contrast to the sequential algorithm, this simplex algorithm considers all objective functions simultaneously.

Sherali and Soyster[1] prove that, for any linear lexicographic optimization problem, there exists a set of weights w_1 > w_2 > ... > w_n > 0 such that the set of lexicographically-optimal solutions is identical to the set of solutions to the following single-objective problem:

  maximize  w_1 f_1(x) + w_2 f_2(x) + ... + w_n f_n(x)  over the feasible set,

where f_i denotes the i-th objective. One way to construct such weights is the following.[4] Assume that all objective values are real numbers between 0 and 1, and that the smallest difference between any two possible values is some constant d < 1 (so that values with difference smaller than d are considered equal). If each weight is chosen sufficiently larger than the next one, relative to 1/d, then a gain of d in a more important objective outweighs any possible loss in all less important objectives combined. This guarantees that maximizing the weighted sum yields exactly the lexicographically-optimal solutions.
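A small numerical sketch of this idea. The grid of objective values, the number of objectives, and the geometric weights w_i = (d/2)^(i-1) are illustrative choices made for this example, not the construction from the cited paper:

```python
import itertools
import random

# Sketch: objective values lie on a grid of spacing d inside [0, 1].
# Geometric weights with ratio d/2 decay fast enough that maximizing
# the weighted sum agrees with lexicographic maximization:
# w_i * d > w_{i+1} + w_{i+2} + ... holds for every i.
d = 0.1                                          # smallest distinguishable difference
n = 3                                            # number of objectives
grid = [round(k * d, 10) for k in range(11)]     # possible values 0.0 .. 1.0
weights = [(d / 2) ** i for i in range(n)]       # w_1 > w_2 > ... > 0
wsum = lambda v: sum(w * vi for w, vi in zip(weights, v))

# Compare the two criteria on many random subsets of value vectors.
random.seed(0)
all_vectors = list(itertools.product(grid, repeat=n))
agree = all(
    max(sample) == max(sample, key=wsum)         # tuple max is lexicographic
    for sample in (random.sample(all_vectors, 20) for _ in range(200))
)
print(agree)  # True: weighted-sum and lexicographic maxima coincide
```

The key inequality is that a difference of d in objective i contributes w_i · d to the weighted sum, which exceeds the total weight of all less important objectives.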

Cococcioni, Pappalardo and Sergeyev[5] show that, given a computer that can perform numeric computations with infinitesimals, it is possible to choose weights that are infinitesimals (specifically, the weight of each objective is infinitesimally smaller than the weight of the preceding, more important objective), so that the lexicographic problem can be solved as a single weighted-sum problem.

If the feasible domain is a convex set and the objective functions are strictly concave, then the problem has at most one optimal solution:[3]: Thm.2 if there were two different optimal solutions, their mean would be another feasible solution in which the objective functions attain a higher value, contradicting the optimality of the original solutions.

Suppose the optimal values z_1, ..., z_{n−1} of the first n − 1 objectives are known (for example, from the sequential algorithm). Then, the original lexicographic optimization problem is equivalent to the following:[3]: Thm.4

  maximize  f_n(x)  subject to  x ∈ X  and  f_i(x) ≥ z_i  for all i in {1, ..., n − 1}.

In some cases, the second problem may be easier to solve.
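A quick numerical check of this equivalence on a finite feasible set (the feasible set and objectives below are illustrative): optimizing the objectives sequentially selects the same solutions as maximizing the least important objective subject to keeping the more important objectives at their optimal values.

```python
# Check, on a small finite feasible set, that the sequential algorithm
# and the single constrained problem "maximize f_2 subject to f_1
# keeping its optimal value z_1" select the same solutions.

feasible = [(x, y) for x in range(5) for y in range(5) if x * y <= 6]
f1 = lambda p: p[0] + p[1]          # most important objective
f2 = lambda p: p[1]                 # least important objective

# Sequential algorithm: optimize f1, then f2 among the maximizers of f1.
z1 = max(f1(p) for p in feasible)
seq = [p for p in feasible if f1(p) == z1]
z2 = max(f2(p) for p in seq)
seq = [p for p in seq if f2(p) == z2]

# Equivalent single problem: maximize f2 subject to f1(p) >= z1.
constrained = [p for p in feasible if f1(p) >= z1]
zc = max(f2(p) for p in constrained)
single = [p for p in constrained if f2(p) == zc]

print(seq == single)  # True: both formulations pick the same solutions
```

Here the second formulation is a single constrained problem rather than a sequence of problems, which is what can make it easier to solve.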