Lexmaxmin optimization presumes that the decision-maker would like the smallest objective value to be as high as possible; subject to this, the second-smallest objective should be as high as possible; and so on.
In other words, the decision-maker ranks the possible solutions according to a leximin order of their objective function values.
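For concreteness, comparing two value vectors by the leximin order amounts to sorting each vector in ascending order and comparing the results lexicographically. A minimal sketch in Python (the function name is ours, for illustration):

```python
def leximin_prefers(u, v):
    """Return True if vector u is strictly better than v in the leximin order.

    Both vectors are sorted in ascending order and compared
    lexicographically: the first position where they differ decides.
    """
    su, sv = sorted(u), sorted(v)
    for a, b in zip(su, sv):
        if a != b:
            return a > b
    return False  # equal sorted vectors: neither is strictly better

# (4, 2, 6) beats (6, 1, 6) because its smallest value, 2, exceeds 1.
assert leximin_prefers([4, 2, 6], [6, 1, 6])
# Equal minima (1), so the second-smallest decides: 3 > 2.
assert leximin_prefers([1, 3, 5], [5, 1, 2])
```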
As an example, consider egalitarian social planners, who want to decide on a policy such that the utility of the poorest person will be as high as possible; subject to this, they want to maximize the utility of the second-poorest person; and so on.
Such a planner solves a lexmaxmin problem, where objective function i is the utility of agent i. Algorithms for lexmaxmin optimization (not using this name) were developed for computing the nucleolus of a cooperative game.[1][2] An early application of lexmaxmin was presented by Melvin Dresher[3] in his book on game theory, in the context of taking maximum advantage of the opponent's mistakes in a zero-sum game.
However, in lexicographic optimization, there is a fixed order on the functions, such that f1 is the most important, f2 is the next-most important, and so on; in lexmaxmin, by contrast, all the objectives are equally important.
To present lexmaxmin as a special case of lexicographic optimization, denote by f_[i](x) the i-th smallest objective value attained at x, so that f_[1](x) is the smallest and f_[n](x) is the largest. Then the lexmaxmin problem is equivalent to lexicographically maximizing the vector (f_[1](x), ..., f_[n](x)).
However, if the feasible domain is a convex set and the objectives are concave functions, then the value vectors of all optimal solutions must be the same: if there were two optimal solutions with different value vectors, their mean would be another feasible solution in which, by concavity, each objective attains at least the average of its two values, making the mean leximin-better than both - contradicting the optimality of the original solutions.
The earliest appearance is attributed to Alexander Kopelowitz[1] by Elkind and Pasechnik. Variants of the algorithm appear in several other works.[7]: 20–27 [8][9][5]: Alg.2 [10][11][12][13][14][15] The algorithm keeps a set of objectives that are considered saturated (also called: blocking).
This means that their value cannot be improved without harming lower-valued objectives.
In general, the algorithm works as follows. Start with all objectives unsaturated. In each iteration, solve a program (P1) that maximizes a single value z subject to x being feasible, every saturated objective being kept at its saturation value, and every unsaturated objective being at least z. Then mark as saturated one or more unsaturated objectives that cannot exceed the optimal value z, fix them at that value, and repeat until all objectives are saturated. It remains to explain how we can find new saturated objectives in each iteration.
All interior optimizers have the same set of tight constraints.
Proof: Suppose by contradiction that there are two interior optimizers, x1 and x2, with different sets of tight constraints.
Since the feasible set is convex, the average solution x3 = (x1+x2)/2 is also an optimizer.
Because the constraint functions are concave, the value of each constraint at x3 is at least the average of its values at x1 and x2, so a constraint can be tight at x3 only if it is tight at both x1 and x2. Hence the set of tight constraints at x3 is contained in the intersection of the two tight sets, which, since the two sets differ, is strictly smaller than at least one of them. Therefore, the number of tight constraints in x3 is smaller than in x1 or in x2, contradicting the definition of an interior optimizer.
Therefore, after at most n iterations, all objectives are saturated and a leximin-optimal solution is found.
Instead of finding all saturated objectives in each iteration, we can break out of the inner loop after finding one; the algorithm still stops after at most n iterations, and this may reduce the number of linear programs (P2) that we need to solve.
In some cases, the dual variables are given as a byproduct of solving (P1), for example, when the objectives and constraints are linear and the solver is the simplex algorithm.
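As a concrete illustration of the saturation scheme, the following Python sketch handles the special case of linear objectives f_j(x) = C_j · x over a polytope, solving (P1) with scipy.optimize.linprog and testing saturation with one (P2)-style program per unsaturated objective. All names are ours, and this is a minimal sketch under these assumptions rather than a robust implementation (it does not handle infeasible or unbounded programs):

```python
import numpy as np
from scipy.optimize import linprog

def leximin_saturation(C, A, b, tol=1e-9):
    """Leximin-optimize the linear objectives f_j(x) = C[j] @ x
    over the polytope {x >= 0 : A @ x <= b}, by saturation."""
    n, d = C.shape
    sat = {}                        # saturated objective -> fixed value
    while len(sat) < n:
        free = [j for j in range(n) if j not in sat]

        def rows(z_fixed=None):
            """Constraint rows A_ub @ y <= b_ub over y = (x, z)."""
            A_rows = [np.hstack([A, np.zeros((A.shape[0], 1))])]
            b_rows = [b]
            for j in free:          # C[j] @ x >= z  (or >= z_fixed)
                zcol = 1.0 if z_fixed is None else 0.0
                A_rows.append(np.hstack([-C[j], [zcol]]))
                b_rows.append([0.0 if z_fixed is None else -z_fixed])
            for j, v in sat.items():  # keep saturated objectives at >= v
                A_rows.append(np.hstack([-C[j], [0.0]]))
                b_rows.append([-v])
            return np.vstack(A_rows), np.hstack(b_rows)

        bounds = [(0, None)] * d + [(None, None)]   # x >= 0, z free
        # (P1): maximize z so that every free objective is >= z.
        A1, b1 = rows()
        z = -linprog(np.hstack([np.zeros(d), [-1.0]]),
                     A_ub=A1, b_ub=b1, bounds=bounds).fun
        # (P2), per free objective: if f_j cannot exceed z while the
        # other free objectives stay >= z, then j is saturated at z.
        A2, b2 = rows(z_fixed=z)
        for j in free:
            best_j = -linprog(np.hstack([-C[j], [0.0]]),
                              A_ub=A2, b_ub=b2, bounds=bounds).fun
            if best_j <= z + tol:
                sat[j] = z
    return sat

# Example: f_j(x) = x_j, with x_1 <= 0.2 and x_1 + x_2 + x_3 <= 1.
C = np.eye(3)
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 1.0, 1.0]])
b = np.array([0.2, 1.0])
print(leximin_saturation(C, A, b))   # -> {0: 0.2, 1: 0.4, 2: 0.4}
```

In the example, the first iteration yields z = 0.2 and saturates only the first objective (the other two can still be raised); the second iteration yields 0.4 and saturates the rest, giving the leximin-optimal value vector (0.2, 0.4, 0.4).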
For non-convex problems, there might be no saturated objective, so the algorithm might not stop.
The Ordered Outcomes Algorithm works in arbitrary domains (not necessarily convex).
Lexicographic optimization can be done with a simple sequential algorithm, which solves at most n linear programs.
This problem can be solved iteratively using lexicographic optimization, but the number of constraints in iteration t is C(n,t) -- the number of subsets of size t. This grows exponentially with n. It is possible to reduce the problem to a different problem, in which the number of constraints is polynomial in n. The reduction uses the fact that, for every t, the sum of the t smallest objective values satisfies
$$\sum_{i=1}^{t} f_{[i]}(x) \;=\; \max_{r_t}\Big(t\, r_t - \sum_{j=1}^{n} \max\big(0,\; r_t - f_j(x)\big)\Big),$$
so it can be expressed with one auxiliary variable r_t and n auxiliary variables d_{t,j} ≥ max(0, r_t − f_j(x)), using only linear constraints.
Let us compute the values of the auxiliary variables in the optimal solution: the maximum is attained at r_t = f_[t](x), the t-th smallest objective value, and then d_{t,j} = max(0, f_[t](x) − f_j(x)), which is positive exactly for the objectives whose value is below f_[t](x).
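The identity behind the reduction is easy to check numerically: the maximized expression is piecewise-linear and concave in r_t, so its maximum is attained at a breakpoint, i.e., at one of the objective values. A small Python check (the helper name is ours):

```python
def sum_t_smallest_via_max(v, t):
    # g(r) = t*r - sum_j max(0, r - v_j) is piecewise-linear and concave,
    # so its maximum over all real r is attained at a breakpoint,
    # i.e., at one of the entries of v.
    return max(t * r - sum(max(0.0, r - x) for x in v) for r in v)

v = [5.0, 1.0, 4.0, 2.0]
for t in range(1, len(v) + 1):
    assert sum_t_smallest_via_max(v, t) == sum(sorted(v)[:t])
```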
One advantage of the Ordered Outcomes Algorithm is that it can be used even when the single-problem solver is inaccurate, and returns only approximate solutions.
Specifically, if the single-problem solver approximates the optimal single-problem solution with multiplicative factor α ∈ (0,1] and additive factor ϵ ≥ 0, then the algorithm returns a solution that approximates the leximin-optimal solution with multiplicative factor α²/(1 − α + α²) and additive factor ϵ/(1 − α + α²).
Behringer[4] presented a sequential algorithm for lexmaxmin optimization when the objectives are quasiconcave functions and the feasible set X is a convex set.
If the set of value vectors is discrete and the domain is sufficiently small, then it is possible to use one of the functions representing the leximin order, and maximize it subject to the constraints, using a solver for constraint-satisfaction problems.[21]
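For example, when the objective values are integers, one function representing the leximin order weights small values exponentially: maximizing −Σ_j B^(−f_j(x)) for any base B larger than the number of objectives is equivalent to leximin optimization. The construction below is a sketch of this idea under that integrality assumption (the base bound B > n is our own derivation; the encodings of Bouveret and Lemaître differ):

```python
# For vectors of *integers*, the leximin order can be collapsed into a
# single scalar objective: g(v) = -sum_j B**(-v_j) with any base B > n
# (n = number of objectives). Then g(u) > g(v) exactly when u is
# leximin-better than v, so an ordinary single-objective solver applies.
from fractions import Fraction
from itertools import product

def g(v, B):
    return -sum(B ** -x for x in v)

def leximin_better(u, v):
    # Leximin comparison: sort ascending, compare lexicographically
    # (equivalent to the comparator sketched earlier).
    return sorted(u) > sorted(v)

B = Fraction(4)  # base > n = 3; exact rationals avoid rounding error
for u in product(range(4), repeat=3):
    for v in product(range(4), repeat=3):
        assert (g(u, B) > g(v, B)) == leximin_better(u, v)
```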
Bouveret and Lemaître present five different algorithms for finding leximin-optimal solutions to discrete constraint-satisfaction problems.[21] In their experiments, the best-performing approach was the fourth (based on the ATLEAST constraint), followed by the third (SORT) and the first (LEXIMIN).
Dall'aglio[24] presents an algorithm for computing a leximin-optimal resource allocation.