In computational complexity theory, Polynomial Local Search (PLS) is a complexity class that models the difficulty of finding a locally optimal solution to an optimization problem.
The main characteristics of problems that lie in PLS are that the cost of a solution can be calculated in polynomial time and the neighborhood of a solution can be searched in polynomial time.
Therefore it is possible to verify whether or not a solution is a local optimum in polynomial time.
Furthermore, depending on the problem and the algorithm that is used for solving the problem, it might be faster to find a local optimum instead of a global optimum.
When searching for a local optimum, there are two interesting issues to deal with: first, how to find a local optimum, and second, how long it takes to find one.[1] To answer the latter question, Johnson, Papadimitriou and Yannakakis[2] introduced the complexity class PLS in their paper "How easy is local search?".
As an example, consider an instance of the maximum satisfiability problem: a Boolean formula in conjunctive normal form over a set of Boolean variables. The aim is to find an assignment that maximizes the number of satisfied clauses. A solution for that instance is a bit string that assigns every variable the value 0 or 1, and the cost of a solution is the number of clauses it satisfies.
Intuitively it can be argued that this problem lies in PLS, because the cost of an assignment can be computed in polynomial time and its neighborhood (for example, all assignments obtained by flipping a single variable) can be searched in polynomial time. If we are simply counting the number of satisfied clauses, a local optimum can even be found in polynomial time, since the number of possible cost values is polynomial and every improvement step increases the cost.
However, if we assign each clause a positive integer weight (and seek to locally maximize the sum of weights of satisfied clauses), the problem becomes PLS-complete (below).
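For illustration, the following sketch evaluates the weighted cost of an assignment; the clause encoding and the function name are assumptions made for this example, not part of the original formulation.

    # Illustrative sketch: weighted MAX-SAT cost of an assignment.
    # A clause is a list of literals: +i stands for variable i, -i for its negation.
    def weighted_maxsat_cost(clauses, weights, assignment):
        """Return the total weight of the clauses satisfied by `assignment` (0/1 values)."""
        total = 0
        for clause, weight in zip(clauses, weights):
            if any((assignment[abs(lit) - 1] == 1) == (lit > 0) for lit in clause):
                total += weight
        return total

    # Example: (x1 or not x2) and (x2 or x3) with unit weights and assignment x1=1, x2=0, x3=0.
    print(weighted_maxsat_cost([[1, -2], [2, 3]], [1, 1], [1, 0, 0]))  # prints 1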
Formally, a local search problem L has a set D_L of instances which are encoded using strings over a finite alphabet Σ. For every instance I ∈ D_L there exists a finite solution set F_L(I), and every solution s ∈ F_L(I) has a nonnegative integer cost c_L(s, I) and a neighborhood N(s, I) ⊆ F_L(I).
A local optimum is a solution whose cost is at least as good as the cost of all of its neighbors; in the implicit graph whose edges lead from each solution to its neighbors with strictly better cost, a local optimum is a sink.
A neighborhood in which every local optimum is also a global optimum, that is, a solution with the best possible cost, is called an exact neighborhood.[6][1]
The class PLS is the class containing all problems that can be reduced in polynomial time to the problem Sink-of-DAG[7] (also called Local-Opt[8]): given two integers n and m and two Boolean circuits S : {0,1}^n → {0,1}^n such that S(0^n) ≠ 0^n and V : {0,1}^n → {0, ..., 2^m − 1}, find a vertex x ∈ {0,1}^n such that S(x) ≠ x and either S(S(x)) = S(x) or V(S(x)) ≤ V(x).
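The following is a minimal sketch of how a candidate Sink-of-DAG solution can be verified, assuming S and V are supplied as ordinary Python callables on bit strings (in the formal definition they are Boolean circuits).

    # Sketch: check whether x is a valid Sink-of-DAG solution.
    # S and V are assumed to be callables standing in for the Boolean circuits.
    def is_sink_of_dag_solution(S, V, x):
        """x is a solution if S(x) != x and either S(S(x)) = S(x) or V(S(x)) <= V(x)."""
        y = S(x)
        if y == x:
            return False
        return S(y) == y or V(y) <= V(x)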
Example neighborhood structures for problems with Boolean variables (or bit strings) as solutions include the Flip neighborhood, in which a neighboring solution is obtained by flipping a single bit; example neighborhood structures for problems on graphs include exchanging a pair of vertices across a two-way partition (two such neighborhoods are sketched below). Consider the following computational problem: given some instance I of a local search problem L, find a locally optimal solution, that is, a solution whose cost is not improved by any of its neighbors.
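For illustration, a minimal sketch of two such neighborhood structures, assuming bit strings are represented as tuples and a two-way graph partition as a pair of vertex sets; the function names are assumptions for this example.

    # Sketch: two typical neighborhood structures.
    def flip_neighbors(bits):
        """Flip neighborhood for bit-string solutions: flip exactly one bit."""
        return [bits[:i] + (1 - bits[i],) + bits[i + 1:] for i in range(len(bits))]

    def swap_neighbors(partition):
        """Swap neighborhood for a two-way graph partition: exchange one vertex
        from each side (partition is a pair of disjoint vertex sets)."""
        left, right = partition
        return [(left - {u} | {v}, right - {v} | {u}) for u in left for v in right]

    print(flip_neighbors((1, 0, 1)))      # [(0, 0, 1), (1, 1, 1), (1, 0, 0)]
    print(swap_neighbors(({1, 2}, {3})))  # neighbors obtained by exchanging a vertex across the cut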
Every local search problem can be solved using the following iterative improvement algorithm:[2] compute an initial solution, and as long as the current solution has a neighbor with strictly better cost, move to such a neighbor; when no neighbor improves the cost, the current solution is returned as a local optimum. Unfortunately, it generally takes an exponential number of improvement steps to find a local optimum this way, even though the cost of a solution and its neighborhood can be computed in polynomial time.
The run time of the standard algorithm is pseudo-polynomial: since the cost strictly improves in every step, the number of improvement steps is at most the number of different cost values a solution can take.
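A minimal sketch of this standard iterative improvement procedure, using generic callbacks whose names are assumptions; each step runs in polynomial time, but the number of steps can be exponential in the worst case.

    # Sketch of the standard local search algorithm (maximization version).
    def standard_local_search(initial_solution, neighbors, cost):
        s = initial_solution
        while True:
            better = [t for t in neighbors(s) if cost(t) > cost(s)]
            if not better:
                return s  # no improving neighbor: s is a local optimum
            s = better[0]  # move to some strictly better neighbor

    # Hypothetical wiring with the helpers sketched above:
    # standard_local_search((0, 0, 0), flip_neighbors,
    #                       lambda s: weighted_maxsat_cost([[1, -2], [2, 3]], [1, 1], list(s)))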
A local search problem L1 is PLS-reducible[2] to a local search problem L2 if there are two polynomial-time computable functions f and g such that f maps every instance I of L1 to an instance f(I) of L2, g maps every solution of f(I) back to a solution of I, and whenever a solution is a local optimum of f(I), g maps it to a local optimum of I. It is sufficient to only map the local optima of f(I) back to solutions of I; the remaining solutions can, for example, all be mapped to one fixed solution of I.
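Structurally, such a reduction can be thought of as a pair of polynomial-time maps; the following sketch only fixes that shape (all names are assumptions) and does not implement any concrete reduction.

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class PLSReduction:
        map_instance: Callable[[Any], Any]       # instance of problem 1 -> instance of problem 2
        map_solution: Callable[[Any, Any], Any]  # (instance of problem 1, solution of problem 2) -> solution of problem 1

    def solve_via_reduction(reduction, local_search_for_problem_2, instance_1):
        """If the second argument returns a local optimum of problem 2, the mapped-back
        solution is a local optimum of the original instance of problem 1."""
        instance_2 = reduction.map_instance(instance_1)
        local_opt_2 = local_search_for_problem_2(instance_2)
        return reduction.map_solution(instance_1, local_opt_2)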
The transition graph of an instance I is a directed graph. The nodes represent all elements of the finite set of solutions F_L(I), and the edges point from one solution to a neighbor with strictly better cost. Since the cost strictly improves along every edge, the graph is acyclic. A sink, which is a node with no outgoing edges, is a local optimum.
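For very small instances the transition graph can be examined explicitly; the sketch below enumerates all bit-string solutions and reports the sinks. The brute-force enumeration and names are assumptions for illustration only, since the solution set is exponentially large in general.

    from itertools import product

    # Sketch: list all sinks (local optima) of a toy instance's transition graph.
    def transition_graph_sinks(n, neighbors, cost):
        """A solution is a sink if no neighbor has strictly better cost (maximization)."""
        return [s for s in product((0, 1), repeat=n)
                if not any(cost(t) > cost(s) for t in neighbors(s))]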
Equivalently, a problem is in PLS if three polynomial-time algorithms can be chosen so that the following properties are satisfied: the first computes an initial solution for every instance, the second computes the cost of any given solution, and the third decides whether a given solution is locally optimal and, if it is not, returns a neighbor with strictly better cost. PLS lies between the functional versions of P and NP: FP ⊆ PLS ⊆ FNP.[2]
PLS is also a subclass of TFNP,[13] the class of computational problems in which a solution is guaranteed to exist and can be recognized in polynomial time.
For a problem in PLS, a locally optimal solution is guaranteed to exist because the best-cost vertex of the entire transition graph is a local optimum, and local optimality of a given solution can be verified in polynomial time by computing its neighbors and comparing its cost with the cost of each of them.
It has also been proven that if a PLS problem is NP-hard, then NP = co-NP.
A local search problem L is PLS-complete[2] if L is in PLS and every problem in PLS can be PLS-reduced to L. The optimization version of the circuit problem under the Flip neighborhood structure was the first problem shown to be PLS-complete.
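In the circuit problem under the Flip neighborhood, a solution is an input string for a given Boolean circuit, its cost is the circuit's output read as a binary number, and its neighbors are the strings at Hamming distance one. A minimal sketch of the cost function, assuming the circuit is given as a callable returning a tuple of output bits (the Flip neighborhood corresponds to the flip_neighbors sketch above):

    # Sketch: cost function of the Circuit/Flip problem (circuit assumed to be a callable).
    def circuit_cost(circuit, input_bits):
        """Interpret the circuit's output bits as a binary number."""
        return int("".join(str(b) for b in circuit(input_bits)), 2)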
In the literature, PLS-complete problems are commonly denoted by the pair Problem / Neighborhood structure, e.g. Circuit/Flip. Fearnley, Goldberg, Hollender and Savani[24] proved that a complexity class called CLS is equal to the intersection of PPAD and PLS.