Function problem

In computational complexity theory, a function problem is a computational problem where a single output (of a total function) is expected for every input, but the output is more complex than that of a decision problem.

For function problems, the output is not simply 'yes' or 'no'.

A function problem $P$ is defined by a relation $R$ over strings of an arbitrary alphabet $\Sigma$:

$R \subseteq \Sigma^{*} \times \Sigma^{*}.$

An algorithm solves $P$ if for every input $x$ for which there exists a $y$ satisfying $(x, y) \in R$, the algorithm produces one such $y$, and if no such $y$ exists, it rejects.

A promise function problem is allowed to do anything (thus may not terminate) if no such $y$ exists.

A well-known function problem is given by the Functional Boolean Satisfiability Problem, FSAT for short.

The problem, which is closely related to the SAT decision problem, can be formulated as follows: given a Boolean formula $\varphi$ with variables $x_1, \ldots, x_n$, find an assignment of the $x_i$ to TRUE or FALSE such that $\varphi$ evaluates to TRUE, or report that no such assignment exists. In this case the relation $R$ is given by tuples of suitably encoded Boolean formulas and satisfying assignments. While a SAT algorithm, fed with a formula $\varphi$, only needs to return "unsatisfiable" or "satisfiable", an FSAT algorithm needs to return some satisfying assignment in the latter case.
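
To make the contrast concrete, here is a minimal brute-force sketch in Python; the encoding of formulas as Python callables and the function names are assumptions chosen only for this illustration, not part of any standard definition.

```python
from itertools import product

def brute_force_fsat(formula, variables):
    """Return a satisfying assignment of `formula`, or None if none exists.

    `formula` is assumed to be a callable mapping a dict of variable
    names to booleans onto True/False (a hypothetical encoding used
    only for this example). Runs in exponential time; it merely shows
    what an FSAT algorithm has to output.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment   # FSAT returns a witness ...
    return None                 # ... or reports that none exists

def brute_force_sat(formula, variables):
    """SAT only has to answer 'satisfiable' or 'unsatisfiable'."""
    return brute_force_fsat(formula, variables) is not None

# Example: phi = (x1 OR x2) AND (NOT x1 OR x2)
phi = lambda a: (a["x1"] or a["x2"]) and ((not a["x1"]) or a["x2"])
print(brute_force_sat(phi, ["x1", "x2"]))    # True
print(brute_force_fsat(phi, ["x1", "x2"]))   # e.g. {'x1': False, 'x2': True}
```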

Other notable examples include the travelling salesman problem, which asks for the route taken by the salesman, and the integer factorization problem, which asks for the list of factors.

Consider an arbitrary decision problem $L$ in the class NP. By the definition of NP, each problem instance $x$ for which the answer is 'yes' has a polynomial-size certificate $y$ which serves as a proof for the 'yes' answer. Thus, the set of these tuples $(x, y)$ forms a relation, representing the function problem "given $x$, find a certificate $y$ for $x$". This function problem is called the function variant of $L$; it belongs to the class FNP.

FNP can be thought of as the function class analogue of NP, in that solutions of FNP problems can be efficiently (i.e., in polynomial time in terms of the length of the input) verified, but not necessarily efficiently found.

In contrast, the class FP, which can be thought of as the function class analogue of P, consists of function problems whose solutions can be found in polynomial time.
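
As an illustration of the difference between verifying and finding solutions, the following Python sketch checks a proposed FSAT solution by simply evaluating the formula once, whereas finding a solution may, as far as is known, require exponential time. The encoding of formulas as callables is again an assumption made only for this example.

```python
def verify_fsat(formula, assignment):
    """Polynomial-time verification of a proposed FSAT solution.

    Checking a candidate only requires evaluating the formula once
    under the given assignment; this is the sense in which solutions
    of FNP problems are efficiently verifiable.
    """
    return formula(assignment)

phi = lambda a: (a["x1"] or a["x2"]) and ((not a["x1"]) or a["x2"])
print(verify_fsat(phi, {"x1": False, "x2": True}))   # True: a valid witness
print(verify_fsat(phi, {"x1": True, "x2": False}))   # False: not a witness
```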

Observe that the problem FSAT introduced above can be solved using only polynomially many calls to a subroutine which decides the SAT problem: an algorithm can first ask whether the formula $\varphi$ is satisfiable. After that, the algorithm can fix variable $x_1$ to TRUE and ask again. If the resulting formula is still satisfiable, the algorithm keeps $x_1$ fixed to TRUE and continues with $x_2$; otherwise it concludes that $x_1$ has to be FALSE and continues in the same way.

Thus, FSAT is solvable in polynomial time using an oracle deciding SAT.
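
The self-reduction described above can be sketched as follows in Python; the DIMACS-style clause encoding and the `sat_oracle` black box are assumptions made for this illustration, and the naive oracle stand-in at the end is exponential-time and only serves the demonstration.

```python
from itertools import product

def fsat_with_sat_oracle(clauses, num_vars, sat_oracle):
    """Solve FSAT with polynomially many calls to a SAT decision oracle.

    `clauses` is a CNF formula given as a list of clauses, each clause a
    list of nonzero integers (i stands for variable i, -i for its
    negation). `sat_oracle(clauses)` is a hypothetical black box that
    returns True iff the formula is satisfiable.
    """
    if not sat_oracle(clauses):
        return None                       # unsatisfiable: no assignment exists
    assignment = {}
    for var in range(1, num_vars + 1):
        # Tentatively fix `var` to TRUE by appending the unit clause [var].
        if sat_oracle(clauses + [[var]]):
            assignment[var] = True
            clauses = clauses + [[var]]
        else:
            # The formula stays satisfiable with `var` forced to FALSE.
            assignment[var] = False
            clauses = clauses + [[-var]]
    return assignment                     # 1 + num_vars oracle calls in total

def naive_sat_oracle(clauses):
    """Exponential-time stand-in for a SAT oracle, for demonstration only."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        value = dict(zip(variables, bits))
        if all(any(value[abs(lit)] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

# phi = (x1 OR x2) AND (NOT x1 OR x2)
print(fsat_with_sat_oracle([[1, 2], [-1, 2]], 2, naive_sat_oracle))
# e.g. {1: True, 2: True}
```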

In general, a problem in NP is called self-reducible if its function variant can be solved in polynomial time using an oracle deciding the original problem.

Every NP-complete problem is self-reducible.

It is conjectured that the integer factorization problem is not self-reducible, because deciding whether an integer is prime is in P (easy),[1] while the integer factorization problem is believed to be hard for a classical computer.

There are several (slightly different) notions of self-reducibility.

Function problems can be reduced much like decision problems: given function problems $\Pi_R$ and $\Pi_S$, we say that $\Pi_R$ reduces to $\Pi_S$ if there exist polynomial-time computable functions $f$ and $g$ such that for all instances $x$ of $R$ and possible solutions $y$ of $S$, it holds that (1) if $x$ has an $R$-solution, then $f(x)$ has an $S$-solution, and (2) if $(f(x), y) \in S$, then $(x, g(x, y)) \in R$.

It is therefore possible to define FNP-complete problems analogous to the NP-complete problems: a problem $\Pi_R$ is FNP-complete if every problem in FNP can be reduced to $\Pi_R$.

The complexity class of FNP-complete problems is denoted by FNP-C or FNPC.
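
The shape of such a reduction can be summarized in a short Python skeleton; the names `f`, `g`, and `solve_S` are illustrative interfaces assumed for this sketch, not standard library functions.

```python
def solve_R_via_reduction(x, f, g, solve_S):
    """Skeleton of a reduction between function problems.

    `f` maps an instance x of problem R to an instance f(x) of problem S,
    `solve_S` returns a solution y of its input or None if none exists,
    and `g` maps (x, y) back to a solution of x. Both `f` and `g` are
    required to be computable in polynomial time.
    """
    y = solve_S(f(x))
    if y is None:
        return None      # f(x) has no S-solution, hence x has no R-solution
    return g(x, y)       # (f(x), y) in S implies (x, g(x, y)) in R
```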

The relation $R(x, y)$ used to define function problems has the drawback of being incomplete: not every input $x$ has a counterpart $y$ such that $(x, y) \in R$.

To overcome this problem it is convenient to consider the restriction of function problems to total relations yielding the class TFNP as a subclass of FNP.

This class contains problems such as the computation of pure Nash equilibria in certain strategic games where a solution is guaranteed to exist.

In addition, if TFNP contains any FNP-complete problem, it follows that NP = co-NP.