An FPTAS takes as input an instance of the problem and a parameter ε > 0.
Returning a value and finding a solution with that value are equivalent assuming that the problem possesses self-reducibility.
Importantly, the run-time of an FPTAS is polynomial in the problem size and in 1/ε.
This is in contrast to a general polynomial-time approximation scheme (PTAS).
The run-time of a general PTAS is polynomial in the problem size for each specific ε, but might be exponential in 1/ε.[2]
All problems in FPTAS are fixed-parameter tractable with respect to the standard parameterization.[3]
Any strongly NP-hard optimization problem with a polynomially bounded objective function cannot have an FPTAS unless P=NP.[4]
However, the converse fails: e.g. if P does not equal NP, knapsack with two constraints is not strongly NP-hard, but has no FPTAS even when the optimal objective is polynomially bounded.[5]
Woeginger[6] presented a general scheme for converting a certain class of dynamic programs to an FPTAS.
At each step i, the dynamic program processes the input xi and constructs a set of states Si.
Each state encodes a partial solution to the problem, using inputs x1,...,xi.
In general, this number can be exponential in the size of the input problem: it can be in O(n·V^b), where V is the largest integer that can appear in a state.
If V is in O(X), then the run-time is in O(n·X^b), which is only pseudo-polynomial time, since it is exponential in the problem size, which is in O(log X).
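As an illustration of this blow-up, here is a minimal Python sketch (not taken from the source; the function name and toy inputs are illustrative) of an untrimmed dynamic program whose states are the reachable subset sums:

```python
# A minimal sketch of an untrimmed DP: after step i, the state set holds
# every subset sum achievable from x1,...,xi.  Here b = 1, so each state is
# a single integer, and the number of states is bounded by V (roughly the
# sum of the inputs), which is exponential in the bit-length log(X) of the
# numbers involved.

def untrimmed_subset_sums(inputs):
    states = {0}                      # S_0: the sum of the empty subset
    for x in inputs:                  # step i processes the input x_i
        # Two transition functions: leave x_i out, or add it to the sum.
        states = states | {s + x for s in states}
    return states

S = untrimmed_subset_sums([3, 7, 19, 41, 83])
print(len(S), max(S))                 # up to 2^n sums, all bounded by V
```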
The FPTAS runs similarly to the DP, but in each step it trims the state set into a smaller set Tk that contains exactly one state in each r-box.
The algorithm of the FPTAS follows the same steps as the DP, except that in each step k it first computes the set Uk of successors of the trimmed set Tk-1, and then trims Uk into Tk. The run-time of the FPTAS is polynomial in the total number of possible states in each Tk, which is at most the total number of r-boxes, which is at most R, which is polynomial in n, in log(X), and in 1/ε.
Also, each Uk is a subset of the Sk in the original (untrimmed) DP.
Consider now the state s* in Sn which corresponds to the optimal solution (that is, g(s*)=OPT). One can show by induction that Tn contains a state whose value is within a factor of 1+ε of g(s*)=OPT, which establishes the approximation guarantee.
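The trimming scheme itself can be sketched as follows. Since the precise definitions of the r-boxes, of the trimming factor r, and of the bound R are not reproduced above, the rounding used here (one box per vector of indices ⌊log_r(coordinate)⌋, with r = 1 + ε/(2n)) is an assumption chosen to be consistent with the description, and all function names are illustrative:

```python
import math

# A hedged sketch of the general trimming scheme.  The DP is described by its
# initial states, its set F of transition functions, and its objective g.
# Assumptions (not taken from the source): the r-box of a state is the vector
# of indices floor(log_r(coordinate)), zero coordinates get their own box,
# and the trimming factor is r = 1 + eps/(2n).

def trim(states, r):
    """Keep exactly one representative state in each r-box."""
    boxes = {}
    for s in states:
        key = tuple(-1 if v == 0 else math.floor(math.log(v, r)) for v in s)
        boxes.setdefault(key, s)        # first state seen in a box represents it
    return set(boxes.values())

def fptas(inputs, initial_states, transitions, g, eps):
    n = len(inputs)
    r = 1 + eps / (2 * n)               # assumed trimming factor
    T = set(initial_states)             # T_0 := S_0
    for x in inputs:                    # step k processes the input x_k
        U = {f(s, x) for s in T for f in transitions}   # U_k: successors of T_{k-1}
        T = trim(U, r)                  # T_k: one state per r-box
    return min(g(s) for s in T)         # best surviving value (minimization assumed)

# Example instantiation: partition integers between two bins, minimizing the
# larger bin sum (each state is a pair of bin sums).
transitions = [lambda s, x: (s[0] + x, s[1]),
               lambda s, x: (s[0], s[1] + x)]
print(fptas([5, 7, 11, 13, 17], [(0, 0)], transitions, g=max, eps=0.1))
```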
Multiway number partitioning (equivalently, Identical-machines scheduling) with the goal of minimizing the largest sum is extremely-benevolent.
Here, we have a = 1 (the inputs are integers) and b = the number of bins (which is considered fixed).
Each state is a vector of b integers representing the sums of the b bins (a concrete sketch of this case appears after the list of examples below).
Sum of cubed job completion time on any fixed number of identical or uniform machines - the latter denoted by Qm||ΣCj³.
Sum of weighted completion time on any fixed number of identical or uniform machines - the latter denoted by Qm||ΣwjCj.
Weighted earliness-tardiness about a common due-date on any fixed number of machines: m||Σwj|Cj-d|.
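As referenced above, the following self-contained sketch shows how the scheme specializes to multiway number partitioning with b bins; the r-box rounding and the trimming factor are the same assumptions as in the generic sketch, not details taken from the source:

```python
import math

# A hedged sketch of the extremely-benevolent case: multiway number
# partitioning with b bins, minimizing the largest bin sum.  Each state is a
# vector of b bin sums (a = 1, b fixed).  The r-box rounding and the factor
# r = 1 + eps/(2n) are assumed, as in the generic sketch above.

def partition_fptas(inputs, b, eps):
    n = len(inputs)
    r = 1 + eps / (2 * n)
    states = {(0,) * b}                               # T_0: all bins empty
    for x in inputs:                                  # place x_i into one of the b bins
        successors = {s[:j] + (s[j] + x,) + s[j + 1:]
                      for s in states for j in range(b)}
        boxes = {}                                    # trim: one state per r-box
        for s in successors:
            key = tuple(-1 if v == 0 else math.floor(math.log(v, r)) for v in s)
            boxes.setdefault(key, s)
        states = set(boxes.values())
    return min(max(s) for s in states)                # approximate minimum largest sum

print(partition_fptas([5, 7, 11, 13, 17, 23], b=3, eps=0.2))
```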
Simple dynamic programs add to the above formulation the following components:
The original DP is modified as follows:
A problem is called benevolent if it satisfies the following conditions (which extend conditions 1, 2, 3 above):
For every benevolent problem, the dynamic program can be converted into an FPTAS similarly to the one above, with two changes.
Here are some examples of benevolent problems that have an FPTAS by the above theorem.
The corresponding filter functions are: h1 verifies that the weight, together with the weight of the next input item, is at most the knapsack capacity; h2 always returns True.
This implies that, if state t has a higher weight than state s, the transition functions are allowed not to preserve the proximity between t and s (for example, s may have a successor while t has no corresponding successor).
A similar algorithm was presented earlier by Ibarra and Kim; a sketch of this trimming approach for knapsack appears after the list of examples below.
Batch scheduling for minimizing the weighted number of tardy jobs: 1|batch|ΣwjUj.
Total weighted late work on a single machine: 1||ΣwjYj.[11]
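As referenced above, here is a hedged sketch of such a trimming FPTAS for 0-1 knapsack, in the spirit of the scheme and of Ibarra and Kim's algorithm; keeping, among states of near-equal value, only the state of smallest weight, and the factor r = 1 + ε/(2n), are choices made for illustration rather than details taken from the source:

```python
import math

# A hedged sketch of a trimming FPTAS for 0-1 knapsack.  Each state is a pair
# (weight, value); the filter h1 discards states that would exceed the
# capacity, and h2 imposes no restriction.  Trimming keeps, among states whose
# values fall in the same r-box, the state of smallest weight; r = 1 + eps/(2n)
# is an assumed trimming factor.

def knapsack_fptas(items, capacity, eps):
    """items: list of (weight, value) pairs; returns an approximate maximum value."""
    n = len(items)
    r = 1 + eps / (2 * n)
    states = {(0, 0)}                                  # the empty solution
    for w, v in items:
        # Transitions: skip the item, or take it if h1 allows (the weight
        # together with the next input item stays within the capacity).
        successors = states | {(sw + w, sv + v) for (sw, sv) in states
                               if sw + w <= capacity}
        boxes = {}                                     # trim on the value coordinate
        for sw, sv in successors:
            key = -1 if sv == 0 else math.floor(math.log(sv, r))
            if key not in boxes or sw < boxes[key][0]:
                boxes[key] = (sw, sv)                  # keep the lightest state in the box
        states = set(boxes.values())
    return max(sv for _, sv in states)

print(knapsack_fptas([(3, 4), (4, 5), (5, 6), (2, 3)], capacity=8, eps=0.1))
```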
In these cases, the number of transition functions in F is B, which is exponential in log(X), so the second technical condition is violated.