The development of fixed-point logics has been motivated by descriptive complexity theory and by their relationship to database query languages, in particular to Datalog.
Least fixed-point logic was first studied systematically by Yiannis N. Moschovakis in 1974,[1] and it was introduced to computer scientists in 1979, when Alfred Aho and Jeffrey Ullman suggested fixed-point logic as an expressive database query language.
[2] For a relational signature X, FO[PFP](X) is the set of formulas formed from X using first-order connectives and predicates, second-order variables as well as a partial fixed point operator $\operatorname{PFP}$, used to build formulas of the form $[\operatorname{PFP}_{P,x}\,\varphi](t)$, where $P$ is a second-order variable of some arity $k$, $x$ a $k$-tuple of first-order variables and $t$ a $k$-tuple of terms.
The operator is evaluated by iterating the stage map $P \mapsto \{x : \varphi(P,x)\}$, starting from the empty relation. If this sequence becomes constant, its limit is the partial fixed point; otherwise the partial fixed point is defined to be the empty relation. Since a structure with $n$ elements has only $2^{n^k}$ distinct $k$-ary relations, the sequence must either reach a fixed point or start to loop within $2^{n^k}$ steps, so with a polynomial-space counter we can check if there is a loop or not.
[5] It has been proven that on ordered finite structures, a property is expressible in FO(PFP,X) if and only if it lies in PSPACE.[6]
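To make the iteration concrete, here is a minimal Python sketch (the helper name pfp and the encoding of relations as frozensets of tuples are choices made for this illustration, not part of the cited definitions): it iterates a stage operator on k-ary relations over a finite universe and uses a step counter bounded by the number of possible relations to decide whether the sequence loops.

def pfp(universe, k, phi):
    """Partial fixed point of the stage operator phi on k-ary relations.

    phi takes a frozenset of k-tuples and returns the next stage as a
    frozenset of k-tuples.  If the sequence of stages stabilises, the
    fixed point is returned; if it has not stabilised after 2**(n**k)
    steps it must be looping, and the empty relation is returned.
    """
    max_steps = 2 ** (len(universe) ** k)  # number of distinct k-ary relations
    P = frozenset()                        # stage 0: the empty relation
    for _ in range(max_steps):
        Q = phi(P)
        if Q == P:                         # fixed point reached
            return P
        P = Q
    return frozenset()                     # the sequence loops: no fixed point

A real PSPACE evaluation would store only the current stage and a binary step counter, which is the polynomial-space argument mentioned above.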
Since the iterated predicates involved in calculating the partial fixed point are not in general monotone, the fixed-point may not always exist.
FO(LFP,X), least fixed-point logic, is the set of formulas in FO(PFP,X) where the partial fixed point is taken only over such formulas φ that only contain positive occurrences of P (that is, occurrences preceded by an even number of negations).
This guarantees monotonicity of the fixed-point construction (that is, if the second-order variable is $P$, then $P \subseteq P'$ implies $\{x : \varphi(P,x)\} \subseteq \{x : \varphi(P',x)\}$).
Due to monotonicity, the stages starting from the empty relation form an increasing chain: we only ever add vectors to the truth table of $P$, and since there are only $n^k$ possible vectors, a fixed point is always reached within $n^k$ iterations.
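Continuing the sketch above (the helper lfp is again hypothetical, reusing the same encoding of relations), monotonicity turns the iteration into an increasing chain, which is why at most $n^k$ stages are ever needed:

def lfp(universe, k, phi):
    """Least fixed point of a monotone stage operator phi.

    Starting from the empty relation, monotonicity of phi makes every
    stage a superset of the previous one, so at most len(universe)**k
    tuples can ever be added and the loop below always terminates.
    """
    P = frozenset()
    for _ in range(len(universe) ** k + 1):
        Q = phi(P)
        if Q == P:
            return P
        P = Q
    raise ValueError("stage operator does not appear to be monotone")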
The Immerman–Vardi theorem, proved independently by Immerman[7] and Vardi,[8] states that FO(LFP,X) characterises P on all ordered structures.
The expressivity of least fixed-point logic coincides exactly with the expressivity of the database query language Datalog, showing that, on ordered structures, Datalog can express exactly those queries executable in polynomial time.[9]
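A standard example of this correspondence (given here for illustration; it is not taken from the cited sources) is graph reachability: the Datalog rules path(x, y) :- edge(x, y) and path(x, y) :- edge(x, z), path(z, y) compute exactly the relation defined by the least fixed-point formula

$[\operatorname{LFP}_{P,x,y}\; E(x,y) \lor \exists z\,(E(x,z) \land P(z,y))](u,v),$

in which the second-order variable $P$ occurs only positively.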
Another way to ensure the monotonicity of the fixed-point construction is by only adding new tuples to $P$ at every stage of the iteration, without removing tuples for which $\varphi$ no longer holds.
This inflationary fixed point agrees with the least fixed point wherever the latter is defined.
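In the style of the earlier sketches (the helper ifp is again hypothetical), the inflationary iteration unions each new stage with the previous one, so the chain of stages is increasing by construction and terminates within $n^k$ steps even when $\varphi$ is not monotone.

def ifp(universe, k, phi):
    """Inflationary fixed point: tuples are only ever added, never removed."""
    P = frozenset()
    for _ in range(len(universe) ** k + 1):
        Q = P | phi(P)   # keep all old tuples and add whatever phi derives now
        if Q == P:
            break
        P = Q
    return P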
Although at first glance it seems as if inflationary fixed-point logic should be more expressive than least fixed-point logic since it supports a wider range of fixed-point arguments, in fact, every FO[IFP](X)-formula is equivalent to an FO[LFP](X)-formula.
[10] While all the fixed-point operators introduced so far iterated only on the definition of a single predicate, many computer programs are more naturally thought of as iterating over several predicates simultaneously.
By either increasing the arity of the fixed-point operators or by nesting them, every simultaneous least, inflationary or partial fixed-point can in fact be expressed using the corresponding single-iteration constructions discussed above.[11]
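As a rough sketch of the arity-increasing encoding (assuming, for simplicity, two predicates of equal arity and an inflationary iteration; the helper simultaneous_ifp and the tag values are invented for this example, and in the logic itself the tag would be simulated by elements of the structure), a pair of simultaneously defined predicates can be packed into a single relation with one extra tag component:

def simultaneous_ifp(universe, k, phi_p, phi_q):
    """Simultaneous inflationary fixed point of two k-ary predicates P and Q.

    Each stage operator may consult both predicates.  The result is
    returned as a single (k+1)-ary relation whose first component tags
    the tuples of P and Q, mirroring the arity-increasing encoding.
    """
    P, Q = frozenset(), frozenset()
    for _ in range(2 * len(universe) ** k + 1):
        P2, Q2 = P | phi_p(P, Q), Q | phi_q(P, Q)
        if (P2, Q2) == (P, Q):
            break
        P, Q = P2, Q2
    return frozenset({('P',) + t for t in P} | {('Q',) + t for t in Q})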
Rather than allow induction over arbitrary predicates, transitive closure logic allows only transitive closures to be expressed directly.
FO[TC](X) is the set of formulas formed from X using first-order connectives and predicates, second-order variables as well as a transitive closure operator $\operatorname{TC}$, used to build formulas of the form $[\operatorname{TC}_{u,v}\,\varphi](s,t)$, where $u$ and $v$ are tuples of pairwise distinct first-order variables, $s$ and $t$ are tuples of terms, and the lengths of $u$, $v$, $s$ and $t$ coincide.
TC is defined as follows: let $k$ be a positive integer and let $u$, $v$, $s$, $t$ be vectors of length $k$. Then $[\operatorname{TC}_{u,v}\,\varphi](s,t)$ is true if there exist $n$ vectors of variables $w_1, \ldots, w_n$ such that $w_1 = s$, $w_n = t$, and $\varphi(w_i, w_{i+1})$ is true for all $i < n$. Here, $\varphi$ is a formula written in FO(TC) and $\varphi(w_i, w_{i+1})$ means that the variables $u$ and $v$ in $\varphi$ are replaced by $w_i$ and $w_{i+1}$.
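Semantically, $[\operatorname{TC}_{u,v}\,\varphi](s,t)$ asks whether $t$ is reachable from $s$ in the graph whose vertices are the $k$-tuples over the structure and whose edges are the pairs satisfying $\varphi$. The following Python sketch (the helper tc and the representation of $\varphi$ as a Python predicate on pairs of tuples are assumptions of this illustration) evaluates it by an explicit graph search:

from itertools import product

def tc(universe, k, phi, s, t):
    """Evaluate [TC phi](s, t): is t reachable from s along phi-edges?

    phi(a, b) is a predicate on pairs of k-tuples.  A plain graph search
    suffices for this sketch; note that s counts as reachable from
    itself, matching the case n = 1 of the definition above.
    """
    tuples = list(product(universe, repeat=k))
    reached, frontier = {s}, [s]
    while frontier:
        a = frontier.pop()
        for b in tuples:
            if b not in reached and phi(a, b):
                reached.add(b)
                frontier.append(b)
    return t in reached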
Over ordered structures, FO[TC] characterises the complexity class NL.
[12] This characterisation is a crucial part of Immerman's proof that NL is closed under complement (NL = co-NL).
FO[DTC](X) is the variant of FO[TC](X) in which the transitive closure operator is applied only to deterministic formulas, that is, to formulas $\varphi(u,v)$ such that for every $u$ there is at most one $v$ with $\varphi(u,v)$. Over ordered structures, FO[DTC] characterises the complexity class L.[12]

The fixed-point operations that we defined so far iterate the inductive definitions of the predicates mentioned in the formula indefinitely, until a fixed point is reached.
In implementations, it may be necessary to bound the number of iterations to limit the computation time.
The resulting operators are also of interest from a theoretical point of view since they can also be used to characterise complexity classes.
These bounded operators are built from quantifier blocks: a quantifier block is a list $(Q_1 x_1.\,\psi_1)\ldots(Q_k x_k.\,\psi_k)$ where each $\psi_i$ is a quantifier-free FO-formula and each $Q_i \in \{\forall, \exists\}$. If $Q$ is a quantifier block, then $[Q]^{t(n)}$ denotes the iteration operator, which is defined as $Q$ written $t(n)$ times. We then define FO[$t(n)$] to be the FO-formulae with an iteration operator whose exponent is in the class of functions $t(n)$.
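For instance (an illustration of the definition above, not taken from the cited sources), if $Q = (\forall x.\,\psi_1)(\exists y.\,\psi_2)$, then

$[Q]^2\,\varphi = (\forall x.\,\psi_1)(\exists y.\,\psi_2)(\forall x.\,\psi_1)(\exists y.\,\psi_2)\,\varphi.$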