For propositional logic, systematically applying the resolution rule acts as a decision procedure for formula unsatisfiability, solving the (complement of the) Boolean satisfiability problem.
For first-order logic, resolution can be used as the basis for a semi-algorithm for the unsatisfiability problem of first-order logic, providing a more practical method than one following from Gödel's completeness theorem.
The resolution rule can be traced back to Davis and Putnam (1960);[1] however, their algorithm required trying all ground instances of the given formula.
This source of combinatorial explosion was eliminated in 1965 by John Alan Robinson's syntactical unification algorithm, which allowed one to instantiate the formula during the proof "on demand", only as far as needed to preserve refutation completeness.
[2] The clause produced by a resolution rule is sometimes called a resolvent.
The resolution rule in propositional logic is a single valid inference rule that produces a new clause implied by two clauses containing complementary literals.
When coupled with a complete search algorithm, the resolution rule yields a sound and complete algorithm for deciding the satisfiability of a propositional formula, and, by extension, the validity of a sentence under a set of axioms.
This resolution technique uses proof by contradiction and is based on the fact that any sentence in propositional logic can be transformed into an equivalent sentence in conjunctive normal form.
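To make the refutation procedure concrete, here is a minimal sketch, assuming a deliberately simple encoding: a clause is a frozen set of string literals such as "p" and "~p", and the procedure repeatedly adds resolvents until either the empty clause is derived (the input set is unsatisfiable) or no new clauses appear (it is satisfiable). The function names are illustrative, not part of any standard library.

```python
# Minimal propositional resolution refutation (illustrative sketch only).
# A clause is a frozenset of literals; a literal is a string like "p" or "~p".

def negate(lit):
    return lit[1:] if lit.startswith("~") else "~" + lit

def resolvents(c1, c2):
    """All clauses obtained by resolving c1 and c2 on a complementary pair."""
    out = set()
    for lit in c1:
        if negate(lit) in c2:
            out.add(frozenset((c1 - {lit}) | (c2 - {negate(lit)})))
    return out

def is_unsatisfiable(clauses):
    """Saturate the clause set with resolvents; True iff the empty clause appears."""
    clauses = set(clauses)
    while True:
        new = set()
        for a in clauses:
            for b in clauses:
                for r in resolvents(a, b):
                    if not r:              # empty clause derived: contradiction
                        return True
                    new.add(r)
        if new <= clauses:                 # saturated without a contradiction
            return False
        clauses |= new

# Example: {p or q, ~p or q, ~q} is unsatisfiable.
print(is_unsatisfiable({frozenset({"p", "q"}),
                        frozenset({"~p", "q"}),
                        frozenset({"~q"})}))    # prints True
```

In this sketch the evolving derivation is kept simply as a set of clauses.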
Lists, trees, and directed acyclic graphs are other possible and common alternatives.
Tree representations are more faithful to the fact that the resolution rule is binary.
Together with a sequent notation for clauses, a tree representation also makes it clear that the resolution rule is a special case of the cut rule, restricted to atomic cut formulas.
However, tree representations are not as compact as set or list representations, because they explicitly show redundant subderivations of clauses that are used more than once in the derivation of the empty clause.
Graph representations can be as compact in the number of clauses as list representations and they also store structural information regarding which clauses were resolved to derive each resolvent.
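As an illustration of that last point, a directed-acyclic-graph representation might store, for each derived clause, references to the two parent clauses it was resolved from, so that a clause used several times in the refutation is recorded only once. The node structure below is an assumption made for this sketch, not a standard format.

```python
# Illustrative DAG representation of a resolution derivation: each node keeps
# its clause and, for derived clauses, the two parent nodes it was resolved from.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass(frozen=True)
class Node:
    clause: frozenset                                   # clause at this node
    parents: Optional[Tuple["Node", "Node"]] = None     # None for input clauses

# Input clauses (the leaves of the DAG).
c1 = Node(frozenset({"p", "q"}))
c2 = Node(frozenset({"~p", "q"}))
c3 = Node(frozenset({"~q"}))

# Derived clauses point back to their parents; a shared clause such as the
# resolvent q is stored once, even if it is used in several later steps.
q_node = Node(frozenset({"q"}), (c1, c2))
empty  = Node(frozenset(), (q_node, c3))
```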
To understand how resolution works, consider the following example syllogism of term logic: Or, more generally: To recast the reasoning using the resolution technique, first the clauses must be converted to conjunctive normal form (CNF).
Unifying the two produces the substitution Discarding the unified predicates, and applying this substitution to the remaining predicates (just Q(X), in this case), produces the conclusion: For another example, consider the syllogistic form Or more generally, In CNF, the antecedents become: (The variable in the second clause was renamed to make it clear that variables in different clauses are distinct.)
Substituting this into the remaining clauses and combining them gives the conclusion: The resolution rule, as defined by Robinson, also incorporated factoring, which unifies two literals in the same clause, before or during the application of resolution as defined above.
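To give a concrete flavour of the unification step used in such examples, the following is a minimal sketch under an ad-hoc encoding: a variable is a capitalized string, a constant is a lowercase string, compound terms and atoms are tuples beginning with their symbol, and a negative literal is a tuple ("~", atom). Occurs checks, factoring, and variable renaming are deliberately omitted, and none of the names below come from a standard library.

```python
# Illustrative syntactic unification and one binary resolution step.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def unify(a, b, subst=None):
    """Return a most general unifier extending subst, or None (no occurs check)."""
    subst = dict(subst or {})
    stack = [(a, b)]
    while stack:
        x, y = stack.pop()
        while is_var(x) and x in subst:        # follow existing bindings
            x = subst[x]
        while is_var(y) and y in subst:
            y = subst[y]
        if x == y:
            continue
        if is_var(x):
            subst[x] = y
        elif is_var(y):
            subst[y] = x
        elif isinstance(x, tuple) and isinstance(y, tuple) \
                and x[0] == y[0] and len(x) == len(y):
            stack.extend(zip(x[1:], y[1:]))
        else:
            return None
    return subst

def substitute(t, subst):
    """Apply a substitution to a term, atom, or literal."""
    if is_var(t):
        return substitute(subst[t], subst) if t in subst else t
    if isinstance(t, tuple):
        return (t[0],) + tuple(substitute(arg, subst) for arg in t[1:])
    return t

def complement(lit):
    return lit[1] if lit[0] == "~" else ("~", lit)

def resolve(c1, c2):
    """All binary resolvents of two clauses given as lists of literals."""
    out = []
    for i, l1 in enumerate(c1):
        for j, l2 in enumerate(c2):
            subst = unify(l1, complement(l2))
            if subst is not None:
                out.append([substitute(l, subst)
                            for k, l in enumerate(c1) if k != i] +
                           [substitute(l, subst)
                            for k, l in enumerate(c2) if k != j])
    return out

# not P(X) or Q(X)   resolved with   P(a)   yields   Q(a)
print(resolve([("~", ("P", "X")), ("Q", "X")], [("P", "a")]))  # [[('Q', 'a')]]
```

Here the clause ¬P(X) ∨ Q(X) is resolved with P(a): unification instantiates X to a, and the remaining literal Q(a) is the resolvent, mirroring the role played by Q(X) in the discussion above.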
Using factoring, it can be obtained, for example, as follows:[7] Generalizations of the above resolution rule have been devised that do not require the originating formulas to be in clausal normal form.
[8][9][10][11][12][13] These techniques are useful mainly in interactive theorem proving where it is important to preserve human readability of intermediate result formulas.
In addition, they avoid the combinatorial explosion incurred during transformation to clausal form,[10]: 98 and they sometimes save resolution steps.
[13]: 425 For propositional logic, Murray[9]: 18 and Manna and Waldinger[10]: 98 use the rule where
In order to prevent generating useless trivial resolvents, the rule shall be applied only when
As in Murray's approach, appropriate simplifying transformations are to be applied to the resolvent.
[12]: 395 Moreover, it does not introduce new binary connectives, thus avoiding a tendency towards clausal form in repeated resolution.
[12]: 398 As an example, starting from the user-given assumptions, the Murray rule can be used as follows to infer a contradiction:[15] For the same purpose, the Traugott rule can be used as follows:[12]: 397 From a comparison of both deductions, the following issues can be seen: For first-order predicate logic, Murray's rule is generalized to allow distinct, but unifiable, subformulas
[citation needed] Traugott's rule is generalized to allow several pairwise distinct subformulas
to the parent formulas, thus making the propositional version applicable.
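To make the propositional, non-clausal style concrete, the sketch below uses one common formulation of such a rule: two formulas F and G sharing a subformula p are resolved into F[p := ⊤] ∨ G[p := ⊥], after which the truth constants are simplified away. The formula encoding, the helper names, and the omission of the polarity side conditions and of the first-order generalization are all simplifying assumptions of this sketch.

```python
# Illustrative propositional non-clausal resolution (one common formulation):
# resolve F and G on a shared subformula p by forming F[p := True] or
# G[p := False], then simplifying away the truth constants.
# Encoding (an assumption of this sketch): a formula is True, False, a variable
# name ("p"), ("not", f), ("and", f, g), or ("or", f, g).

def replace(f, p, value):
    """Replace every occurrence of the subformula p in f by a truth constant."""
    if f == p:
        return value
    if isinstance(f, tuple):
        return (f[0],) + tuple(replace(arg, p, value) for arg in f[1:])
    return f

def simplify(f):
    """Remove the truth constants introduced by the replacement."""
    if not isinstance(f, tuple):
        return f
    args = [simplify(a) for a in f[1:]]
    if f[0] == "not":
        a = args[0]
        return False if a is True else True if a is False else ("not", a)
    a, b = args
    if f[0] == "and":
        if a is False or b is False:
            return False
        if a is True:
            return b
        if b is True:
            return a
        return ("and", a, b)
    if f[0] == "or":
        if a is True or b is True:
            return True
        if a is False:
            return b
        if b is False:
            return a
        return ("or", a, b)
    return (f[0], a, b)

def non_clausal_resolve(F, G, p):
    """Resolvent of F and G on the shared subformula p: F[p:=True] or G[p:=False]."""
    return simplify(("or", replace(F, p, True), replace(G, p, False)))

# F encodes p -> b as (not p) or b; G encodes a -> p as (not a) or p.
F = ("or", ("not", "p"), "b")
G = ("or", ("not", "a"), "p")
print(non_clausal_resolve(F, G, "p"))   # ('or', 'b', ('not', 'a')), i.e. b or not a
```

On clauses this reduces to ordinary resolution, as the example at the end of the sketch shows.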
[16] Paramodulation is a related technique for reasoning on sets of clauses where the predicate symbol is equality.
It generates all "equal" versions of clauses, except reflexive identities.
The paramodulation operation takes a positive "from" clause, which must contain an equality literal. It then searches for an "into" clause with a subterm that unifies with one side of the equality.
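The replacement step implied by this description can be sketched for the ground case, with syntactic matching standing in for unification; the tuple encoding of terms and the helper names below are assumptions of this sketch rather than features of any particular prover.

```python
# Illustrative ground paramodulation step (syntactic matching instead of full
# unification, to keep the sketch short). A term or atom is a tuple
# ("symbol", arg1, ...), a constant is a plain string, a positive equality
# literal is ("=", s, t), and a negative literal is ("~", atom).

def replace_subterm(term, old, new):
    """Replace every occurrence of the subterm old in term by new."""
    if term == old:
        return new
    if isinstance(term, tuple):
        return (term[0],) + tuple(replace_subterm(a, old, new) for a in term[1:])
    return term

def paramodulate(from_clause, into_clause):
    """All clauses obtained by rewriting the into clause with an equality
    literal of the from clause (used in either direction)."""
    results = []
    for eq in from_clause:
        if isinstance(eq, tuple) and eq[0] == "=":        # positive equality literal
            rest = [lit for lit in from_clause if lit != eq]
            for s, t in ((eq[1], eq[2]), (eq[2], eq[1])):
                rewritten = [replace_subterm(lit, s, t) for lit in into_clause]
                if rewritten != list(into_clause):         # some subterm was rewritten
                    results.append(rewritten + rest)
    return results

# Example: from clause {a = b}, into clause {P(f(a))} yields {P(f(b))}.
print(paramodulate([("=", "a", "b")], [("P", ("f", "a"))]))   # [[('P', ('f', 'b'))]]
```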