Natural deduction

This contrasts with Hilbert-style systems, which instead use axioms as much as possible to express the logical laws of deductive reasoning.

Natural deduction in its modern form was independently proposed by the German mathematician Gerhard Gentzen in 1933, in a dissertation delivered to the faculty of mathematical sciences of the University of Göttingen; a similar system was proposed independently by the Polish logician Stanisław Jaśkowski.

The term natural deduction (or rather, its German equivalent, natürliches Schließen) was coined in that paper: "Ich wollte nun zunächst einmal einen Formalismus aufstellen, der dem wirklichen Schließen möglichst nahe kommt." ("First I wished to construct a formalism that comes as close as possible to actual reasoning.")

Gentzen was unable to prove the Hauptsatz directly for natural deduction; for this reason he introduced his alternative system, the sequent calculus, for which he proved the Hauptsatz both for classical and intuitionistic logic.

In natural deduction, a proposition is deduced from a collection of premises by applying inference rules repeatedly.
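For instance, using the standard Gentzen-style rule labels ∧E and ∧I (an illustrative sketch, not necessarily the exact labels used elsewhere in this article), the proposition B ∧ A can be deduced from the single premise A ∧ B:

```latex
% A minimal derivation: from the premise A ∧ B, conclude B ∧ A
% by two conjunction eliminations followed by a conjunction introduction.
\[
\frac{\dfrac{A \wedge B}{B}\ {\wedge}E_2
      \qquad
      \dfrac{A \wedge B}{A}\ {\wedge}E_1}
     {B \wedge A}\ {\wedge}I
\]
```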

The system presented in this article is a minor variation of Gentzen's or Prawitz's formulation, but with a closer adherence to Martin-Löf's description of logical judgments and connectives.

Natural deduction has been presented in a large variety of notation styles,[7] which can make proofs difficult to recognize for readers unfamiliar with the particular style used.

This section explains the historical evolution of notation styles, most of which cannot be shown because no illustrations of them are available under a public copyright license; the reader is referred to the SEP and IEP for pictures.

Many textbooks use Suppes–Lemmon notation,[7] so this article also presents that style, although at present only for propositional logic; the rest of the coverage is given only in Gentzen style.

The annotation specifies which rule of proof was applied, and to which earlier lines, to yield the current sentence.

Disjunctive Syllogism (DS) can be used as an easier alternative to proper ∨-elimination,[17] and modus tollendo tollens (MTT) and double negation (DN) are commonly given rules,[22] although they are not primitive.
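As a minimal sketch of the notation (the particular proof shown is illustrative): each line carries an assumption set, a line number, the sentence derived, and an annotation naming the rule and the earlier lines used. A proof of Q from the premises P ∨ Q and ¬P by DS might be written:

```latex
% A short Suppes–Lemmon style proof of Q from P ∨ Q and ¬P
% using Disjunctive Syllogism (DS).
\begin{tabular}{llll}
1    & (1) & $P \vee Q$ & Premise \\
2    & (2) & $\neg P$   & Premise \\
1, 2 & (3) & $Q$        & 1, 2 DS \\
\end{tabular}
```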

However, there are local notions of consistency and completeness that are purely syntactic checks on the inference rules, and require no appeals to models.

Taking conjunction as an example again, these notions correspond exactly to β-reduction (beta reduction) and η-conversion (eta conversion) in the lambda calculus, using the Curry–Howard isomorphism.
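Under this reading, the conjunction case can be made concrete in a functional language. The following Haskell sketch (with illustrative, hypothetical function names) shows the β-reduction that witnesses the local consistency check and the η-expansion that witnesses the local completeness check for pairs:

```haskell
-- Sketch: proofs of a conjunction A ∧ B are pairs (a, b);
-- the two eliminations are fst and snd.

-- Local consistency / β: introducing a pair and immediately
-- eliminating it reduces to the component that was put in:
--   fst (a, b)  ~>  a
betaRedex :: a -> b -> a
betaRedex x y = fst (x, y)  -- β-reduces to just x

-- Local completeness / η: any proof p of a conjunction can be
-- expanded into an introduction applied to the two eliminations:
--   p  ~>  (fst p, snd p)
etaExpand :: (a, b) -> (a, b)
etaExpand p = (fst p, snd p)  -- η-equal to p itself
```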

The third rule of formation effectively defines an atomic formula, as in first-order logic and in model theory.

The quantifiers have as their domain of quantification the very same sort of propositions, as reflected in the formation rules (a sketch is given below). A discussion of the introduction and elimination forms for higher-order logic is beyond the scope of this article.
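For illustration only, the second-order formation rule alluded to above might be rendered as follows (a hedged sketch; the judgment "A prop" and the rule label are assumptions, not the article's official notation):

```latex
% Sketch of a second-order formation rule: the quantifier
% ranges over propositions themselves.
\[
\frac{\Gamma,\, p\ \mathit{prop} \;\vdash\; A\ \mathit{prop}}
     {\Gamma \;\vdash\; \forall p.\, A\ \mathit{prop}}\ {\forall}F
\]
```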

For implication, the introduction form localises or binds the hypothesis, written using a λ; this corresponds to the discharged label.
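Concretely, in a standard proof-term presentation (sketched here; the exact notation may differ), implication introduction discharges the hypothesis u, and the proof term binds the corresponding variable with a λ:

```latex
% Implication introduction with proof terms: discharging the
% hypothesis u corresponds to binding the variable u with a λ.
\[
\frac{\Gamma,\, u : A \;\vdash\; e : B}
     {\Gamma \;\vdash\; \lambda u.\, e \;:\; A \supset B}\ {\supset}I^{u}
\]
```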

The logical connectives are also given a different reading: conjunction is viewed as product (×), implication as the function arrow (→), etc.
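This reading can be sketched directly in Haskell (illustrative function names; inhabiting a type amounts to proving the corresponding proposition):

```haskell
-- Types as propositions: pairs for conjunction, the function
-- arrow for implication, Either for disjunction.

-- A ∧ B ⊃ B ∧ A: conjunction as product.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- A ⊃ (B ⊃ A): implication as the function arrow.
weaken :: a -> (b -> a)
weaken x = \_ -> x

-- A ∨ B ⊃ B ∨ A: disjunction as a sum type; pattern matching
-- plays the role of ∨-elimination (case analysis).
orComm :: Either a b -> Either b a
orComm (Left x)  = Right x
orComm (Right y) = Left y
```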

Normalisability is a rare feature of non-trivial type theories, which is a big departure from the logical world.

Various combinations of dependency and polymorphism have been considered in the literature, the most famous being the lambda cube of Henk Barendregt.

Popular modern logical frameworks such as the calculus of constructions and LF are based on higher-order dependent type theory, with various trade-offs in terms of decidability and expressive power.

Gentzen's original treatment of excluded middle prescribed one of the following three (equivalent) formulations, which were already present in analogous forms in the systems of Hilbert and Heyting (XM3 is merely XM2 expressed in terms of ⊥E). This treatment of excluded middle, in addition to being objectionable from a purist's standpoint, introduces additional complications in the definition of normal forms.
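A plausible reconstruction of the three formulations (hedged; the labels follow the text, and u marks the discharged hypothesis): XM1 takes excluded middle as an axiom, XM2 concludes A from a derivation of an arbitrary p from ¬A, and XM3 concludes A from a derivation of ⊥ from ¬A:

```latex
% Sketch (reconstruction) of Gentzen's three classical formulations.
\[
\frac{}{A \vee \neg A\ \mathit{true}}\ \mathit{XM}_1
\qquad
\frac{\begin{matrix}[\neg A\ \mathit{true}]^{u} \\ \vdots \\ p\ \mathit{true}\end{matrix}}
     {A\ \mathit{true}}\ \mathit{XM}_2^{\,u}
\qquad
\frac{\begin{matrix}[\neg A\ \mathit{true}]^{u} \\ \vdots \\ \bot\ \mathit{true}\end{matrix}}
     {A\ \mathit{true}}\ \mathit{XM}_3^{\,u}
\]
```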

These were first described, for the alethic modal logics S4 and S5, in a natural deduction style by Prawitz in 1965,[5] and have since accumulated a large body of related work.

The introduction and elimination forms for the modality are then as sketched below; the modal hypotheses have their own version of the hypothesis rule and substitution theorem.
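In the judgmental style of Pfenning and Davies (an assumed presentation, not necessarily this article's exact notation), with Δ collecting modal (valid) hypotheses and Γ collecting ordinary (true) ones, the rules for □ may be sketched as:

```latex
% Sketch of □ introduction and elimination in judgmental S4 style:
% □I requires the premise to depend on modal hypotheses only.
\[
\frac{\Delta;\, \cdot \;\vdash\; A\ \mathit{true}}
     {\Delta;\, \Gamma \;\vdash\; \Box A\ \mathit{true}}\ \Box I
\qquad
\frac{\Delta;\, \Gamma \;\vdash\; \Box A\ \mathit{true}
      \qquad
      \Delta,\, A\ \mathit{valid};\, \Gamma \;\vdash\; C\ \mathit{true}}
     {\Delta;\, \Gamma \;\vdash\; C\ \mathit{true}}\ \Box E
\]
```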

Labels also allow the naming of worlds in Kripke semantics; Simpson (1993) presents an influential technique for converting frame conditions of modal logics in Kripke semantics into inference rules in a natural deduction formalisation of hybrid logic.
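For instance, in a labelled system the accessibility relation appears explicitly in the rules; a typical elimination form for □ (a sketch, with x and y ranging over world labels and R the accessibility relation) is:

```latex
% Sketch of a labelled □ elimination: if A holds necessarily at
% world x and y is accessible from x, then A holds at y.
\[
\frac{x : \Box A \qquad x\,R\,y}{y : A}\ \Box E
\]
```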

The sequent calculus is the chief alternative to natural deduction as a foundation of mathematical logic.

To address such difficulties, Gentzen in 1935 proposed his sequent calculus, though he initially intended it as a technical device for clarifying the consistency of predicate logic.

Kleene, in his seminal 1952 book Introduction to Metamathematics, gave the first formulation of the sequent calculus in the modern style.

These initial rules are superficially similar to the hypothesis rule of natural deduction, but in the sequent calculus they describe a transposition or a handshake between a left and a right proposition (see the sketch below). The correspondence between the sequent calculus and natural deduction is a pair of soundness and completeness theorems, both of which are provable by an inductive argument.
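The initial sequent (the "handshake") and the cut rule may be sketched as follows (standard formulations, assuming multiple-conclusion sequents Γ ⊢ Δ):

```latex
% Sketch of the initial rule and the cut rule of the sequent calculus.
\[
\frac{}{\Gamma,\, A \;\vdash\; A,\, \Delta}\ \mathit{init}
\qquad
\frac{\Gamma \;\vdash\; A,\, \Delta
      \qquad
      \Gamma,\, A \;\vdash\; \Delta}
     {\Gamma \;\vdash\; \Delta}\ \mathit{cut}
\]
```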

These theorems show that the sequent calculus does not change the notion of truth: the same collection of propositions remains true.

Proof theorists often prefer to work with cut-free sequent calculus formulations because cut-free proofs enjoy desirable properties such as the subformula property.

Summary of first-order system