Linear logic

Proof-theoretically, it derives from an analysis of classical sequent calculus in which uses of the structural rules of contraction and weakening are carefully controlled.

Operationally, this means that logical deduction is no longer merely about an ever-expanding collection of persistent "truths", but also a way of manipulating resources that cannot always be duplicated or thrown away at will.

We use the letters Γ and Δ to range over lists of propositions A₁, ..., Aₙ, also called contexts.

Girard describes classical linear logic using only one-sided sequents (where the left-hand context is empty), and we follow here that more economical presentation.

[4] First, to formalize the fact that we do not care about the order of propositions inside a context, we add the structural rule of exchange, which permits permuting the propositions of a context. Note that we do not add the structural rules of weakening and contraction, because we do care about the absence of propositions in a sequent, and about the number of copies present.
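The exchange rule can be sketched in one-sided sequent notation as follows:

```latex
% Exchange: the order of propositions in a context is immaterial.
\frac{\vdash \Gamma, A, B, \Delta}{\vdash \Gamma, B, A, \Delta}\;\textsc{exchange}
```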

Ultimately, this canonical form property (which can be divided into the completeness of atomic initial sequents and the cut-elimination theorem, inducing a notion of analytic proof) lies behind the applications of linear logic in computer science, since it allows the logic to be used in proof search and as a resource-aware lambda-calculus.

In a one-sided presentation, one instead makes use of negation: the right-rules for a connective (say ⅋) effectively play the role of left-rules for its dual (⊗).
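The De Morgan dualities underlying this one-sided presentation can be written out explicitly; a sketch in LaTeX (the symbols \parr, \with, \oc, \wn for ⅋, &, !, ? assume the cmll package):

```latex
% Linear negation is involutive and exchanges each connective with its dual.
\begin{align*}
(A \otimes B)^\perp &\equiv A^\perp \parr B^\perp  &  (A \parr B)^\perp &\equiv A^\perp \otimes B^\perp \\
(A \oplus B)^\perp  &\equiv A^\perp \with B^\perp  &  (A \with B)^\perp &\equiv A^\perp \oplus B^\perp \\
(\oc A)^\perp       &\equiv \wn (A^\perp)          &  (\wn A)^\perp     &\equiv \oc (A^\perp)
\end{align*}
```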

But now we can explain the basis for the multiplicative/additive distinction in the rules for the two different versions of conjunction: for the multiplicative connective (⊗), the context of the conclusion (Γ, Δ) is split up between the premises, whereas for the additive connective (&) the context of the conclusion (Γ) is carried whole into both premises.
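This context discipline can be sketched as a pair of one-sided sequent rules (using \with from the cmll package for &):

```latex
% Multiplicative: the conclusion's context Γ, Δ is split between the premises.
\frac{\vdash \Gamma, A \qquad \vdash \Delta, B}{\vdash \Gamma, \Delta, A \otimes B}\;\otimes
\qquad\qquad
% Additive: the conclusion's context Γ is carried whole into both premises.
\frac{\vdash \Gamma, A \qquad \vdash \Gamma, B}{\vdash \Gamma, A \with B}\;\with
```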

In addition to the De Morgan dualities described above, some important equivalences in linear logic include the distributivity of the multiplicatives over the additives: A ⊗ (B ⊕ C) ≣ (A ⊗ B) ⊕ (A ⊗ C) and A ⅋ (B & C) ≣ (A ⅋ B) & (A ⅋ C) (and symmetrically in the other argument, by commutativity). By definition of A ⊸ B as A⊥ ⅋ B, the distributivity of ⅋ over & also gives: A ⊸ (B & C) ≣ (A ⊸ B) & (A ⊸ C) and (A ⊕ B) ⊸ C ≣ (A ⊸ C) & (B ⊸ C). (Here A ≣ B is (A ⊸ B) & (B ⊸ A).)
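The derived law for ⊸ over & can be checked step by step from the definition; a sketch (again assuming cmll's \parr and \with):

```latex
\begin{align*}
A \multimap (B \with C)
  &\equiv A^\perp \parr (B \with C)                  && \text{definition of } \multimap \\
  &\equiv (A^\perp \parr B) \with (A^\perp \parr C)  && \parr \text{ distributes over } \with \\
  &\equiv (A \multimap B) \with (A \multimap C)      && \text{definition of } \multimap
\end{align*}
```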

[7] The idea is that exponentials allow us to use a formula as many times as we need, which is always possible in classical and intuitionistic logic.
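In a standard one-sided presentation, the structural rules return in controlled form as rules for ?-formulas, with promotion introducing ! (a sketch, with cmll's \wn and \oc for ? and !):

```latex
\frac{\vdash \Gamma}{\vdash \Gamma, \wn A}\;\textit{weakening}
\qquad
\frac{\vdash \Gamma, \wn A, \wn A}{\vdash \Gamma, \wn A}\;\textit{contraction}
\qquad
\frac{\vdash \Gamma, A}{\vdash \Gamma, \wn A}\;\textit{dereliction}
\qquad
\frac{\vdash \wn\Gamma, A}{\vdash \wn\Gamma, \oc A}\;\textit{promotion}
```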

However, the rejection of weakening and contraction allows linear logic to avoid this kind of spurious reasoning even with the "naive" rule.

Running with the example of the vending machine, consider the "resource interpretations" of the other multiplicative and additive connectives.

(The exponentials provide the means to combine this resource interpretation with the usual notion of persistent logical truth.)

Additive conjunction (A & B) represents alternative occurrence of resources, the choice of which the consumer controls.

If in the vending machine there is a packet of chips, a candy bar, and a can of soft drink, each costing one dollar, then for that price you can buy exactly one of these products: $1 ⊸ (candy & chips & drink).

We do not write $1 ⊸ (candy ⊗ chips ⊗ drink), which would imply that one dollar suffices for buying all three products together.

Additive disjunction (A ⊕ B) represents alternative occurrence of resources, the choice of which the machine controls.
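The resource reading of the vending machine can be simulated with a multiset of resources; this is only a toy illustration (the names `spend`, `wallet`, and so on are not from the text), in which consuming a dollar removes it from the multiset, so it cannot be spent twice:

```python
# A toy multiset model of the vending machine's resource reading.
from collections import Counter

def spend(resources: Counter, cost: Counter, product: str) -> Counter:
    """Consume `cost` from `resources` and produce `product` (A ⊸ B as a state change)."""
    if any(resources[r] < n for r, n in cost.items()):
        # No weakening/contraction: resources cannot be conjured or duplicated.
        raise ValueError("insufficient resources")
    out = resources - cost          # the dollar is used up, not merely referenced
    out[product] += 1
    return out

wallet = Counter({"$1": 2})                           # $1 ⊗ $1
wallet = spend(wallet, Counter({"$1": 1}), "candy")   # consumer picks one option from the & menu
wallet = spend(wallet, Counter({"$1": 1}), "chips")
print(sorted(wallet.elements()))                      # ['candy', 'chips']: both dollars consumed
# A third purchase would raise ValueError: no dollars remain.
```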

[8] When considering fragments of CLL, the decision problem has varying complexity. Many variations of linear logic arise by further tinkering with the structural rules. Different intuitionistic variants of linear logic have been considered.

When based on a single-conclusion sequent calculus presentation, as in ILL (Intuitionistic Linear Logic), the connectives ⅋, ⊥, and ? are absent, and linear implication ⊸ is treated as a primitive connective.