Reduction strategy

[1] Some authors use the term to refer to an evaluation strategy.

[2][3] Formally, for an abstract rewriting system (A, →), a reduction strategy is a sub-relation →S of the transitive closure →+ that has the same normal forms as →.

[5] In a term rewriting system a rewriting strategy specifies, out of all the reducible subterms (redexes), which one should be reduced (contracted) within a term.

One-step strategies for term rewriting include leftmost-innermost and leftmost-outermost reduction, which contract the leftmost redex that, respectively, contains no other redex or is contained in no other redex.[5] Many-step strategies include parallel-innermost, parallel-outermost, and Gross-Knuth (full substitution) reduction, which contract several redexes in a single step.[5] Parallel outermost and Gross-Knuth reduction are hypernormalizing for all almost-orthogonal term rewriting systems, meaning that these strategies will eventually reach a normal form if one exists, even when performing (finitely many) arbitrary reductions between successive applications of the strategy.
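The difference between one-step strategies can be made concrete on a toy term rewriting system. The sketch below (the Peano-addition rules, tuple encoding, and helper names are illustrative, not from the article) selects and contracts the leftmost-outermost versus the leftmost-innermost redex:

```python
# A toy term rewriting system: Peano addition.
# Terms are nested tuples ('add', t, u) and ('s', t), plus the constant 'z'.
# Rules: add(z, y) -> y  and  add(s(x), y) -> s(add(x, y)).

def is_redex(t):
    return (isinstance(t, tuple) and t[0] == 'add'
            and (t[1] == 'z' or (isinstance(t[1], tuple) and t[1][0] == 's')))

def contract(t):
    # Apply the matching rule at the root; t must be a redex.
    _, a, b = t
    return b if a == 'z' else ('s', ('add', a[1], b))

def redex_positions(t, pos=()):
    # Redex positions listed in pre-order, i.e. leftmost-outermost first.
    found = [pos] if is_redex(t) else []
    if isinstance(t, tuple):
        for i, sub in enumerate(t[1:], start=1):
            found += redex_positions(sub, pos + (i,))
    return found

def reduce_at(t, pos):
    if not pos:
        return contract(t)
    i = pos[0]
    return t[:i] + (reduce_at(t[i], pos[1:]),) + t[i + 1:]

def leftmost_outermost(t):
    return reduce_at(t, redex_positions(t)[0])

def leftmost_innermost(t):
    ps = redex_positions(t)
    # An innermost redex has no redex strictly below it.
    inner = [p for p in ps if not any(q != p and q[:len(p)] == p for q in ps)]
    return reduce_at(t, inner[0])

term = ('add', ('s', 'z'), ('add', 'z', 'z'))   # add(s(z), add(z, z))
print(leftmost_outermost(term))   # contracts the root redex
print(leftmost_innermost(term))   # contracts the inner add(z, z) instead
```

On `term`, both the whole term and the inner `add(z, z)` are redexes; the two strategies pick different ones for their single step.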

[8] Stratego is a domain-specific language designed specifically for programming term rewriting strategies.

[11][12] Leftmost reduction is sometimes used to refer to normal order reduction: with a pre-order traversal the two notions coincide, and likewise the leftmost-outermost redex is the redex with the leftmost starting character when the lambda term is considered as a string of characters.

[13][14] When "leftmost" is defined using an in-order traversal the notions are distinct: the leftmost redex of the in-order traversal may differ from the leftmost-outermost redex, so the two definitions pick out different reduction strategies.

[10] In contrast to normal order, applicative order reduction may not terminate, even when the term has a normal form. For example, applicative order diverges on the term (λx.z)((λw.www)(λw.www)): contracting the argument (λw.www)(λw.www) reproduces the same redex inside a larger term, so reduction never ends.

But using normal-order reduction, the same starting point reduces quickly to normal form: the outermost redex is contracted first, discarding the non-terminating argument and yielding z in a single step.
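The contrast can be checked with a minimal lambda-calculus interpreter. In this sketch (the tuple encoding and function names are illustrative; substitution is naive, which is safe here because all bound-variable names are distinct), normal order reaches the normal form in one step while applicative order runs until a step limit:

```python
# Lambda terms as tuples: ('var', x), ('lam', x, body), ('app', f, a).

def subst(t, x, s):
    # Naive, capture-unsafe substitution; fine for these examples.
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def step_normal(t):
    # One leftmost-outermost (normal order) step; None if t is in normal form.
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':
            return subst(f[2], f[1], a)
        r = step_normal(f)
        if r is not None:
            return ('app', r, a)
        r = step_normal(a)
        return None if r is None else ('app', f, r)
    if t[0] == 'lam':
        r = step_normal(t[2])
        return None if r is None else ('lam', t[1], r)
    return None

def step_applicative(t):
    # One leftmost-innermost (applicative order) step.
    if t[0] == 'app':
        f, a = t[1], t[2]
        r = step_applicative(f)
        if r is not None:
            return ('app', r, a)
        r = step_applicative(a)
        if r is not None:
            return ('app', f, r)
        return subst(f[2], f[1], a) if f[0] == 'lam' else None
    if t[0] == 'lam':
        r = step_applicative(t[2])
        return None if r is None else ('lam', t[1], r)
    return None

def normalize(t, step, limit=50):
    # Returns (term, steps used); gives up after `limit` steps.
    for n in range(limit):
        r = step(t)
        if r is None:
            return t, n
        t = r
    return t, limit

www = ('lam', 'w', ('app', ('app', ('var', 'w'), ('var', 'w')), ('var', 'w')))
term = ('app', ('lam', 'x', ('var', 'z')), ('app', www, www))  # (λx.z)((λw.www)(λw.www))

print(normalize(term, step_normal))        # reaches ('var', 'z') in one step
print(normalize(term, step_applicative))   # hits the step limit: diverges
```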

Full β-reduction refers to the nondeterministic one-step strategy that allows reducing any redex at each step.

[3] Takahashi's parallel β-reduction is the strategy that reduces all redexes in the term simultaneously.
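One maximal instance of this idea, the complete development, can be written as a single recursive pass: every redex present in the input is contracted, while redexes created by the contraction are left for the next step. The encoding and names below are illustrative, and substitution is naive (safe here because bound-variable names are distinct):

```python
# Lambda terms as tuples: ('var', x), ('lam', x, body), ('app', f, a).

def subst(t, x, s):
    # Naive, capture-unsafe substitution; fine for these examples.
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def develop(t):
    # Contract all redexes present in t simultaneously.
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', t[1], develop(t[2]))
    f, a = t[1], t[2]
    if f[0] == 'lam':
        # A redex: develop both parts, then contract.
        return subst(develop(f[2]), f[1], develop(a))
    return ('app', develop(f), develop(a))

I = ('lam', 'u', ('var', 'u'))
term = ('app', ('app', I, I), ('app', I, I))   # (I I)(I I): two disjoint redexes
print(develop(term))            # both redexes contract at once, giving I I
print(develop(develop(term)))   # the redex created in step one contracts, giving I
```

Note that the second step is needed precisely because `develop` only contracts redexes already present: the redex `I I` produced by the first step is new.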

In contrast, weak reduction does not reduce under a lambda abstraction.

[17] Call-by-name reduction is the weak reduction strategy that reduces the leftmost outermost redex not inside a lambda abstraction, while call-by-value reduction is the weak reduction strategy that reduces the leftmost innermost redex not inside a lambda abstraction.

[18] In fact, applicative order reduction was also originally introduced to model the call-by-value parameter passing technique found in Algol 60 and modern programming languages.
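The operational difference between the two weak strategies shows up on a term whose argument has no normal form. In this sketch (tuple encoding and names are illustrative; substitution is naive but safe for this closed example), neither strategy reduces under a lambda, yet call-by-name discards the looping argument while call-by-value must evaluate it first:

```python
# Weak reduction on terms ('var', x), ('lam', x, body), ('app', f, a).

def subst(t, x, s):
    # Naive, capture-unsafe substitution; fine for this closed example.
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'lam':
        return t if t[1] == x else ('lam', t[1], subst(t[2], x, s))
    return ('app', subst(t[1], x, s), subst(t[2], x, s))

def step_cbn(t):
    # Call-by-name: contract the leftmost outermost redex not under a lambda.
    if t[0] == 'app':
        f, a = t[1], t[2]
        if f[0] == 'lam':
            return subst(f[2], f[1], a)   # substitute the argument unevaluated
        r = step_cbn(f)
        return None if r is None else ('app', r, a)
    return None                            # never reduce under a lambda

def step_cbv(t):
    # Call-by-value: the argument must be fully reduced before the call.
    if t[0] == 'app':
        f, a = t[1], t[2]
        r = step_cbv(f)
        if r is not None:
            return ('app', r, a)
        r = step_cbv(a)
        if r is not None:
            return ('app', f, r)
        return subst(f[2], f[1], a) if f[0] == 'lam' else None
    return None                            # never reduce under a lambda

def run(t, step, limit=50):
    for n in range(limit):
        r = step(t)
        if r is None:
            return t, n
        t = r
    return t, limit

ww = ('lam', 'w', ('app', ('var', 'w'), ('var', 'w')))
omega = ('app', ww, ww)                    # Ω reduces only to itself
term = ('app', ('lam', 'x', ('lam', 'y', ('var', 'y'))), omega)

print(run(term, step_cbn))   # discards Ω, reaching λy.y in one step
print(run(term, step_cbv))   # must evaluate Ω first, so it loops to the limit
```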

[19] Unfortunately, weak reduction is not confluent,[17] and the traditional reduction equations of the lambda calculus are useless, because they suggest relationships that violate the weak evaluation regime.

[19] However, it is possible to extend the system to be confluent by allowing a restricted form of reduction under an abstraction, in particular when the redex does not involve the variable bound by the abstraction.

For example, λx.(λy.x)z is in normal form for a weak reduction strategy because the redex (λy.x)z is contained in a lambda abstraction.

By contrast, λx.(λy.y)z can still be reduced under the extended weak reduction strategy, because the redex (λy.y)z does not refer to the bound variable x.

[20] Optimal reduction is motivated by the existence of lambda terms where there does not exist a sequence of reductions which reduces them without duplicating work.

For example, such a term can be composed of three nested redexes x, y, and z, with the redex y occurring inside x and the redex z occurring as the argument of y. Reducing the outer x redex first results in the inner y redex being duplicated, and each copy has to be reduced separately; but reducing the inner y redex first duplicates its argument z, which causes work to be duplicated once the values of its free variables are made known.

[a] Optimal reduction is not a reduction strategy for the lambda calculus in a narrow sense because performing β-reduction loses the information about the substituted redexes being shared.

Instead it is defined for the labelled lambda calculus, an annotated lambda calculus which captures a precise notion of the work that should be shared.

[23] The strategy is optimal in the sense that it performs the optimal (minimal) number of family reduction steps.

[26] The Bologna Optimal Higher-order Machine (BOHM) is a prototype implementation of an extension of the technique to interaction nets.

[21]: 362 [27] Lambdascope is a more recent implementation of optimal reduction, also using interaction nets.

[17] An alternate definition changes the beta rule to an operation that finds the next "needed" computation, evaluates it, and substitutes the result into all locations.

This requires extending the beta rule to allow reducing terms that are not syntactically adjacent.

[29] As with call-by-name and call-by-value, call-by-need reduction was devised to mimic the behavior of the evaluation strategy known as "call-by-need" or lazy evaluation.
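The sharing that distinguishes call-by-need from call-by-name can be sketched with memoizing thunks: each argument is delayed, evaluated at most once when first demanded, and its result is reused on later demands. The `Thunk`/`eval_need` names and the evaluation log are illustrative, not from the literature:

```python
# Call-by-need sketch over terms ('var', x), ('lam', x, body), ('app', f, a).

class Thunk:
    def __init__(self, term, env, log):
        self.term, self.env, self.log = term, env, log
        self.done, self.value = False, None

    def force(self):
        if not self.done:
            self.log.append(self.term)   # record one real evaluation
            self.value = eval_need(self.term, self.env, self.log)
            self.done = True             # later demands reuse self.value
        return self.value

def eval_need(t, env, log):
    tag = t[0]
    if tag == 'var':
        return env[t[1]].force()         # demand the variable's thunk
    if tag == 'lam':
        return ('closure', t[1], t[2], env)
    f = eval_need(t[1], env, log)        # evaluate the function part...
    arg = Thunk(t[2], env, log)          # ...but delay the argument
    _, x, body, fenv = f
    return eval_need(body, {**fenv, x: arg}, log)

I = ('lam', 'u', ('var', 'u'))
costly = ('app', I, I)                   # stands in for an expensive argument

# x is demanded twice, yet the argument is evaluated only once:
log = []
eval_need(('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))), costly), {}, log)
print(log.count(costly))                 # 1

# Here x is never demanded, so the argument is never evaluated:
log2 = []
eval_need(('app', ('lam', 'x', I), costly), {}, log2)
print(log2.count(costly))                # 0
```

Under call-by-name the first example would evaluate `costly` twice; under call-by-value the second would evaluate it once despite never using it.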