A syntactic predicate specifies the syntactic validity of applying a production in a formal grammar and is analogous to a semantic predicate that specifies the semantic validity of applying a production.
It is a simple and effective means of dramatically improving the recognition strength of an LL parser by providing arbitrary lookahead.
In their original implementation by Parr and Quong, syntactic predicates had the form “( α )?” and could appear only on the left edge of a production.
The required syntactic condition α could be any valid context-free grammar fragment.
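To illustrate the mechanism, a syntactic predicate can be realized in a recursive-descent parser as a speculative parse that always rewinds the input, giving the parser arbitrary lookahead. The following Python sketch is purely illustrative: the grammar, token shapes, and method names are invented here, and this is not ANTLR's implementation.

```python
# Illustrative sketch: a syntactic predicate "( alpha )?" implemented as
# a speculative parse of alpha that always restores the input position.
class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def expect(self, tok):
        if self.pos < len(self.tokens) and self.tokens[self.pos] == tok:
            self.pos += 1
        else:
            raise SyntaxError(f"expected {tok!r} at position {self.pos}")

    def syn_pred(self, rule):
        """True if `rule` matches at the current position; consumes no
        input, so the parser gains arbitrary lookahead."""
        saved = self.pos
        try:
            rule()
            return True
        except SyntaxError:
            return False
        finally:
            self.pos = saved

    def alpha(self):                  # alpha : '(' 'x' ')'
        self.expect("("); self.expect("x"); self.expect(")")

    def beta(self):                   # beta : 'x'
        self.expect("x")

    def rule(self):                   # rule : (alpha)? alpha | beta
        if self.syn_pred(self.alpha):
            self.alpha()
            return "alpha"
        self.beta()
        return "beta"

print(Parser(["(", "x", ")"]).rule())  # -> alpha
print(Parser(["x"]).rule())            # -> beta
```

Note that the predicate only answers "does α match here?"; the production body is then parsed again from the same position once the predicate succeeds.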
In this sense, the term predicate has the meaning of a mathematical indicator function.
Bryan Ford's parsing expression grammars (PEGs) generalize such predicates, and his packrat parsing technique handles these grammars in linear time by employing memoization, at the cost of heap space.
It is possible to support linear-time parsing of predicates as general as those allowed by PEGs while reducing the memory cost of memoization, by avoiding backtracking wherever a more efficient implementation of lookahead suffices.
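The memoization idea can be sketched as follows (an assumed simplification of packrat parsing; the grammar and input are invented for illustration): each (rule, position) result is cached, so repeated speculative parses at the same position are answered in constant time, at the cost of storing the table.

```python
from functools import lru_cache

TOKENS = ("a", "a", "b")  # invented input for illustration

@lru_cache(maxsize=None)       # the packrat memo table, keyed by position
def match_A(pos):
    """A <- 'a' A / 'a'   (PEG-style ordered choice).
    Returns the end position of a match, or None on failure."""
    if pos < len(TOKENS) and TOKENS[pos] == "a":
        end = match_A(pos + 1)
        return end if end is not None else pos + 1
    return None

print(match_A(0))  # -> 2  (both 'a' tokens consumed)
```

Because every (rule, position) pair is computed at most once, total work is linear in the input length, while the memo table grows proportionally to input size times rule count.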
This approach is implemented in ANTLR version 3, which uses deterministic finite automata (DFAs) for lookahead; choosing between DFA transitions may still require testing a predicate (called "pred-LL(*)" parsing).
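The DFA-driven decision can be sketched as follows. The automaton below is a hypothetical, hand-built example (ANTLR generates such automata from the grammar): it scans ahead over tokens and reports which alternative to take, without full backtracking.

```python
# Hypothetical hand-built lookahead DFA deciding between two
# alternatives that both start with 'id':
#   alt 1: id '=' ...      alt 2: id '(' ...
# Each state maps the next lookahead token to a state or a decision.
DFA = {
    "s0": {"id": "s1"},
    "s1": {"=": ("alt", 1), "(": ("alt", 2)},
}

def decide(tokens):
    """Scan ahead token by token until the DFA reaches a decision."""
    state = "s0"
    for tok in tokens:
        nxt = DFA[state].get(tok)
        if nxt is None:
            raise SyntaxError(f"no viable alternative at {tok!r}")
        if isinstance(nxt, tuple):   # accept state: alternative chosen
            return nxt[1]
        state = nxt
    raise SyntaxError("unexpected end of input")

print(decide(["id", "=", "x"]))  # -> 1
print(decide(["id", "(", ")"]))  # -> 2
```

Unlike backtracking over a full grammar fragment, the DFA inspects only as many tokens as needed to reach a decision, which is why it is the cheaper lookahead mechanism when it suffices.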
Formalisms that vary over time (such as adaptive grammars) may rely on these side effects.
Parr & Quong[5] give an example of a syntactic predicate intended to satisfy an informally stated[6] constraint of C++. In the first production of rule stat, the syntactic predicate (declaration)? indicates that declaration is the syntactic context that must be present for the rest of that production to succeed.
Notably, any code triggered by acceptance of the declaration production executes only if the predicate is satisfied.
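The stat decision described above can be sketched as follows. The token shapes and rule bodies here are invented simplifications, not Parr & Quong's actual grammar; the point is that actions attached to declaration run only after the lookahead-only trial parse succeeds.

```python
def parse_stat(tokens):
    """stat : (declaration)? declaration | expression  -- simplified."""
    def declaration(pos):             # declaration : 'type' 'id' ';'
        return pos + 3 if tokens[pos:pos + 3] == ["type", "id", ";"] else None

    def expression(pos):              # expression : 'id' ';'
        return pos + 2 if tokens[pos:pos + 2] == ["id", ";"] else None

    # The syntactic predicate: a lookahead-only trial parse of declaration.
    if declaration(0) is not None:
        declaration(0)                # commit; attached actions would run here
        return "declaration"
    if expression(0) is not None:
        return "expression"
    raise SyntaxError("no viable alternative")

print(parse_stat(["type", "id", ";"]))  # -> declaration
print(parse_stat(["id", ";"]))          # -> expression
```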
Although by no means an exhaustive list, the following parsers and grammar formalisms employ syntactic predicates: