It is based on the LR parsing technique, which stands for "left-to-right, rightmost derivation in reverse."
This parser can recognize all deterministic context-free languages and produces a rightmost derivation in reverse of the statements encountered in the input file.[1] Canonical LR(1) parsers have the practical disadvantage of enormous memory requirements for their internal parser-table representation.
In 1969, Frank DeRemer suggested two simplified versions of the LR parser called LALR and SLR.
The LR(1) parser is a deterministic automaton and as such its operation is based on static state transition tables.
These codify the grammar of the language it recognizes and are typically called "parsing tables".
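A table-driven parser of this kind can be sketched as follows. This is a minimal illustrative example, not any particular generator's output: the ACTION and GOTO tables were constructed by hand for the hypothetical toy grammar S → a S b | c.

```python
# Hand-built parsing tables for the hypothetical toy grammar:
#   rule 1: S -> a S b
#   rule 2: S -> c
# ACTION maps (state, terminal) to shift/reduce/accept;
# GOTO maps (state, nonterminal) to the next state.
ACTION = {
    (0, 'a'): ('shift', 2), (0, 'c'): ('shift', 3),
    (1, '$'): ('accept', None),
    (2, 'a'): ('shift', 2), (2, 'c'): ('shift', 3),
    (3, 'b'): ('reduce', 2), (3, '$'): ('reduce', 2),
    (4, 'b'): ('shift', 5),
    (5, 'b'): ('reduce', 1), (5, '$'): ('reduce', 1),
}
GOTO = {(0, 'S'): 1, (2, 'S'): 4}
# rule number -> (left-hand side, length of right-hand side)
RULES = {1: ('S', 3), 2: ('S', 1)}

def parse(tokens):
    stack = [0]                      # stack of parser states
    tokens = list(tokens) + ['$']    # append end-of-input marker
    pos = 0
    while True:
        action = ACTION.get((stack[-1], tokens[pos]))
        if action is None:
            return False             # syntax error: no table entry
        op, arg = action
        if op == 'shift':
            stack.append(arg)
            pos += 1
        elif op == 'reduce':
            lhs, rhs_len = RULES[arg]
            del stack[len(stack) - rhs_len:]   # pop the handle
            stack.append(GOTO[(stack[-1], lhs)])
        else:                        # accept
            return True
```

The parser itself contains no knowledge of the grammar; it only consults the tables, which is what makes the table representation the dominant memory cost.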
This allows for richer languages where a simple rule can have different meanings depending on the lookahead context.
For example, in an LR(1) grammar, several rules may each perform a different reduction despite being based on the same state sequence: which reduction is applied depends on the lookahead. If the lookahead after the parser reached a given state was not acceptable for one rule, i.e. no transition rule existed for it, the sequence is reduced by a different rule instead.
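This can be illustrated with a small sketch. Assume the hypothetical grammar S → A c | B d, A → b, B → b: after shifting b, the state contains two completed items with the same core but different lookaheads, and the lookahead alone selects the reduction.

```python
# LR(1) items as (lhs, rhs, dot position, lookahead), for the
# hypothetical grammar  S -> A c | B d,  A -> b,  B -> b.
# Both items have the same core "b ." but different lookaheads.
state_items = [
    ('A', ('b',), 1, 'c'),  # reduce A -> b only if the next token is 'c'
    ('B', ('b',), 1, 'd'),  # reduce B -> b only if the next token is 'd'
]

def choose_reduction(lookahead):
    for lhs, rhs, dot, la in state_items:
        if dot == len(rhs) and la == lookahead:
            return '%s -> %s' % (lhs, ' '.join(rhs))
    return None  # syntax error: no reduction applies
```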
That is the reason why LR(1) parsers cannot be practically implemented without significant memory optimizations.
This means that, unlike an LR(0) parser, an LR(1) parser may execute a different action if the item being processed is followed by a different terminal.
In plain words, an item set is the list of production rules of which the currently processed symbol might be a part.
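Computing such an item set can be sketched as a closure operation. This is a simplified illustration for the hypothetical toy grammar S → a S b | c: a real LR(1) closure computes the new lookaheads from the FIRST set of what follows the dot, whereas this sketch simply propagates the existing lookahead, which happens to suffice for this grammar.

```python
# Productions of the hypothetical toy grammar  S -> a S b | c.
GRAMMAR = {'S': [('a', 'S', 'b'), ('c',)]}

def closure(items):
    # items: set of LR(1) items (lhs, rhs, dot position, lookahead).
    # For every item with the dot before a nonterminal, add an item
    # for each production of that nonterminal, until nothing changes.
    result = set(items)
    changed = True
    while changed:
        changed = False
        for lhs, rhs, dot, la in list(result):
            if dot < len(rhs) and rhs[dot] in GRAMMAR:
                # Simplification: reuse the same lookahead instead of
                # computing FIRST of the remainder of the rule.
                for prod in GRAMMAR[rhs[dot]]:
                    new = (rhs[dot], prod, 0, la)
                    if new not in result:
                        result.add(new)
                        changed = True
    return result
```

Starting from the kernel item S' → • S with lookahead $, the closure adds an item for every production of S, since S appears immediately after the dot.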
The lookahead of an LR(1) item is used directly only when considering reduce actions (i.e., when the • marker is at the right end).
Note that an LR(0) parser would not be able to make this decision, as it considers only the core of the items, and would thus report a shift/reduce conflict.
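A hedged sketch of such a conflict, using the hypothetical grammar E → n + E | n: after shifting n, the state contains a shift item and a reduce item with overlapping cores. An LR(0) parser, seeing only the cores, must report a shift/reduce conflict; an LR(1) parser consults the lookahead and resolves it.

```python
# LR(1) items as (lhs, rhs, dot position, lookahead), for the state
# reached after shifting 'n' in the hypothetical grammar
#   E -> n '+' E | n
state = [
    ('E', ('n', '+', 'E'), 1, None),  # shift item: lookahead not consulted
    ('E', ('n',), 1, '$'),            # reduce item: valid only at end of input
]

def decide(lookahead):
    for lhs, rhs, dot, la in state:
        if dot < len(rhs) and rhs[dot] == lookahead:
            return 'shift'            # the dot stands before the lookahead
        if dot == len(rhs) and la == lookahead:
            return 'reduce %s -> %s' % (lhs, ' '.join(rhs))
    return 'error'
```

With lookaheads removed, both items would claim the same situation after n, which is exactly the conflict the LR(0) parser reports.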