Operational semantics is a category of formal programming language semantics in which certain desired properties of a program, such as correctness, safety or security, are verified by constructing proofs from logical statements about its execution and procedures, rather than by attaching mathematical meanings to its terms (denotational semantics).
The following statement is a quote from the revised ALGOL 68 report: "The meaning of a program in the strict language is explained in terms of a hypothetical computer which performs the set of actions that constitute the elaboration of that program." (Algol68, Section 2)
The first use of the term "operational semantics" in its present meaning is attributed to Dana Scott (Plotkin04).
As Scott put it: "It is all very well to aim for a more ‘abstract’ and a ‘cleaner’ approach to semantics, but if the plan is to be any good, the operational aspects cannot be completely ignored."
The basic idea behind structural operational semantics (SOS) is to define the behavior of a program in terms of the behavior of its parts, thus providing a structural, i.e., syntax-oriented and inductive, view of operational semantics.
An SOS specification defines the behavior of a program in terms of a (set of) transition relation(s).
SOS specifications take the form of a set of inference rules that define the valid transitions of a composite piece of syntax in terms of the transitions of its components.
The metavariables in these rules range over the syntactic categories of the language (commands, expressions, and so on) and over states (e.g. functions from memory locations to values).
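As a small illustration, consider a sketch for a hypothetical imperative language whose configurations pair a command C with a state s (the notation here is assumed for the example, not drawn from any particular specification). The rule for sequential composition defines a step of the composite command in terms of a step of its first component:

\[
\frac{\langle C_1, s\rangle \to \langle C_1', s'\rangle}
     {\langle C_1 ; C_2,\ s\rangle \to \langle C_1' ; C_2,\ s'\rangle}
\]

The behavior of C_1 ; C_2 is thus derived inductively from the behavior of C_1, which is exactly the structural, syntax-directed character described above.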
Important relations defined over such transition systems include simulation preorders and bisimulation.
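For reference, the standard definition for labelled transition systems: a relation R over states is a bisimulation if, whenever p R q, every step of either state can be matched by the other,

\[
p \xrightarrow{a} p' \;\Rightarrow\; \exists q'.\; q \xrightarrow{a} q' \,\wedge\, p' \mathrel{R} q'
\qquad\text{and}\qquad
q \xrightarrow{a} q' \;\Rightarrow\; \exists p'.\; p \xrightarrow{a} p' \,\wedge\, p' \mathrel{R} q'.
\]

Dropping the second condition yields a simulation, whose induced ordering on states is the simulation preorder.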
Thanks to its intuitive look and easy-to-follow structure, SOS has gained great popularity and has become a de facto standard in defining operational semantics.
The key ideas of reduction semantics were first applied to purely functional call-by-name and call-by-value variants of the lambda calculus by Gordon Plotkin in 1975[3] and generalized to higher-order functional languages with imperative features by Matthias Felleisen in his 1987 dissertation.[4] The method was further elaborated by Matthias Felleisen and Robert Hieb in 1992 into a fully equational theory for control and state.[2] The phrase “reduction semantics” itself was coined by Felleisen and Daniel P. Friedman in a PARLE 1987 paper.
For example, one reduction rule states that an assignment statement can be reduced if it sits immediately beside the declaration of its variable. To bring an assignment into such a position, it is “bubbled up” through the surrounding expressions until it reaches its declaration; since the intervening expressions may declare distinct variables, the calculus also demands an extrusion rule for those declarations.
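A minimal sketch of such a declaration-adjacent assignment rule, assuming a hypothetical core language in which declarations are written let x = v in e and assignments x := v (the notation is invented for illustration and is not taken from the works cited above):

\[
\mathsf{let}\ x = v_1\ \mathsf{in}\ (x := v_2;\ e) \;\longrightarrow\; \mathsf{let}\ x = v_2\ \mathsf{in}\ e
\]

The assignment is performed by updating the value recorded in the adjacent declaration and continuing with the rest of the expression.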
Most published uses of reduction semantics define such “bubble rules” with the convenience of evaluation contexts.
For example, the grammar of evaluation contexts in a simple call-by-value language can be given by a handful of productions.
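A sketch of such a grammar, assuming a call-by-value lambda calculus extended with the assignment form used above (the exact productions depend on the language being modeled):

\[
E ::= [\,]\ \mid\ E\ e\ \mid\ v\ E\ \mid\ x := E;\ e
\]

where e ranges over arbitrary expressions, v over fully reduced values, and [ ] marks the hole at which the next reduction may take place.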
To describe “bubbling” with the aid of evaluation contexts, a single axiom suffices.
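One way such an axiom might be formulated (a sketch in the notation assumed above, not the exact formulation of the cited works) is to perform an assignment found anywhere inside an evaluation context directly at the declaration that binds its variable:

\[
\mathsf{let}\ x = v_1\ \mathsf{in}\ E[\,x := v_2;\ e\,] \;\longrightarrow\; \mathsf{let}\ x = v_2\ \mathsf{in}\ E[e]
\]

assuming, as a side condition, that E does not redeclare x. The evaluation context stands for the entire chain of intermediate expressions that separate bubble rules would otherwise have to traverse one step at a time.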
Following Plotkin, showing the usefulness of a calculus derived from a set of reduction rules demands (1) a Church-Rosser lemma for the single-step relation, which induces an evaluation function, and (2) a Curry-Feys standardization lemma for the transitive-reflexive closure of the single-step relation, which replaces the non-deterministic search in the evaluation function with a deterministic left-most/outermost search.
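For reference, the Church-Rosser (confluence) property for a single-step relation → with reflexive-transitive closure ↠ is standardly stated as:

\[
e \twoheadrightarrow e_1 \ \wedge\ e \twoheadrightarrow e_2 \;\Longrightarrow\; \exists e_3.\ e_1 \twoheadrightarrow e_3 \ \wedge\ e_2 \twoheadrightarrow e_3
\]

so that any two reduction paths from the same expression can be rejoined, which is what makes the induced evaluation function well defined.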
Felleisen showed that imperative extensions of this calculus satisfy these theorems.
A consequence of these theorems is that the equational theory (the symmetric-transitive-reflexive closure of the reduction relation) is a sound reasoning principle for these languages.
Reduction semantics are particularly useful given the ease with which evaluation contexts can model state or unusual control constructs (e.g., first-class continuations).
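As an illustration, consider a sketch of an abort operator 𝒜 that discards the rest of the computation (a standard textbook-style rule, assumed here rather than taken from the sources above):

\[
E[\,\mathcal{A}\ e\,] \;\longrightarrow\; e
\]

Because the entire surrounding computation is captured in the context E, erasing it takes a single rule; capturing E as a first-class continuation can be described in the same style by packaging the context as a function and handing it to the program.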
A thorough, modern treatment of reduction semantics that discusses several such applications at length is given by Matthias Felleisen, Robert Bruce Findler and Matthew Flatt in Semantics Engineering with PLT Redex.
The intuitiveness of big-step semantics makes it a popular choice for semantics specification in programming languages, but it has some drawbacks that make it inconvenient or impossible to use in many situations, such as languages with control-intensive features or concurrency.[9] A big-step semantics describes in a divide-and-conquer manner how final evaluation results of language constructs can be obtained by combining the evaluation results of their syntactic counterparts (subexpressions, substatements, etc.).
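For instance (a sketch for a hypothetical arithmetic-expression language with a big-step judgment e ⇓ v), the result of a sum is obtained by combining the results of its two subexpressions:

\[
\frac{e_1 \Downarrow v_1 \qquad e_2 \Downarrow v_2}{e_1 + e_2 \;\Downarrow\; v_1 + v_2}
\]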
Big-step semantics have the advantage of often being simpler (needing fewer inference rules) and often directly correspond to an efficient implementation of an interpreter for the language (hence Kahn calling them “natural”).
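To make the correspondence with an interpreter concrete, here is a minimal sketch in OCaml (the type and function names are invented for illustration); each clause of the evaluator mirrors one big-step rule, recursively obtaining the results of the subexpressions and combining them:

  (* A tiny expression language: integer literals and sums. *)
  type expr =
    | Num of int
    | Add of expr * expr

  (* Big-step evaluation: each match arm corresponds to one inference rule. *)
  let rec eval (e : expr) : int =
    match e with
    | Num n -> n
    | Add (e1, e2) -> eval e1 + eval e2

  (* Example: (1 + 2) + 3 evaluates to 6. *)
  let () = assert (eval (Add (Add (Num 1, Num 2), Num 3)) = 6)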
Both of these advantages can lead to simpler proofs, for example when proving the preservation of correctness under some program transformation.[10] The main disadvantage of big-step semantics is that non-terminating (diverging) computations do not have an inference tree, making it impossible to state and prove properties about such computations.[10] Small-step semantics give more control over the details and order of evaluation.
In the case of instrumented operational semantics, this allows the operational semantics to track, and the semanticist to state and prove, more accurate theorems about the run-time behaviour of the language.
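Continuing the arithmetic sketch above, small-step rules make the evaluation order explicit; for a left-to-right strategy one might write (again a sketch, with → the single-step relation and n, n_1, n_2 standing for fully evaluated numbers):

\[
\frac{e_1 \to e_1'}{e_1 + e_2 \;\to\; e_1' + e_2}
\qquad
\frac{e_2 \to e_2'}{n_1 + e_2 \;\to\; n_1 + e_2'}
\qquad
n_1 + n_2 \;\to\; n \quad (\text{where } n \text{ is the sum of } n_1 \text{ and } n_2)
\]

Requiring the left operand to be a number before the right operand may step is what fixes the left-to-right order, and an instrumented semantics can thread extra components (such as step counts or traces) through these configurations to support more precise theorems about run-time behaviour.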