Properties of an execution of a computer program—particularly for concurrent and distributed systems—have long been formulated by giving safety properties ("bad things don't happen") and liveness properties ("good things do happen").[1]

A program is totally correct with respect to a precondition $P$ and postcondition $Q$ if any execution started in a state satisfying $P$ terminates in a state satisfying $Q$.
Total correctness is a conjunction of a safety property and a liveness property:[2] the safety property proscribes the bad thing of terminating in a final state that does not satisfy $Q$ (note that a bad thing is discrete,[3] since it happens at a particular place during execution), and the liveness property asserts the good thing that execution, once started in a state satisfying $P$, eventually terminates.
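As a concrete illustration of this split (a minimal sketch of our own, not an example from the cited papers; the function and variable names are hypothetical), consider integer division by repeated subtraction:

```python
def divide(x: int, y: int) -> tuple[int, int]:
    """Integer division by repeated subtraction.

    Precondition P:  x >= 0 and y > 0.
    Postcondition Q: x == q*y + r and 0 <= r < y.
    """
    q, r = 0, x
    while r >= y:
        # Liveness (termination): r decreases by y > 0 on every
        # iteration and stays nonnegative while the guard holds,
        # so the loop must eventually exit.
        q, r = q + 1, r - y
    # Safety (partial correctness): IF control reaches this point,
    # the postcondition Q holds.
    assert x == q * y + r and 0 <= r < y
    return q, r
```

Partial correctness alone would be satisfied by a version that loops forever; termination alone by one that returns a wrong pair. Total correctness requires both.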
Every property is, in fact, the conjunction of a safety property and a liveness property, so such a decomposition is always possible.[5] Moreover, undertaking the decomposition can be helpful, because the formal definitions enable a proof that different methods must be used for verifying safety properties versus liveness properties.[6][7]
A safety property proscribes discrete bad things from occurring during an execution.[1] A safety property thus characterizes what is permitted by stating what is prohibited.
The requirement that the bad thing be discrete means that a bad thing occurring during execution necessarily occurs at some identifiable point.[5] Examples of a discrete bad thing that could be used to define a safety property include:[5] deadlock; two processes executing in their critical sections at the same time; and termination of a program in a final state that does not satisfy the postcondition.

An execution of a program can be described formally by giving the infinite sequence of program states that results as execution proceeds, where the last state for a terminating program is repeated infinitely.
Let $S^*$ denote the set of finite sequences of program states, and let $S^\omega$ denote the set of infinite sequences of program states.[5] A property of a program is the set of allowed executions, formally a subset of $S^\omega$.
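A minimal sketch of this execution model (our illustration; the `State` encoding and names are assumptions, not from the sources):

```python
from itertools import chain, islice, repeat
from typing import Iterator

State = dict  # one program state, e.g. {"pc": 3, "x": 7} (illustrative encoding)

def as_infinite(finite_run: list[State]) -> Iterator[State]:
    """Embed a terminating run into the set of infinite sequences by
    repeating its final state forever, as the formal model prescribes."""
    return chain(finite_run, repeat(finite_run[-1]))

# A three-state terminating run, viewed as an infinite sequence:
run = [{"pc": 0}, {"pc": 1}, {"pc": 2}]
print(list(islice(as_infinite(run), 6)))  # the final state repeats from index 2 on
```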
If an execution $\sigma$ does not satisfy a safety property $P$, then the defining bad thing for that safety property occurs at some point in $\sigma$; after that point, no continuation of the execution can repair the violation. We take this inference about the irremediability of bad things to be the defining characteristic for $P$ to be a safety property. Formalizing this gives the following definition of a safety property:[5]

$$\forall \sigma \in S^\omega:\ \sigma \notin P \implies \bigl(\exists i \ge 0:\ \forall \beta \in S^\omega:\ \sigma[..i]\,\beta \notin P\bigr)$$

where $\sigma[..i]$ denotes the finite prefix consisting of the first $i$ states of $\sigma$. This formal definition for safety properties implies that if an execution $\sigma$ satisfies a safety property $P$, then every prefix of $\sigma$ (with its last state repeated infinitely) also satisfies $P$.
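The irremediability of bad things is what makes a safety property refutable by a finite prefix alone. The sketch below (our illustration; the state fields `in_cs_0` and `in_cs_1` are assumed names) detects a mutual-exclusion violation:

```python
State = dict  # as in the earlier sketch

def bad(state: State) -> bool:
    """The discrete bad thing: both processes are in their critical
    sections in the same state (a mutual-exclusion violation)."""
    return state.get("in_cs_0", False) and state.get("in_cs_1", False)

def safety_violated(prefix: list[State]) -> bool:
    """Once some state in the finite prefix exhibits the bad thing,
    no infinite extension can bring the execution back into the
    property, so the prefix by itself refutes the safety property."""
    return any(bad(s) for s in prefix)
```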
A liveness property prescribes the good things that must happen during every execution.[1] The good thing need not be discrete—it might involve an infinite number of steps. Examples of a good thing used to define a liveness property include:[5] the program eventually terminates; every request for service is eventually satisfied; and every process enters its critical section infinitely often. The good thing in the first example is discrete but not in the others.
Producing an answer within a specified real-time bound is a safety property rather than a liveness property. This is because a discrete bad thing is being proscribed: a partial execution that reaches a state where the answer still has not been produced and the value of the clock (a state variable) violates the bound.
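A small sketch of that bad-thing detector (our illustration; the bound and the `clock` and `answered` state fields are assumed):

```python
DEADLINE = 100  # assumed real-time bound on the clock state variable

def deadline_violated(prefix: list[dict]) -> bool:
    """The discrete bad thing for the real-time bound: some reached state
    has no answer yet although the clock already exceeds the bound."""
    return any(s["clock"] > DEADLINE and not s["answered"] for s in prefix)
```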
Most of the time, knowing that a program eventually does some "good thing" is not satisfactory; we want to know that the program performs the "good thing" within some number of steps or before some deadline. By the reasoning above, a property that stipulates such a bound is a safety property; only the unbounded "eventually" is a liveness property.
Formally, $L$ is a liveness property if and only if every finite sequence of program states can be extended to an infinite sequence that satisfies $L$:[5]

$$\forall \alpha \in S^*:\ \exists \beta \in S^\omega:\ \alpha\beta \in L$$

Thus, with respect to a liveness property, no partial execution is irremediable (if some partial execution could not be extended to satisfy the property, then that partial execution would be a "bad thing" and, thus, would be defining a safety property).
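For the liveness property "an answer is eventually produced", the extension required by this definition can be exhibited directly (our sketch; the `answered` field is an assumed encoding):

```python
from itertools import chain, repeat
from typing import Iterator

def extend_to_satisfy(prefix: list[dict]) -> Iterator[dict]:
    """Witness for the definition: for any finite prefix, build an
    infinite extension in which the good thing ('answered' becomes
    true) occurs, so no finite prefix is irremediable."""
    last = prefix[-1] if prefix else {}
    return chain(prefix, repeat({**last, "answered": True}))
```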
Lamport used the terms safety property and liveness property in his 1977 paper[1] on proving the correctness of multiprocess (concurrent) programs.
He borrowed the terms from Petri net theory, which used the terms liveness and boundedness to describe how the assignment of a Petri net's "tokens" to its "places" could evolve; Petri net safety was a specific form of boundedness.
Lamport subsequently developed a formal definition of safety for a NATO short course on distributed systems in Munich.
The formal definition of safety given above appears in a paper by Alpern and Schneider;[5] the connection between the two formalizations of safety properties appears in a paper by Alpern, Demers, and Schneider.
The Alpern–Schneider paper also proves that every property is the conjunction of a safety property and a liveness property. That proof was inspired by Gordon Plotkin's insight that safety properties correspond to closed sets and liveness properties correspond to dense sets in a natural topology on the set $S^\omega$ of infinite sequences of program states.[11] Subsequently, Alpern and Schneider[12] not only gave a Büchi automaton characterization for the formal definitions of safety properties and liveness properties but also used these automata formulations to show that verifying a safety property requires an invariant and verifying a liveness property requires a well-foundedness argument.
The correspondence between the kind of property (safety vs. liveness) and the kind of proof (invariance vs. well-foundedness) was a strong argument that the decomposition of properties into safety and liveness (as opposed to some other partitioning) is a useful one: knowing the type of property to be proved dictates the type of proof that is required.
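The two proof styles can be seen side by side on a trivial loop (our sketch, with assertions standing in for the proof obligations):

```python
def countdown(n: int) -> None:
    """Invariance argument for safety, well-foundedness for liveness."""
    assert n >= 0            # precondition
    while n > 0:
        assert n >= 0        # safety: the invariant n >= 0 holds at every step
        n -= 1               # liveness: the rank n strictly decreases in a
                             # well-founded order (the naturals), so the loop
                             # cannot run forever
    assert n == 0            # on exit: invariant plus negated loop guard
```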