The logical formalization of belief revision is researched in philosophy, in databases, and in artificial intelligence for the design of rational agents.
What makes belief revision non-trivial is that several different ways for performing this operation may be possible.
In the case of revision, this principle (minimal change) requires that as much information as possible be preserved by the change.
The following classical example shows that update and revision call for different operations.
[2] The AGM postulates are equivalent to several different conditions on the revision operator; in particular, they are equivalent to the revision operator being definable in terms of structures known as selection functions, epistemic entrenchments, systems of spheres, and preference relations.
Such a structure represents an ordering of implausibility over all situations, including those that are conceivable yet currently considered false.
However, if the language of formulae representing beliefs itself includes the counterfactual conditional connective, Gärdenfors's triviality theorem shows that no non-trivial revision operator can satisfy both the AGM postulates and the Ramsey test.
Conversely, conditions that have been considered for non-monotonic inference relations can be translated into postulates for a revision operator.
This distinction is instead made by the foundational approach to belief revision, which is related to foundationalism in philosophy.
This name has been chosen because the coherentist approach aims at restoring the coherence (consistency) among all beliefs, both self-standing and derived ones.
Foundationalist revision operators working on non-deductively closed belief sets typically select some subsets of the base that are consistent with the new information, and then add the new information to them.
A number of proposals for revision and update based on the set of models of the involved formulae were developed independently of the AGM framework.
Revision can therefore be performed on the sets of possible worlds rather than on the corresponding knowledge bases.
The revision and update operators based on models are usually identified by the name of their authors: Winslett, Forbus, Satoh, Dalal, Hegner, and Weber.
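As an illustration of the model-based approach, the following sketch implements Dalal-style revision, which keeps the models of the new formula at minimum Hamming distance from the models of the old knowledge base. The representation of formulae as Python predicates and the function names are assumptions made for this example only.

```python
from itertools import product

def models(formula, atoms):
    """Enumerate the models of a formula, where the formula is given
    as a Python predicate over a dict assigning True/False to atoms."""
    return [dict(zip(atoms, vals))
            for vals in product([False, True], repeat=len(atoms))
            if formula(dict(zip(atoms, vals)))]

def hamming(m1, m2):
    """Number of atoms on which two models disagree."""
    return sum(m1[a] != m2[a] for a in m1)

def dalal_revise(kb, new, atoms):
    """Keep the models of the new formula that are closest
    (in Hamming distance) to some model of the old base."""
    kb_models, new_models = models(kb, atoms), models(new, atoms)
    dist = {i: min(hamming(m, k) for k in kb_models)
            for i, m in enumerate(new_models)}
    best = min(dist.values())
    return [new_models[i] for i, d in dist.items() if d == best]

# Example: KB = a and b; new information = not a.
atoms = ["a", "b"]
kb = lambda m: m["a"] and m["b"]
new = lambda m: not m["a"]
print(dalal_revise(kb, new, atoms))  # [{'a': False, 'b': True}]
```

Of the two models of the new formula, only the one that retains b survives, since it differs from the old model on a single atom.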
Indeed, the preference relation should depend on the previous history of revisions, rather than on the resulting knowledge base only.
More generally, a preference relation gives more information about the state of mind of an agent than a simple knowledge base.
Since the basic condition on a preference ordering is that its minimal models are exactly the models of the associated knowledge base, a knowledge base can be considered implicitly represented by a preference ordering (but not vice versa).
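This condition can be made concrete with a toy ordering over models; the ranking function and helper below are hypothetical names introduced for illustration.

```python
from itertools import product

# A toy preference ordering: each model gets a rank (lower = more
# plausible).  The condition in the text says the minimal-rank models
# must be exactly the models of the associated knowledge base.
atoms = ["a", "b"]
all_models = [dict(zip(atoms, v))
              for v in product([False, True], repeat=len(atoms))]

# Ranks for KB = a and b: one level of implausibility per flipped atom.
rank = lambda m: (not m["a"]) + (not m["b"])

def implicit_kb(rank, all_models):
    """Recover the knowledge base implicitly represented by the
    ordering: its models are the minimal (most plausible) ones."""
    best = min(rank(m) for m in all_models)
    return [m for m in all_models if rank(m) == best]

print(implicit_kb(rank, all_models))  # [{'a': True, 'b': True}]
```

The ordering carries strictly more information than the recovered knowledge base: the non-minimal ranks encode how the agent would revise, which is why the converse recovery is not possible.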
Specific iterated revision operators have been proposed by Spohn, Boutilier, Williams, Lehmann, and others.
When merging a number of knowledge bases with the same degree of plausibility, a distinction is made between arbitration and majority.
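The difference between the two flavours can be sketched with distance-based merging, where a candidate model is scored by aggregating its distance to each base: summing distances behaves like majority, taking the maximum behaves like arbitration. This is an illustrative sketch, not a specific operator from the literature.

```python
from itertools import product

def hamming(m1, m2):
    """Number of atoms on which two models disagree."""
    return sum(m1[a] != m2[a] for a in m1)

def merge(bases_models, all_models, aggregate):
    """Distance-based merging: score each candidate model by
    aggregating its distance to each base, keep the best ones.
    aggregate=sum gives a majority flavour, aggregate=max an
    arbitration flavour."""
    def score(m):
        return aggregate(min(hamming(m, k) for k in base)
                         for base in bases_models)
    best = min(score(m) for m in all_models)
    return [m for m in all_models if score(m) == best]

# Two bases assert a, one asserts not a.
all_models = [{"a": False}, {"a": True}]
bases = [[{"a": True}], [{"a": True}], [{"a": False}]]
print(merge(bases, all_models, sum))  # majority sides with a
print(merge(bases, all_models, max))  # arbitration keeps both models
```

With sum, the majority wins outright; with max, both outcomes are equally far from the worst-off base, so arbitration refuses to pick a side.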
Many revision proposals involve orderings over models representing the relative plausibility of the possible alternatives.
This is similar to what is done in social choice theory, the study of how the preferences of a group of agents can be combined in a rational way.
Belief revision and social choice theory are similar in that they combine a set of orderings into one.
They differ on how these orderings are interpreted: preferences in social choice theory; plausibility in belief revision.
Another difference is that the alternatives are explicitly enumerated in social choice theory, while they are the propositional models over a given alphabet in belief revision.
From the point of view of computational complexity, the most studied problem about belief revision is that of query answering in the propositional case.
By contrast, explicitly storing the relation as a set of pairs of models is not a compact representation of preference: the space required is exponential in the number of propositional letters.
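The blow-up can be spelled out with a few lines of arithmetic (a worked instance added for illustration): n propositional letters yield 2**n models, so an explicit relation over ordered pairs of models may need up to 4**n entries.

```python
# With n propositional letters there are 2**n models, so a preference
# relation stored explicitly as ordered pairs of models may contain
# up to (2**n) * (2**n) = 4**n entries.
for n in (2, 4, 8, 16):
    print(f"n={n}: models={2**n}, pairs up to {4**n}")
```

Already at sixteen letters the explicit relation can reach billions of pairs, while the formulae it came from may fit in a few lines.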
The complexity of query answering and model checking in the propositional case is in the second level of the polynomial hierarchy for most belief revision operators and schemas.
Results demonstrating how relevance can be employed in belief revision were reported by Williams, Peppas, Foo and Chopra in the journal Artificial Intelligence.
[5] Belief revision has also been used to demonstrate the acknowledgement of intrinsic social capital in closed networks.