The notions of "irrelevance" and "given that we know" may take on different interpretations, including probabilistic, relational and correlational ones, depending on the application.
The theory of graphoids characterizes these properties by a finite set of axioms that are common to informational irrelevance and its graphical representations.
Judea Pearl and Azaria Paz[1] coined the term "graphoids" after discovering that a set of axioms governing conditional independence in probability theory is also satisfied by vertex separation in undirected graphs.
Axioms for conditional independence in probability were derived earlier by A. Philip Dawid[2] and Wolfgang Spohn.
A dependency model is a relational graphoid if, whenever the tuples (x, z, y) and (x′, z, y′) belong to its relation, the tuple (x, z, y′) belongs to it as well. In words, the range of values permitted for X is not restricted by the choice of Y, once Z is fixed.
Independence statements belonging to this model are similar to embedded multi-valued dependencies (EMVDs) in databases.
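The relational condition can be tested directly: treating the dependency model as a set of (x, z, y) tuples, independence requires that whenever (x, z, y) and (x′, z, y′) occur with the same z, the recombined tuple (x, z, y′) occurs too. A minimal sketch, assuming this tuple encoding (one column each for X, Z, Y):

```python
from itertools import product

def relational_independent(relation):
    """Check the relational-graphoid condition on a set of (x, z, y) tuples.

    Independence holds when, for each fixed z, the relation factorizes:
    if (x, z, y) and (x', z, y') both occur, the mixed tuple (x, z, y')
    must occur as well.
    """
    for (x, z, y), (x2, z2, y2) in product(relation, repeat=2):
        if z == z2 and (x, z, y2) not in relation:
            return False
    return True

# For fixed z the relation below is a Cartesian product of x-values and
# y-values, so independence holds:
r = {(0, 'z', 'a'), (0, 'z', 'b'), (1, 'z', 'a'), (1, 'z', 'b')}
print(relational_independent(r))                    # True
# Removing one combination breaks the factorization:
print(relational_independent(r - {(1, 'z', 'b')}))  # False
```

The quadratic scan over tuple pairs is the simplest faithful rendering of the condition; it is a sketch, not an optimized EMVD test.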
In other words, there exists an undirected graph G such that every independence statement in M is reflected as a vertex separation in G and vice versa.
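Vertex separation itself is easy to check: X and Y are separated by Z in G exactly when a graph search started from X that is forbidden from entering Z never reaches Y. A sketch, assuming the graph is given as an adjacency map (the encoding and function name are illustrative):

```python
from collections import deque

def vertex_separated(adj, X, Z, Y):
    """Check whether node sets X and Y are separated by Z in an
    undirected graph; `adj` maps each node to its set of neighbours.
    Argument order mirrors the independence notation I(X, Z, Y).
    """
    X, Z, Y = set(X), set(Z), set(Y)
    seen, queue = set(X), deque(X)
    while queue:
        u = queue.popleft()
        if u in Y:
            return False        # found a path from X to Y avoiding Z
        for v in adj[u]:
            if v not in seen and v not in Z:
                seen.add(v)
                queue.append(v)
    return True

# Chain a - b - c - d: node b blocks every path from a to d.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c'}}
print(vertex_separated(adj, {'a'}, {'b'}, {'d'}))   # True
print(vertex_separated(adj, {'a'}, set(), {'d'}))   # False
```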
A necessary and sufficient condition for a dependency model to be a graph-induced graphoid is that it satisfies the following axioms: symmetry, decomposition, intersection, strong union and transitivity.[11] This means that for every graph G there exists a probability distribution P such that every conditional independence in P is represented in G, and vice versa.[12] Thomas Verma showed that every semi-graphoid admits a recursive construction of a DAG in which every d-separation holds.
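A d-separation test can be sketched via the standard moral-graph criterion (not Verma's construction): X and Y are d-separated by Z in a DAG iff Z separates them, in the ordinary vertex-separation sense, in the moralized ancestral graph of X ∪ Y ∪ Z. A sketch, assuming the DAG is given as a parent map and that X, Y, Z are disjoint:

```python
from itertools import combinations

def d_separated(parents, X, Y, Z):
    """d-separation via the moral-graph criterion.

    `parents` maps each node to its set of parents in the DAG.
    """
    X, Y, Z = set(X), set(Y), set(Z)
    # 1. Restrict to the ancestral set of X | Y | Z (including those nodes).
    anc, stack = set(), list(X | Y | Z)
    while stack:
        u = stack.pop()
        if u not in anc:
            anc.add(u)
            stack.extend(parents.get(u, ()))
    # 2. Moralize: link each node to its parents and marry co-parents.
    adj = {u: set() for u in anc}
    for u in anc:
        ps = [p for p in parents.get(u, ()) if p in anc]
        for p in ps:
            adj[u].add(p); adj[p].add(u)
        for p, q in combinations(ps, 2):
            adj[p].add(q); adj[q].add(p)
    # 3. Ordinary vertex separation in the resulting undirected graph.
    seen, queue = set(X), list(X)
    while queue:
        u = queue.pop()
        if u in Y:
            return False
        for v in adj[u]:
            if v not in seen and v not in Z:
                seen.add(v)
                queue.append(v)
    return True

# Collider x -> w <- y: x and y are independent marginally, but
# conditioning on the collider w opens the path.
parents = {'x': set(), 'y': set(), 'w': {'x', 'y'}}
print(d_separated(parents, {'x'}, {'y'}, set()))    # True
print(d_separated(parents, {'x'}, {'y'}, {'w'}))    # False
```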