Partial information decomposition

Partial information decomposition is an extension of information theory that aims to generalize the pairwise relations described by information theory to the interaction of multiple variables.[1]

Information theory can quantify the amount of information a single source variable {\displaystyle X_{1}} has about a target variable {\displaystyle Y} via the mutual information {\displaystyle I(X_{1};Y)}.
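As a reminder (this is the standard discrete definition, not specific to partial information decomposition), the mutual information referred to above can be written as

{\displaystyle I(X_{1};Y)=\sum _{x_{1},y}p(x_{1},y)\log _{2}{\frac {p(x_{1},y)}{p(x_{1})\,p(y)}}}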

If we now consider a second source variable {\displaystyle X_{2}}, classical information theory can only describe the mutual information of the joint variable {\displaystyle \{X_{1},X_{2}\}} with {\displaystyle Y}, given by {\displaystyle I(X_{1},X_{2};Y)}. In general, however, it would be interesting to know how exactly the individual variables {\displaystyle X_{1}} and {\displaystyle X_{2}} and their interactions relate to {\displaystyle Y}.

Consider that we are given two source variables {\displaystyle X_{1},X_{2}\in \{0,1\}} and a target variable {\displaystyle Y=\operatorname {XOR} (X_{1},X_{2})}. In this case the total mutual information is {\displaystyle I(X_{1},X_{2};Y)=1} bit, while the individual mutual informations are {\displaystyle I(X_{1};Y)=I(X_{2};Y)=0} bits. That is, there is synergistic information arising from the interaction of {\displaystyle X_{1}} and {\displaystyle X_{2}} about {\displaystyle Y}, which cannot be easily captured with classical information-theoretic quantities.
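This can be checked numerically. The sketch below (standard library only; function and variable names are illustrative, not from any established package) estimates the relevant mutual informations directly from the four equally likely outcomes of an XOR gate with two uniform binary sources:

```python
# Mutual information in the XOR example, computed from the empirical
# joint distribution of equally weighted samples.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(A;B) in bits from a list of (a, b) samples with uniform weight."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

# All four equally likely inputs of the XOR gate.
samples = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]

i_joint = mutual_information([((x1, x2), y) for x1, x2, y in samples])  # I(X1,X2;Y)
i_x1 = mutual_information([(x1, y) for x1, x2, y in samples])           # I(X1;Y)
i_x2 = mutual_information([(x2, y) for x1, x2, y in samples])           # I(X2;Y)

print(i_joint, i_x1, i_x2)  # 1.0 0.0 0.0
```

The joint sources determine the target exactly (1 bit), while each source alone is independent of it (0 bits).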

Partial information decomposition further decomposes the mutual information between the source variables {\displaystyle \{X_{1},X_{2}\}} with the target variable {\displaystyle Y} as

{\displaystyle I(X_{1},X_{2};Y)={\text{Unq}}(X_{1};Y\setminus X_{2})+{\text{Unq}}(X_{2};Y\setminus X_{1})+{\text{Syn}}(X_{1},X_{2};Y)+{\text{Red}}(X_{1},X_{2};Y)}

Here the individual information atoms are defined as follows: {\displaystyle {\text{Unq}}(X_{1};Y\setminus X_{2})} is the unique information that {\displaystyle X_{1}} has about {\displaystyle Y} which is not in {\displaystyle X_{2}} (and analogously for {\displaystyle {\text{Unq}}(X_{2};Y\setminus X_{1})}); {\displaystyle {\text{Syn}}(X_{1},X_{2};Y)} is the synergistic information about {\displaystyle Y}, present only in the interaction of {\displaystyle X_{1}} and {\displaystyle X_{2}}; and {\displaystyle {\text{Red}}(X_{1},X_{2};Y)} is the redundant information about {\displaystyle Y}, present in both {\displaystyle X_{1}} and {\displaystyle X_{2}}. There is, thus far, no universal agreement on how these terms should be defined, with different approaches that decompose information into redundant, unique, and synergistic components appearing in the literature.

[1][2][3][4] Despite the lack of universal agreement, partial information decomposition has been applied to diverse fields, including climatology,[5] neuroscience,[6][7][8] sociology,[9] and machine learning.[10] Partial information decomposition has also been proposed as a possible foundation on which to build a mathematically robust definition of emergence in complex systems,[11] and may be relevant to formal theories of consciousness.
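As a concrete (and deliberately simplified) illustration of the decomposition, the sketch below fills in the atoms using the "minimum mutual information" redundancy, {\displaystyle {\text{Red}}=\min(I(X_{1};Y),I(X_{2};Y))}. This is just one proposal among the many discussed above; other redundancy definitions yield different atoms. Names are illustrative, not from any established library.

```python
# A toy partial information decomposition using the minimum-mutual-information
# redundancy measure; other proposed measures give different atom values.
from collections import Counter
from math import log2

def mutual_information(pairs):
    """I(A;B) in bits from a list of (a, b) samples with uniform weight."""
    n = len(pairs)
    p_ab = Counter(pairs)
    p_a = Counter(a for a, _ in pairs)
    p_b = Counter(b for _, b in pairs)
    return sum((c / n) * log2((c / n) / ((p_a[a] / n) * (p_b[b] / n)))
               for (a, b), c in p_ab.items())

def pid_mmi(samples):
    """Decompose I(X1,X2;Y) for (x1, x2, y) samples, assuming MMI redundancy."""
    i1 = mutual_information([(x1, y) for x1, _, y in samples])
    i2 = mutual_information([(x2, y) for _, x2, y in samples])
    i12 = mutual_information([((x1, x2), y) for x1, x2, y in samples])
    red = min(i1, i2)                 # redundant information
    unq1, unq2 = i1 - red, i2 - red   # unique information of each source
    syn = i12 - unq1 - unq2 - red     # synergy is the remainder of the sum rule
    return {"Unq1": unq1, "Unq2": unq2, "Red": red, "Syn": syn}

# XOR example: the decomposition assigns everything to synergy.
xor = [(x1, x2, x1 ^ x2) for x1 in (0, 1) for x2 in (0, 1)]
print(pid_mmi(xor))  # {'Unq1': 0.0, 'Unq2': 0.0, 'Red': 0.0, 'Syn': 1.0}
```

By construction the four atoms sum to the joint mutual information, matching the decomposition equation above.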