Slepian–Wolf coding

In information theory and communication, Slepian–Wolf coding, also known as the Slepian–Wolf bound, is a result in distributed source coding discovered by David Slepian and Jack Wolf in 1973.

It characterizes the achievable rates for the lossless compression of two correlated sources.[1]

Distributed coding is the coding of two (as in this case) or more dependent sources with separate encoders and a joint decoder.

Given two statistically dependent, independent and identically distributed (i.i.d.) finite-alphabet random sequences X^n and Y^n, the Slepian–Wolf theorem gives a theoretical bound on the lossless coding rates for distributed coding of the two sources.

The bounds on the lossless coding rates R_X and R_Y are as follows:[1]

R_X ≥ H(X|Y), R_Y ≥ H(Y|X), R_X + R_Y ≥ H(X,Y),

where H(X|Y) and H(Y|X) are the conditional entropies and H(X,Y) is the joint entropy of the two sources.
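As a numerical illustration (not part of the original theorem statement), the sketch below evaluates these entropy bounds for a hypothetical pair of correlated binary sources: X is a uniform bit and Y is X flipped with probability 0.1.

```python
from math import log2

def h2(p):
    """Binary entropy function, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Hypothetical source model: X ~ Bernoulli(1/2), Y = X flipped w.p. 0.1.
p_flip = 0.1
H_X = 1.0                       # entropy of the uniform bit X
H_Y = 1.0                       # Y is also uniform by symmetry
H_Y_given_X = h2(p_flip)        # uncertainty the flip adds
H_XY = H_X + H_Y_given_X        # chain rule: H(X,Y) = H(X) + H(Y|X)
H_X_given_Y = H_XY - H_Y        # chain rule again

# Slepian–Wolf region: R_X >= H(X|Y), R_Y >= H(Y|X), R_X + R_Y >= H(X,Y)
print(f"H(X|Y)      = {H_X_given_Y:.3f} bits")   # minimum rate for X with joint decoding
print(f"H(X,Y)      = {H_XY:.3f} bits")          # minimum total rate with joint decoding
print(f"H(X) + H(Y) = {H_X + H_Y:.3f} bits")     # total rate without joint decoding
```

For this toy model the joint-decoding sum rate is H(X,Y) ≈ 1.469 bits per symbol pair, against the 2 bits required when each source is decoded independently.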

If the encoders and decoders of the two sources operate independently, the lowest rates achievable for lossless compression are H(X) and H(Y) for X and Y respectively, where H(X) and H(Y) are the entropies of the two sources.

However, with joint decoding, if a vanishing error probability for long sequences is accepted, the Slepian–Wolf theorem shows that a much better compression rate can be achieved.

As long as the total rate of X and Y is larger than their joint entropy H(X,Y) and neither source is encoded at a rate below its conditional entropy given the other, distributed coding can achieve an arbitrarily small error probability for long sequences.[1]

A special case of distributed coding is compression with decoder side information, where source Y is available at the decoder but not accessible at the encoder, so X can be encoded at a rate approaching H(X|Y).

In other words, two isolated sources can compress data as efficiently as if they were communicating with each other.[1]
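The side-information case admits a simple constructive sketch via syndrome (binning) coding. The toy example below is illustrative and not from the original papers: it assumes the decoder's side information y differs from the source word x in at most one bit position, so the encoder can send only the 3-bit Hamming(7,4) syndrome of x instead of all 7 bits.

```python
import numpy as np

# Parity-check matrix of the Hamming(7,4) code; column i is the binary
# representation of i+1, so the syndrome of a single-bit error names its position.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=int)

def encode(x):
    """Compress 7 bits to a 3-bit syndrome (the index of x's coset, or 'bin')."""
    return H @ x % 2

def decode(syndrome, y):
    """Recover x from its syndrome and side information y with dist(x, y) <= 1."""
    # The difference e = x XOR y has weight <= 1, and its syndrome equals
    # syndrome(x) XOR syndrome(y); for Hamming(7,4) that names e's bit position.
    s = (syndrome + H @ y) % 2
    pos = s[0] * 4 + s[1] * 2 + s[2]   # read the syndrome as a 1-based bit index
    e = np.zeros(7, dtype=int)
    if pos:
        e[pos - 1] = 1
    return (y + e) % 2

x = np.array([1, 0, 1, 1, 0, 0, 1])
y = x.copy()
y[4] ^= 1                      # side information: y differs from x in one bit
x_hat = decode(encode(x), y)   # decoder sees only the 3-bit syndrome plus y
print(x_hat.tolist())          # recovers x exactly
```

Seven source bits are conveyed with three: with the flip location arbitrary (no flip, or any of 7 positions), the decoder's residual uncertainty about x given y is log2(8) = 3 bits, so the syndrome rate meets H(X|Y) exactly in this toy model.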

This bound has been extended to the case of more than two correlated sources by Thomas M. Cover in 1975,[2] and similar results were obtained in 1976 by Aaron D. Wyner and Jacob Ziv with regard to lossy coding of jointly Gaussian sources.
