Channel capacity

Channel capacity, in electrical engineering, computer science, and information theory, is the theoretical maximum rate at which information can be reliably transmitted over a communication channel.

Following the terms of the noisy-channel coding theorem, the channel capacity of a given channel is the highest information rate (in units of information per unit time) that can be achieved with arbitrarily small error probability.[1][2]

Information theory, developed by Claude E. Shannon in 1948, defines the notion of channel capacity and provides a mathematical model by which it may be computed.

The key result states that the capacity of the channel, as defined above, is given by the maximum of the mutual information between the input and output of the channel, where the maximization is with respect to the input distribution.[3]

The notion of channel capacity has been central to the development of modern wireline and wireless communication systems, with novel error-correction coding mechanisms achieving performance very close to the limits promised by channel capacity.

Let $X$ represent the channel input and $Y$ the channel output. The joint distribution $p_{X,Y}(x,y)$ is completely determined by the channel, through the conditional distribution $p_{Y|X}(y|x)$, and by the choice of the input distribution $p_X(x)$, due to the identity $p_{X,Y}(x,y) = p_{Y|X}(y|x)\,p_X(x)$, which, in turn, induces a mutual information $I(X;Y)$.

The channel capacity is defined as $C = \sup_{p_X(x)} I(X;Y)$, where the supremum is taken over all possible choices of the input distribution $p_X(x)$.
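
As an illustration of this definition, the following Python sketch evaluates $I(X;Y)$ for a discrete memoryless channel and approximates the supremum by a grid search over input distributions; the binary symmetric channel used here, and all names, are illustrative assumptions rather than part of the article.

import numpy as np

def mutual_information(p_x, p_y_given_x):
    # I(X;Y) in bits; p_y_given_x[x, y] = P(Y = y | X = x).
    p_xy = p_x[:, None] * p_y_given_x               # joint distribution p(x, y)
    p_y = p_xy.sum(axis=0, keepdims=True)           # output marginal p(y)
    mask = p_xy > 0
    return np.sum(p_xy[mask] * np.log2(p_xy[mask] / (p_x[:, None] * p_y)[mask]))

# Illustrative example: binary symmetric channel with crossover probability 0.1.
bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
# Approximate C = sup over p_X of I(X;Y) by scanning P(X = 0) over a grid.
grid = np.linspace(0.001, 0.999, 999)
capacity = max(mutual_information(np.array([a, 1.0 - a]), bsc) for a in grid)
print(capacity)   # about 0.531 bits per channel use, i.e. 1 - H(0.1)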

Combining the two inequalities gives the additivity of channel capacity: the capacity of the product of two independent channels equals the sum of their individual capacities, $C(p_1 \times p_2) = C(p_1) + C(p_2)$.

If $G$ is an undirected graph, it can be used to define a communications channel in which the symbols are the graph vertices, and two codewords may be confused with each other if their symbols in each position are equal or adjacent.

The computational complexity of finding the Shannon capacity of such a channel remains open, but it can be upper bounded by another important graph invariant, the Lovász number.[5]
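
As a sketch of how that bound can be evaluated, the Lovász number is the optimal value of a semidefinite program and can be computed with an off-the-shelf solver; the particular formulation and the use of the cvxpy library below are assumptions of this example, not part of the article.

import cvxpy as cp

def lovasz_number(num_vertices, edges):
    # Standard SDP formulation: maximize the sum of the entries of a positive
    # semidefinite matrix B with unit trace and B[i, j] = 0 on every edge of G.
    B = cp.Variable((num_vertices, num_vertices), symmetric=True)
    constraints = [B >> 0, cp.trace(B) == 1]
    constraints += [B[i, j] == 0 for (i, j) in edges]
    return cp.Problem(cp.Maximize(cp.sum(B)), constraints).solve()

# The 5-cycle: its Lovász number (and its Shannon capacity) is sqrt(5), about 2.236.
print(lovasz_number(5, [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]))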

The noisy-channel coding theorem states that for any error probability ε > 0 and for any transmission rate R less than the channel capacity C, there is an encoding and decoding scheme transmitting data at rate R whose error probability is less than ε, for a sufficiently large block length.

Also, for any rate greater than the channel capacity, the probability of error at the receiver does not go to zero; it approaches one as the block length goes to infinity.

An application of the channel capacity concept to an additive white Gaussian noise (AWGN) channel with bandwidth $B$ Hz and signal-to-noise ratio $S/N$ is the Shannon–Hartley theorem: $C = B \log_2\!\left(1 + \frac{S}{N}\right)$. $C$ is measured in bits per second if the logarithm is taken in base 2, or nats per second if the natural logarithm is used, assuming $B$ is in hertz; the signal and noise powers $S$ and $N$ are expressed in a linear power unit (like watts or volts²).
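
For example, the Shannon–Hartley formula is straightforward to evaluate numerically; the bandwidth and SNR values below are illustrative assumptions, not figures from the article.

import math

def shannon_hartley(bandwidth_hz, snr_linear):
    # AWGN channel capacity in bits per second: C = B * log2(1 + S/N).
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Illustrative: a 3 kHz channel at 30 dB SNR (a linear SNR of 1000).
snr_linear = 10 ** (30 / 10)
print(shannon_hartley(3000, snr_linear))   # about 29,900 bits per second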

To determine the channel capacity, it is necessary to find the capacity-achieving input distribution $p_X(x)$ and then to evaluate the mutual information $I(X;Y)$.

Research has mostly focused on studying additive noise channels under certain power constraints and noise distributions, as analytical methods are not feasible in the majority of other scenarios.

Hence, alternative approaches such as investigation of the input support,[6] relaxations,[7] and capacity bounds[8] have been proposed in the literature.

The capacity of a discrete memoryless channel can be computed using the Blahut-Arimoto algorithm.
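
A minimal sketch of the Blahut-Arimoto iteration, assuming the channel is supplied as a transition matrix; the function name, tolerances, and the binary erasure channel used as a check are illustrative choices, not part of the article.

import numpy as np

def blahut_arimoto(p_y_given_x, tol=1e-9, max_iter=10000):
    # Capacity (bits per channel use) of a discrete memoryless channel with
    # transition matrix p_y_given_x[x, y] = P(Y = y | X = x).
    num_inputs = p_y_given_x.shape[0]
    r = np.full(num_inputs, 1.0 / num_inputs)      # current input distribution
    for _ in range(max_iter):
        q = r[:, None] * p_y_given_x               # r(x) p(y|x)
        q /= q.sum(axis=0, keepdims=True)          # posterior q(x|y)
        # Update r(x) proportionally to prod over y of q(x|y)^p(y|x).
        r_new = np.exp(np.sum(p_y_given_x * np.log(q + 1e-300), axis=1))
        r_new /= r_new.sum()
        converged = np.max(np.abs(r_new - r)) < tol
        r = r_new
        if converged:
            break
    q = r[:, None] * p_y_given_x
    q /= q.sum(axis=0, keepdims=True)
    capacity = np.sum(r[:, None] * p_y_given_x *
                      (np.log2(q + 1e-300) - np.log2(r[:, None] + 1e-300)))
    return capacity, r

# Check: a binary erasure channel with erasure probability 0.2 has capacity 0.8.
bec = np.array([[0.8, 0.2, 0.0],
                [0.0, 0.2, 0.8]])
print(blahut_arimoto(bec)[0])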

In fact, the channel capacity and the capacity-achieving distribution of any discrete-time continuous memoryless vector channel can be obtained using CORTICAL,[9] a cooperative framework inspired by generative adversarial networks.

CORTICAL consists of two cooperative networks: a generator, which learns to sample from the capacity-achieving input distribution, and a discriminator, which learns to distinguish paired from unpaired channel input-output samples and thereby estimates the mutual information $I(X;Y)$.

For channel capacity in systems with multiple antennas, see the article on MIMO.

The capacity of the frequency-selective channel is given by so-called water filling power allocation, $C = \sum_{n=0}^{N_c-1} \log_2\!\left(1 + \frac{P_n^{*} |\bar{h}_n|^2}{N_0}\right)$, where $P_n^{*} = \max\!\left(\frac{1}{\lambda} - \frac{N_0}{|\bar{h}_n|^2},\, 0\right)$ is the power allocated to subchannel $n$, $|\bar{h}_n|^2$ is the gain of that subchannel, and $\lambda$ is chosen so that the total power constraint is met.
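
A sketch of the water filling computation over parallel subchannels; the subchannel gains, noise level, and power budget below are illustrative assumptions rather than values from the article.

import numpy as np

def water_filling(gains, total_power, noise_power=1.0):
    # Allocate power over parallel subchannels with gains |h_n|^2 so that
    # P_n = max(level - N0/|h_n|^2, 0) and the powers sum to the budget.
    inv = noise_power / np.asarray(gains, dtype=float)    # N0 / |h_n|^2
    inv_sorted = np.sort(inv)
    for k in range(len(inv_sorted), 0, -1):
        level = (total_power + inv_sorted[:k].sum()) / k  # candidate water level
        if level > inv_sorted[k - 1]:                     # the k best subchannels all get power
            break
    powers = np.maximum(level - inv, 0.0)
    capacity = np.sum(np.log2(1.0 + powers / inv))        # bits per channel use
    return powers, capacity

# Illustrative: three subchannels with gains 1.0, 0.5, 0.1 and a power budget of 4.
print(water_filling([1.0, 0.5, 0.1], 4.0))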

In a slow-fading channel, where the coherence time is greater than the latency requirement, there is no definite capacity, as the maximum rate of reliable communication supported by the channel, $\log_2(1 + |h|^2\,\mathrm{SNR})$, depends on the random channel gain $|h|^2$, which is unknown to the transmitter. If the transmitter encodes data at a rate the realized channel cannot support, the decoding error probability cannot be made arbitrarily small and the system is said to be in outage.
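
For instance, under an illustrative Rayleigh-fading assumption ($|h|^2$ exponentially distributed, which is not specified in the article), the probability of outage at a target rate can be estimated by simulation:

import numpy as np

rng = np.random.default_rng(0)
snr = 10.0        # average SNR (linear); illustrative value
rate = 1.0        # target rate R in bits/s/Hz; illustrative value
gains = rng.exponential(1.0, size=100_000)            # |h|^2 samples under Rayleigh fading
outage = np.mean(np.log2(1.0 + gains * snr) < rate)   # fraction of fades that cannot support R
print(outage)     # about 0.095 for these values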

In a fast-fading channel, where the latency requirement is greater than the coherence time and the codeword length spans many coherence periods, one can average over many independent channel fades by coding over a large number of coherence time intervals, achieving a reliable rate of communication equal to the ergodic capacity $\mathbb{E}\!\left[\log_2(1 + |h|^2\,\mathrm{SNR})\right]$.
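
Under the same illustrative Rayleigh-fading assumption as above, the ergodic capacity $\mathbb{E}[\log_2(1 + |h|^2\,\mathrm{SNR})]$ can be estimated by averaging over simulated fades:

import numpy as np

rng = np.random.default_rng(1)
snr = 10.0                                     # average SNR (linear); illustrative value
gains = rng.exponential(1.0, size=100_000)     # |h|^2 samples under Rayleigh fading
print(np.mean(np.log2(1.0 + gains * snr)))     # ergodic capacity in bits/s/Hz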

Feedback capacity is the greatest rate at which information can be reliably transmitted, per unit time, over a point-to-point communication channel in which the receiver feeds back the channel outputs to the transmitter.

Shannon chose feedback as the subject of the first Shannon Lecture, delivered at the 1973 IEEE International Symposium on Information Theory in Ashkelon, Israel.

The feedback capacity is characterized by the maximum of the directed information between the channel inputs and the channel outputs, where the maximization is with respect to the causal conditioning of the input given the output.
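
In symbols (one standard formulation, stated here as a summary rather than quoted from the article), the directed information and the corresponding characterization of feedback capacity are

$$I(X^n \to Y^n) = \sum_{i=1}^{n} I(X^i; Y_i \mid Y^{i-1}), \qquad C_{\text{feedback}} = \lim_{n \to \infty} \frac{1}{n} \max_{p(x^n \| y^{n-1})} I(X^n \to Y^n),$$

where $p(x^n \| y^{n-1}) = \prod_{i=1}^{n} p(x_i \mid x^{i-1}, y^{i-1})$ denotes the causal conditioning of the input sequence on the output sequence.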

Directed information was introduced by James Massey[12] in 1990, who showed that it is an upper bound on feedback capacity.

The basic mathematical model for a communication system with feedback is the same as in the nonfeedback case except for the encoder: at each time $i$, the channel input may depend both on the message $W$ and on the previously received channel outputs, $X_i = f_i(W, Y^{i-1})$.

Consider, for instance, the simple case of a first-order autoregressive noise process, $Z_i = \alpha Z_{i-1} + W_i$, where the $W_i$ are i.i.d. Gaussian innovations.

The feedback capacity is difficult to solve in the general case.

When the channel is discrete, there are some solution techniques related to control theory and Markov decision processes.

[Figure: AWGN channel capacity with the power-limited regime and bandwidth-limited regime indicated; B and C can be scaled proportionally for other parameter values.]
[Figure: Communication over a channel with feedback.]