The main idea behind list decoding is that the decoding algorithm, instead of outputting a single possible message, outputs a list of possibilities, one of which is correct.
The unique decoding model in coding theory, which is constrained to output a single valid codeword from the received word, could not tolerate a greater fraction of errors.
This resulted in a gap between the error-correction performance for stochastic noise models (proposed by Shannon) and the adversarial noise model (considered by Richard Hamming).
Since the mid-1990s, significant algorithmic progress by the coding theory community has bridged this gap.
The way the channel noise is modeled plays a crucial role in that it governs the rate at which reliable communication is possible.
There are two main schools of thought in modeling the channel behavior: the stochastic noise model studied by Shannon and the worst-case, adversarial noise model considered by Hamming. The highlight of list decoding is that even under adversarial noise conditions, it is possible to achieve the information-theoretic optimal trade-off between the rate and the fraction of errors that can be corrected.
Hence, in a sense, this matches the error-correction performance that is possible under the weaker, stochastic noise model.
As a result, half the minimum distance acts as a combinatorial barrier beyond which unambiguous error correction is impossible if we insist on unique decoding.
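Concretely (a standard illustration, stated here for even minimum distance $d$): if two codewords $c_1$ and $c_2$ differ in exactly $d$ positions and the received word $y$ agrees with $c_1$ on half of those positions and with $c_2$ on the other half, then
\Delta(y, c_1) = \Delta(y, c_2) = d/2 ,
so once $d/2$ errors are possible, no decoder that must output a single codeword can tell which of the two was transmitted.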
Such ambiguous received words, however, arise only in the worst case: if one looks at the way Hamming balls are packed in high-dimensional space, then even for error patterns well beyond half the minimum distance, there is typically only a single codeword within that distance of the received word.
This claim has been shown to hold with high probability for a random code picked from a natural ensemble, and also for Reed–Solomon codes, which are well studied and ubiquitous in real-world applications.
In fact, Shannon's proof of the capacity theorem for q-ary symmetric channels can be viewed in light of the above claim for random codes.
Under the mandate of list-decoding, for worst-case errors, the decoder is allowed to output a small list of codewords.
With some context specific or side information, it may be possible to prune the list and recover the original transmitted codeword.
For a polynomial-time list-decoding algorithm to exist, we need the combinatorial guarantee that any Hamming ball of radius $pn$ around a received word (where $p$ is the fraction of errors and $n$ is the block length) contains only a small number of codewords.
This is because the list size itself is clearly a lower bound on the running time of the algorithm.
A combinatorial consequence of this requirement is that it imposes an upper bound on the rate of a code.
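One standard way to make this combinatorial guarantee precise, reproduced here because the formal definition is not displayed above, is the notion of $(p, L)$-list decodability: a code $C \subseteq [q]^n$ is $(p, L)$-list decodable if
\forall\, y \in [q]^n : \quad \bigl|\{\, c \in C : \Delta(c, y) \le pn \,\}\bigr| \;\le\; L ,
where $\Delta$ denotes Hamming distance. Polynomial-time list decoding requires $L$ to be at most polynomial in $n$.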
This is a substantial gain compared to the unique decoding model as we now have the potential to correct twice as many errors.
This is an information-theoretic lower bound on the number of correct symbols required to perform decoding, and with list decoding we can potentially achieve this limit.
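For concreteness, the comparison behind these two statements can be written out as follows (assuming a code of rate $R = k/n$ meeting the Singleton bound $d = n - k + 1$, as Reed–Solomon codes do):
\text{unique decoding:}\quad \Bigl\lfloor \tfrac{d-1}{2} \Bigr\rfloor \approx \tfrac{(1-R)\,n}{2} \ \text{errors},
\qquad
\text{list decoding:}\quad \text{up to } (1-R)\,n \ \text{errors},
since recovering the $k = Rn$ message symbols requires at least $Rn$ uncorrupted symbols, so the fraction of correctable errors can never exceed $1 - R$.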
The relation between the list decodability of a code and other fundamental parameters such as minimum distance and rate has been fairly well studied.
In other words, the Johnson bound rules out the possibility of having a large number of codewords in a Hamming ball of radius slightly greater than half the minimum distance, which means that it is possible to list decode well beyond the unique decoding radius.
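One standard formulation of the Johnson bound (stated here since the radius is not displayed above) is that any $q$-ary code of relative distance $\delta = d/n$ is list decodable with polynomial list size up to the Johnson radius
J_q(\delta) \;=\; \Bigl(1 - \tfrac{1}{q}\Bigr)\Bigl(1 - \sqrt{1 - \tfrac{q\,\delta}{q-1}}\Bigr) \;\ge\; \tfrac{\delta}{2},
which for binary codes reduces to $J(\delta) = \tfrac{1}{2}\bigl(1 - \sqrt{1 - 2\delta}\bigr)$.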
What this means is that for rates approaching the channel capacity, there exist list-decodable codes with polynomial-sized lists, making efficient decoding algorithms possible, whereas for rates exceeding the channel capacity the list size becomes exponential, which rules out the existence of efficient list-decoding algorithms.
Also, the proof of the list-decoding capacity theorem is an important result that pinpoints the optimal trade-off between the rate of a code and the fraction of errors that can be corrected under list decoding.
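The theorem in question is usually stated as follows (the formulas below reproduce the standard formulation, for an alphabet of size $q$, any $0 < p < 1 - 1/q$ and any $\varepsilon > 0$):
(i) If $R \le 1 - H_q(p) - \varepsilon$, then there exist $(p, O(1/\varepsilon))$-list decodable codes of rate $R$.
(ii) If $R \ge 1 - H_q(p) + \varepsilon$, then every $(p, L)$-list decodable code of rate $R$ has $L = q^{\Omega(n)}$.
Here $H_q(p) = p \log_q(q-1) - p \log_q p - (1-p)\log_q(1-p)$ is the $q$-ary entropy function, so $1 - H_q(p)$ is the list-decoding capacity.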
The inequality in the above relation follows from the upper bound on the volume of a Hamming ball: the quantity $q^{H_q(p)n}$ gives a very good estimate of the volume of a Hamming ball of radius $pn$ centered at any word of $[q]^n$, i.e. $\mathrm{Vol}_q(pn) \le q^{H_q(p)n}$.
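This estimate can be checked numerically; the short script below compares the exact ball volume with $q^{H_q(p)n}$ (the parameter choices are arbitrary and purely illustrative):

# Compare the exact volume of a q-ary Hamming ball of radius pn with the
# entropy-based estimate q^(H_q(p) * n).  Parameters are illustrative only.
from math import comb, log

def ball_volume(q, n, radius):
    """Exact number of words of [q]^n within Hamming distance `radius` of a fixed word."""
    return sum(comb(n, i) * (q - 1) ** i for i in range(radius + 1))

def q_ary_entropy(q, p):
    """H_q(p) = p*log_q(q-1) - p*log_q(p) - (1-p)*log_q(1-p), for 0 < p < 1."""
    return p * log(q - 1, q) - p * log(p, q) - (1 - p) * log(1 - p, q)

q, n, p = 4, 200, 0.3
exact = ball_volume(q, n, int(p * n))
print(log(exact, q) / n)      # per-symbol exponent of the exact volume
print(q_ary_entropy(q, p))    # H_q(p): an upper bound matching up to lower-order terms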
With the above in mind, the probability of "any" bad event happening can be shown to be less than $q^{-\Omega(n)}$, i.e. exponentially small in the block length, so a random code of rate $1 - H_q(p) - \varepsilon$ is $(p, L)$-list decodable with high probability.
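For reference, the calculation behind this bound is the standard random-coding union bound, written here in the notation of the capacity statement above (the constants are not taken from the article): for a code of rate $R = 1 - H_q(p) - \varepsilon$ whose $q^{Rn}$ codewords are chosen independently and uniformly at random,
\Pr\bigl[\text{some } y \text{ has more than } L \text{ codewords in } B(y, pn)\bigr]
\;\le\; q^{n} \cdot q^{Rn(L+1)} \cdot \Bigl(\tfrac{q^{H_q(p)n}}{q^{n}}\Bigr)^{L+1}
\;=\; q^{\,n\bigl[1 - (L+1)\varepsilon\bigr]},
which is $q^{-\Omega(n)}$ as soon as $L \ge 1/\varepsilon$.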
Now turning to the proof of part (ii), we need to show that whenever the rate exceeds the list-decoding capacity, some Hamming ball of radius $pn$ contains super-polynomially many codewords. Taking the expectation, over a uniformly random center $y$, of the number of codewords falling in the ball $B(y, pn)$ shows that this expectation is itself super-polynomially large. Therefore, by the probabilistic method, some center attains at least the expectation, and we have shown that if the rate exceeds the list-decoding capacity, then the list size becomes super-polynomially large.
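The expectation referred to above takes the following standard form (the displayed formulas are reconstructed from the usual argument, with $R \ge 1 - H_q(p) + \varepsilon$): picking the center $y \in [q]^n$ uniformly at random,
\mathbb{E}_y\bigl[\,|B(y, pn) \cap C|\,\bigr]
\;=\; |C| \cdot \frac{\mathrm{Vol}_q(pn)}{q^{n}}
\;\ge\; q^{Rn} \cdot q^{(H_q(p) - o(1))n - n}
\;\ge\; q^{(\varepsilon - o(1))n},
using the matching lower bound $\mathrm{Vol}_q(pn) \ge q^{(H_q(p) - o(1))n}$ on the ball volume; hence some center has $q^{\Omega(n)}$ codewords within radius $pn$.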
In 2023, building upon three seminal works,[1][2][3] coding theorists showed that, with high probability, Reed–Solomon codes defined over random evaluation points are list decodable up to the list-decoding capacity over linear-size alphabets.
In the period from 1995 to 2007, the coding theory community developed progressively more efficient list-decoding algorithms.
With the above formulation, the general structure of list-decoding algorithms for Reed–Solomon codes is as follows:
Step 1: (Interpolation) Find a non-zero bivariate polynomial $Q(X, Y)$ of suitably bounded weighted degree such that $Q(\alpha_i, y_i) = 0$ for every coordinate $(\alpha_i, y_i)$ of the received word.
Step 2: (Root finding/factorization) Find all polynomials $f(X)$ of degree less than $k$ such that $Y - f(X)$ divides $Q(X, Y)$, and output those whose encodings agree with the received word in sufficiently many positions.
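To make this two-step structure concrete, below is a small, self-contained toy decoder over a prime field. It follows the interpolation and root-finding outline above, but it is only an illustrative sketch, not the algorithm as presented in the literature: the parameters (p = 13, k = 2), the helper names, and in particular the brute-force search in Step 2 are simplifying assumptions (practical decoders use an efficient root-finding procedure such as Roth-Ruckenstein instead).

# Toy Sudan-style list decoder for a Reed-Solomon code over GF(p), p prime.
# Step 1 interpolates a nonzero bivariate polynomial Q(X, Y) vanishing on all
# received points; Step 2 finds message polynomials f with Y - f(X) dividing Q.
# Root finding is done by brute force over all p^k candidate messages, which is
# only feasible for the tiny illustrative parameters used here.
from itertools import product

p = 13                      # prime field size (illustrative choice)
k = 2                       # messages are polynomials of degree < k
alphas = list(range(p))     # evaluate at every field element, so n = p
n = len(alphas)

def poly_eval(coeffs, x):
    """Evaluate a polynomial (lowest-degree coefficient first) at x over GF(p)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % p
    return acc

def encode(msg):
    """Reed-Solomon encoding: evaluate the message polynomial at all alphas."""
    return [poly_eval(msg, a) for a in alphas]

def nonzero_kernel_vector(rows, ncols):
    """Gaussian elimination over GF(p): return a nonzero v with rows * v = 0.
    The interpolation system below always has more unknowns than equations."""
    rows = [r[:] for r in rows]
    pivot_row_of = {}
    rank = 0
    for c in range(ncols):
        piv = next((i for i in range(rank, len(rows)) if rows[i][c]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        inv = pow(rows[rank][c], p - 2, p)
        rows[rank] = [(x * inv) % p for x in rows[rank]]
        for i in range(len(rows)):
            if i != rank and rows[i][c]:
                f = rows[i][c]
                rows[i] = [(a - f * b) % p for a, b in zip(rows[i], rows[rank])]
        pivot_row_of[c] = rank
        rank += 1
    free = next(c for c in range(ncols) if c not in pivot_row_of)
    v = [0] * ncols
    v[free] = 1
    for c, r in pivot_row_of.items():
        v[c] = (-rows[r][free]) % p
    return v

def list_decode(received):
    """Return every message polynomial whose codeword agrees with `received`
    in more than wdeg positions (the classical agreement threshold)."""
    # Step 1 (interpolation): pick the smallest (1, k-1)-weighted degree wdeg
    # whose monomial count exceeds n, then solve for a nonzero Q(X, Y) with
    # Q(alpha_i, y_i) = 0 at every received position.
    wdeg, monomials = 0, [(0, 0)]
    while len(monomials) <= n:
        wdeg += 1
        monomials = [(i, j) for j in range(wdeg + 1)
                     for i in range(wdeg - (k - 1) * j + 1)]
    system = [[pow(a, i, p) * pow(y, j, p) % p for (i, j) in monomials]
              for a, y in zip(alphas, received)]
    Q = dict(zip(monomials, nonzero_kernel_vector(system, len(monomials))))

    # Step 2 (root finding): any f agreeing with `received` in > wdeg positions
    # makes Q(X, f(X)) vanish at > wdeg points while having degree <= wdeg,
    # so Y - f(X) divides Q.  Here we simply test every candidate message.
    decoded = []
    for msg in product(range(p), repeat=k):
        vals = [poly_eval(msg, a) for a in alphas]
        divides = all(
            sum(c * pow(a, i, p) * pow(v, j, p) for (i, j), c in Q.items()) % p == 0
            for a, v in zip(alphas, vals))
        if divides and sum(v == y for v, y in zip(vals, received)) > wdeg:
            decoded.append(list(msg))
    return decoded

# Example: with p = 13 and k = 2 we get wdeg = 4, so any codeword agreeing with
# the received word in at least 5 of the 13 positions is returned; that is,
# up to 8 errors are tolerated, versus 5 for unique (half-distance) decoding.
msg = [3, 5]
word = encode(msg)
for i in range(8):                  # corrupt 8 positions
    word[i] = (word[i] + 1) % p
print(list_decode(word))            # the output list contains [3, 5]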