In data compression, a universal code for integers is a prefix code that maps the positive integers onto binary codewords, with the additional property that whatever the true probability distribution on integers, as long as the distribution is monotonic (i.e., p(i) ≥ p(i + 1) for all positive i), the expected lengths of the codewords are within a constant factor of the expected lengths that the optimal code for that probability distribution would have assigned.
A universal code is asymptotically optimal if the ratio between actual and optimal expected lengths is bounded by a function of the information entropy of the source distribution that is itself bounded and approaches 1 as the entropy approaches infinity.
A universal code should not be confused with universal source coding; however, an asymptotically optimal universal code can be used on independent identically distributed sources, by encoding increasingly large blocks, as a method of universal source coding.
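As a concrete illustration of such a code, the following minimal Python sketch implements Elias gamma coding, one well-known universal code (the function name elias_gamma and the printed range are illustrative choices, not part of the definition):

```python
def elias_gamma(n: int) -> str:
    """Elias gamma codeword for a positive integer n: floor(log2 n)
    zeros followed by the binary expansion of n."""
    if n < 1:
        raise ValueError("Elias gamma coding is defined for n >= 1")
    binary = bin(n)[2:]                      # binary expansion of n
    return "0" * (len(binary) - 1) + binary

# The codewords are self-delimiting: no codeword is a prefix of another.
for n in range(1, 9):
    print(n, elias_gamma(n))
```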
For example, using unary coding on the zeta distribution with s = 2, which assigns p(i) = 6/(π²i²), yields an infinite expected codeword length: the unary codeword for i is i bits long, so E[l] = Σ_{i≥1} 6/(π²i²) · i = (6/π²) Σ_{i≥1} 1/i, which diverges. On the other hand, using the universal Elias gamma coding for the Gauss–Kuzmin distribution results in an expected codeword length (about 3.51 bits) near entropy (about 3.43 bits).
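Both expectations can be checked numerically. The sketch below truncates the infinite sums at N terms (the truncation point is an arbitrary choice); it uses the zeta probabilities p(i) = 6/(π²i²) with unary lengths l(i) = i, and the Gauss–Kuzmin probabilities p(k) = −log₂(1 − 1/(k+1)²) with Elias gamma lengths l(k) = 2⌊log₂ k⌋ + 1:

```python
import math

N = 10**6  # truncation point for both infinite sums (arbitrary choice)

# Unary coding on the zeta(s = 2) distribution: p(i) = 6/(pi^2 i^2),
# l(i) = i, so the partial sums of E[l] grow like the harmonic series.
partial = 6 / math.pi**2 * sum(1.0 / i for i in range(1, N + 1))
print("zeta + unary, partial E[l]:", partial)  # keeps growing with N

# Elias gamma coding on the Gauss-Kuzmin distribution.
expected_len = entropy = 0.0
for k in range(1, N + 1):
    p = -math.log2(1.0 - 1.0 / (k + 1) ** 2)
    expected_len += p * (2 * int(math.log2(k)) + 1)
    entropy -= p * math.log2(p)
print("E[l] ≈", expected_len)   # about 3.51 bits
print("H    ≈", entropy)        # about 3.43 bits
```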
Each universal code, like every other self-delimiting (prefix) binary code, has its own "implied probability distribution" given by P(i) = 2^(−l(i)), where l(i) is the length of the ith codeword and P(i) is the corresponding symbol's probability. If the actual message probabilities are Q(i), and the Kullback–Leibler divergence D_KL(Q‖P) is minimized by the code with lengths l(i), then the optimal Huffman code for that message set is equivalent to that code. Likewise, how close a code is to optimal can be measured by this divergence.
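The sketch below illustrates this measurement for the Elias gamma code, whose implied distribution is P(i) = 2^(−l(i)) with l(i) = 2⌊log₂ i⌋ + 1; the "actual" distribution Q, a Zipf distribution with s = 2, is an arbitrary choice for illustration:

```python
import math

def gamma_len(n: int) -> int:
    """Elias gamma codeword length: 2*floor(log2 n) + 1 bits."""
    return 2 * int(math.log2(n)) + 1

N = 10**5  # truncation of the infinite sums (arbitrary choice)

# Implied distribution of the gamma code: P(i) = 2^(-l(i)).
P = [2.0 ** -gamma_len(i) for i in range(1, N + 1)]

# An example "actual" distribution Q: Zipf with s = 2.
z = sum(1.0 / i**2 for i in range(1, N + 1))
Q = [1.0 / (i**2 * z) for i in range(1, N + 1)]

# D_KL(Q || P), in bits, equals the expected number of extra bits per
# symbol paid for using the gamma code instead of an optimal code for Q.
D = sum(q * math.log2(q / p) for q, p in zip(Q, P))
print("D_KL(Q || P) ≈", D, "bits/symbol")
```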
For any geometric distribution (an exponential distribution on integers), a Golomb code is optimal.
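A minimal sketch of a Golomb encoder follows; the rule of thumb (1 − p)^M ≈ 1/2 for choosing the parameter M, and the value p = 0.2, are illustrative assumptions rather than statements from the text:

```python
import math

def golomb(n: int, M: int) -> str:
    """Golomb codeword for an integer n >= 0 with parameter M:
    quotient floor(n/M) in unary, remainder in truncated binary."""
    q, r = divmod(n, M)
    unary = "1" * q + "0"
    if M == 1:                          # pure unary; no remainder bits
        return unary
    b = math.ceil(math.log2(M))
    cutoff = (1 << b) - M               # small remainders use b-1 bits
    if r < cutoff:
        return unary + format(r, f"0{b - 1}b")
    return unary + format(r + cutoff, f"0{b}b")

# For a geometric source with P(n) = (1 - p)^n * p, a parameter M with
# (1 - p)^M ≈ 1/2 is near-optimal; p = 0.2 is an illustrative choice.
p = 0.2
M = max(1, round(-1.0 / math.log2(1.0 - p)))   # here M = 3
for n in range(6):
    print(n, golomb(n, M))
```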
With universal codes, the implicit distribution is approximately a power law such as 1/i² (more precisely, a Zipf distribution).
For the Fibonacci code, the implicit distribution is approximately 1/i^q with q = 1/log₂(φ) ≈ 1.44, where φ is the golden ratio.
For the ternary comma code (i.e., encoding in base 3, represented with 2 bits per symbol), the implicit distribution is a power law with q = 2 log₃(2) ≈ 1.26, since a codeword for i uses about 2 log₃(i) bits.
A source distributed according to one of these power laws is therefore encoded near-optimally by the corresponding code.
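This correspondence can be checked numerically for the Fibonacci code; in the sketch below, the helper fib_code_len is an illustrative implementation, and the estimate l(n)/log₂(n) converges (slowly) to q = 1/log₂(φ). The ternary comma code can be checked analogously.

```python
import math

def fib_code_len(n: int) -> int:
    """Bit length of the Fibonacci (Zeckendorf) codeword for n >= 1:
    one bit per Fibonacci number up to n, plus the terminating '1'."""
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    fibs.pop()                   # drop the first Fibonacci number > n
    return len(fibs) + 1

phi = (1 + math.sqrt(5)) / 2
q_limit = 1 / math.log2(phi)     # ≈ 1.44
for n in (10**3, 10**6, 10**9):
    # P(n) = 2^(-l(n)) ≈ n^(-q), so q ≈ l(n) / log2(n) for large n
    print(n, fib_code_len(n) / math.log2(n), "→", q_limit)
```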