Convolutional code

The sliding application represents the 'convolution' of the encoder over the data, which gives rise to the term 'convolutional coding'.

The ability to perform economical maximum likelihood soft decision decoding is one of the major benefits of convolutional codes.

This is in contrast to classic block codes, which are generally represented by a time-variant trellis and therefore are typically hard-decision decoded.

The memory is often called the "constraint length" K, where the output is a function of the current input as well as the previous K − 1 inputs.

The performance of a punctured convolutional code generally scales well with the amount of parity transmitted.

It was thought that convolutional codes could be decoded with arbitrary quality at the expense of computation and delay.

Recursive systematic convolutional codes were invented by Claude Berrou around 1991.

Convolutional codes are used extensively to achieve reliable data transfer in numerous applications, such as digital video, radio, mobile communications (e.g., in GSM, GPRS, EDGE and 3G networks (until 3GPP Release 7)[3][4]) and satellite communications.

Prior to turbo codes, such constructions were the most efficient, coming closest to the Shannon limit.

To convolutionally encode data, start with k memory registers, each holding one input bit.

Using the generator polynomials and the existing values in the remaining registers, the encoder outputs n symbols.
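As a minimal sketch of this process (an illustrative rate-1/2, K = 3 non-systematic code with generators 111 and 101, not any particular standard's code):

```python
# Hedged sketch: a rate-1/2, K = 3 non-systematic convolutional encoder.
# The generators (1,1,1) and (1,0,1) are illustrative only.
def conv_encode(bits, generators=((1, 1, 1), (1, 0, 1))):
    """Encode a bit sequence; K = 3, so each output depends on 3 input bits."""
    k = len(generators[0])          # constraint length
    state = [0] * (k - 1)           # memory registers, initially zero
    out = []
    for b in bits:
        window = [b] + state        # current input plus previous K-1 inputs
        for g in generators:
            # each output symbol is a mod-2 inner product with a generator
            out.append(sum(gi * wi for gi, wi in zip(g, window)) % 2)
        state = window[:-1]         # shift the register
    return out

print(conv_encode([1, 0, 1, 1]))   # → [1, 1, 1, 0, 0, 0, 0, 1]
```

Each input bit produces n = 2 output symbols, so four input bits yield eight coded bits.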

Codes with output symbols that do not include the input data are called non-systematic.

A convolutional encoder is so called because it performs a convolution of the input stream with the encoder's impulse responses:

y_i^j = Σ_{k=0}^{∞} h_k^j · x_{i−k} = (x ∗ h^j)_i,

where x is an input sequence, y^j is the sequence from output j, h^j is the impulse response for output j, and ∗ denotes discrete convolution, with all arithmetic performed modulo 2.
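This convolution view can be checked directly in a few lines; the generators 111 and 101 below are illustrative, and each output branch's impulse response is simply its generator taps:

```python
# Sketch: each output stream j is the mod-2 discrete convolution of the
# input x with that branch's impulse response h^j,
# i.e. y_i^j = sum_k h_k^j * x_{i-k} (mod 2).
def mod2_convolve(x, h):
    return [sum(h[k] * x[i - k] for k in range(len(h)) if i - k >= 0) % 2
            for i in range(len(x))]

x = [1, 0, 1, 1]
h0, h1 = [1, 1, 1], [1, 0, 1]   # impulse responses = generator taps (assumed)
print(mod2_convolve(x, h0))     # → [1, 1, 0, 0]  (output stream 0)
print(mod2_convolve(x, h1))     # → [1, 0, 0, 1]  (output stream 1)
```

Interleaving the two streams reproduces the serial output of the shift-register encoder for the same generators.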

Every output of an encoder can be described by its own transfer function, which is closely related to the generator polynomial.

All possible transitions can be shown in a trellis diagram, as below: an actual encoded sequence can be represented as a path on this graph.

This diagram gives us an idea about decoding: if a received sequence doesn't fit this graph, then it was received with errors, and we must choose the nearest correct (fitting the graph) sequence.

It can be calculated as t = ⌊(d_free − 1) / 2⌋. Since a convolutional code processes a continuous bitstream rather than blocks, the value of t applies to a quantity of errors located relatively near to each other.
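As a worked example, assuming the standard relation t = ⌊(d_free − 1)/2⌋: the widely used K = 7, rate-1/2 code (octal generators 171, 133) has free distance 10.

```python
# Sketch: errors correctable within one free-distance span,
# t = floor((d_free - 1) / 2).  d_free = 10 holds for the standard
# K = 7, rate-1/2 code with octal generators 171 and 133.
def correcting_capability(d_free):
    return (d_free - 1) // 2

print(correcting_capability(10))   # → 4
```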

Free distance can be interpreted as the minimal length of an erroneous "burst" at the output of a convolutional decoder.

A popular solution for this problem is to interleave data before convolutional encoding, so that the outer block code (usually Reed–Solomon) can correct most of the errors.

For relatively small values of k, the Viterbi algorithm is universally used as it provides maximum likelihood performance and is highly parallelizable.

Viterbi decoders are thus easy to implement in VLSI hardware and in software on CPUs with SIMD instruction sets.
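A toy hard-decision Viterbi decoder for the small rate-1/2, K = 3 code with generators 111 and 101 (an illustrative sketch, not an optimized SIMD implementation) can be written as:

```python
# Hedged sketch: hard-decision Viterbi decoding.  States are the two
# register bits (bit 0 = most recent input); the trellis is walked forward
# accumulating Hamming branch metrics, and the cheapest survivor path wins.
def viterbi_decode(received, generators=((1, 1, 1), (1, 0, 1))):
    k = len(generators[0])                 # constraint length
    n_out = len(generators)                # output bits per input bit
    n_states = 1 << (k - 1)
    INF = float("inf")

    def branch(state, bit):
        # output symbols and next state for input `bit` taken from `state`
        window = [bit] + [(state >> i) & 1 for i in range(k - 1)]
        out = tuple(sum(g[i] * window[i] for i in range(k)) % 2
                    for g in generators)
        return out, (bit | (state << 1)) & (n_states - 1)

    metric = [0] + [INF] * (n_states - 1)  # encoder starts in state 0
    paths = [[] for _ in range(n_states)]
    for t in range(len(received) // n_out):
        sym = received[n_out * t: n_out * (t + 1)]
        new_metric = [INF] * n_states
        new_paths = [None] * n_states
        for s in range(n_states):
            if metric[s] == INF:
                continue
            for b in (0, 1):
                out, nxt = branch(s, b)
                cost = metric[s] + sum(o != r for o, r in zip(out, sym))
                if cost < new_metric[nxt]:
                    new_metric[nxt] = cost
                    new_paths[nxt] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[min(range(n_states), key=lambda s: metric[s])]

# A clean codeword decodes exactly; a single flipped bit is corrected.
print(viterbi_decode([1, 1, 1, 0, 0, 0, 0, 1]))   # → [1, 0, 1, 1]
print(viterbi_decode([1, 1, 0, 0, 0, 0, 0, 1]))   # → [1, 0, 1, 1]
```

Because the survivor metrics for all states at one trellis step are independent, a production decoder computes them in parallel, which is what makes the algorithm friendly to VLSI and SIMD implementations.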

Both Viterbi and sequential decoding algorithms return hard decisions: the bits that form the most likely codeword.

An approximate confidence measure can be added to each bit by use of the Soft output Viterbi algorithm.

Maximum a posteriori (MAP) soft decisions for each bit can be obtained by use of the BCJR algorithm.

In practice, predefined convolutional code structures obtained through scientific research are used in industry.

This relates to the possibility of selecting catastrophic convolutional codes, which cause a larger number of errors.

An especially popular Viterbi-decoded convolutional code, used at least since the Voyager program, has a constraint length K of 7 and a rate r of 1/2.

The convolutional code with a constraint length of 5 and a rate of 1/2 is used in GSM as an error correction technique.

The specific order of transmission is defined by the respective communication standard.

Punctured convolutional codes are widely used in satellite communications, for example, in Intelsat systems and Digital Video Broadcasting.
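A puncturing step is just a periodic deletion of coded bits. The sketch below uses an illustrative keep-pattern that raises a rate-1/2 mother code to rate 3/4; the actual patterns are fixed by each standard (e.g. DVB), so treat this one as an assumption:

```python
# Hedged sketch of puncturing: transmit only the coded bits whose pattern
# entry is 1.  Keeping 4 of every 6 bits turns 6 input bits -> 12 coded
# bits -> 8 transmitted bits, i.e. rate 6/8 = 3/4.
def puncture(coded, pattern=(1, 1, 0, 1, 1, 0)):
    reps = len(coded) // len(pattern) + 1
    return [bit for bit, keep in zip(coded, pattern * reps) if keep]

print(puncture([1, 1, 1, 0, 0, 0, 0, 1, 1, 0, 1, 1]))   # → [1, 1, 0, 0, 0, 1, 0, 1]
```

The decoder reinserts neutral "erasure" values at the punctured positions and runs the ordinary rate-1/2 Viterbi algorithm, which is why one mother code can serve many rates.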

Stages of channel coding in GSM. [2] Block encoder and parity check – error-detection part. Convolutional encoder and Viterbi decoder – error-correction part. Interleaving and deinterleaving – spreading code words in the time domain to mitigate bursty distortions.
Img.1. Rate 1/3 non-recursive, non-systematic convolutional encoder with constraint length 3
Img.2. Rate 1/2 8-state recursive systematic convolutional encoder. Used as constituent code in 3GPP 25.212 Turbo Code.
Img. 3. Two-state recursive systematic convolutional (RSC) code. Also called an 'accumulator.'
Img. 4. Four-state recursive systematic convolutional (RSC) code.
Img. 5. Sixteen-state recursive systematic convolutional (RSC) code.
Img.6. A trellis diagram for the encoder on Img.1. A path through the trellis is shown as a red line. The solid lines indicate transitions where a "0" is input and the dashed lines where a "1" is input.
Theoretical bit-error rate curves of encoded QPSK (recursive and non-recursive, soft decision), additive white Gaussian noise channel. The curves are barely distinguishable because the codes have approximately the same free distances and weight distributions.
Bit-error-ratio curves for convolutional codes with different digital modulations (QPSK, 8-PSK, 16-QAM, 64-QAM) and LLR algorithms [8][9] (exact [10] and approximate [11]) over an additive white Gaussian noise channel.
Shift register for the (7, [171, 133]) convolutional code. Branches: G1 = 171 (octal) = 1 + D + D^2 + D^3 + D^6, G2 = 133 (octal) = 1 + D^2 + D^3 + D^5 + D^6. All arithmetic is performed modulo 2.
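A compact encoder sketch for this polynomial pair, assuming the common convention that octal generators are read with the current input in the most significant tap position:

```python
# Hedged sketch: K = 7, rate-1/2 encoder with octal generators 171 and 133.
# Convention assumed: bit 6 of the 7-bit window = current input, bit 0 = D^6.
def encode_171_133(bits):
    g1, g2 = 0o171, 0o133           # tap masks for the two output branches
    state = 0                       # 7-bit window: current bit plus 6 registers
    out = []
    for b in bits:
        state = ((state >> 1) | (b << 6)) & 0x7F   # shift in the new bit
        out.append(bin(state & g1).count("1") % 2)  # branch G1: mod-2 tap sum
        out.append(bin(state & g2).count("1") % 2)  # branch G2: mod-2 tap sum
    return out

print(encode_171_133([1, 0, 1]))   # → [1, 1, 1, 0, 0, 0]
```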
Theoretical bit-error rate curves of encoded QPSK (soft decision), additive white Gaussian noise channel. Longer constraint lengths produce more powerful codes, but the complexity of the Viterbi algorithm increases exponentially with constraint lengths, limiting these more powerful codes to deep space missions where the extra performance is easily worth the increased decoder complexity.
Convolutional codes with 1/2 and 3/4 code rates (constraint length 7, soft decision, 4-QAM / QPSK / OQPSK). [14]
A turbo code with component codes 13, 15. [16] Turbo codes get their name because the decoder uses feedback, like a turbo engine. Permutation means the same as interleaving. C1 and C2 are recursive convolutional codes. Recursive and non-recursive convolutional codes do not differ greatly in BER performance; however, the recursive type is used in turbo convolutional codes because of its better interleaving properties. [17]