BCH codes were invented in 1959 by French mathematician Alexis Hocquenghem, and independently in 1960 by Raj Chandra Bose and D. K. Ray-Chaudhuri.[1][2][3] The name Bose–Chaudhuri–Hocquenghem (and the acronym BCH) arises from the initials of the inventors' surnames (mistakenly, in the case of Ray-Chaudhuri).
In particular, it is possible to design binary BCH codes that can correct multiple bit errors.
This simplifies the design of the decoder for these codes, which can be implemented using small low-power electronic hardware.
For any positive integer i, let m_i(x) be the minimal polynomial with coefficients in GF(q) of α^i. The generator polynomial of the BCH code is defined as the least common multiple g(x) = lcm(m_1(x), …, m_{d−1}(x)).
The BCH code with d = 2, 3 has the generator polynomial g(x) = m_1(x) = x^4 + x + 1. It has minimal Hamming distance at least 3 and corrects up to one error. Since the generator polynomial is of degree 4, this code has 11 data bits and 4 checksum bits.
The BCH code with d = 4, 5 has the generator polynomial g(x) = lcm(m_1(x), m_3(x)) = (x^4 + x + 1)(x^4 + x^3 + x^2 + x + 1) = x^8 + x^7 + x^6 + x^4 + 1. It has minimal Hamming distance at least 5 and corrects up to two errors. Since the generator polynomial is of degree 8, this code has 7 data bits and 8 checksum bits.
The BCH code with d = 6, 7 has the generator polynomial g(x) = lcm(m_1(x), m_3(x), m_5(x)) = x^10 + x^8 + x^5 + x^4 + x^2 + x + 1. It has minimal Hamming distance at least 7 and corrects up to three errors. Since the generator polynomial is of degree 10, this code has 5 data bits and 10 checksum bits. (This particular generator polynomial has a real-world application, in the "format information" of the QR code.)
The BCH code with d = 8 and higher has the generator polynomial g(x) = lcm(m_1(x), m_3(x), m_5(x), m_7(x)) = x^14 + x^13 + x^12 + ⋯ + x + 1. This code has minimal Hamming distance 15 and corrects 7 errors. It has 1 data bit and 14 checksum bits.
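The generator polynomials in these examples can be reproduced mechanically. The following sketch (Python, with illustrative names) assumes the same construction as above: GF(16) built from the reducing polynomial z^4 + z + 1 with primitive element α = z. It derives each minimal polynomial m_i(x) from the cyclotomic coset of i and multiplies the distinct ones together to obtain g(x) = lcm(m_1(x), …, m_{d−1}(x)).

```python
PRIM = 0b10011  # the reducing polynomial z^4 + z + 1

def gf16_mul(a, b):
    """Multiply two GF(16) elements stored as 4-bit integers."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= PRIM
    return r

def gf16_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf16_mul(r, a)
    return r

def minimal_poly(i):
    """m_i(x) = product over the cyclotomic coset of i of (x - alpha^j), as a GF(2) bitmask."""
    coset, j = set(), i % 15
    while j not in coset:
        coset.add(j)
        j = (2 * j) % 15
    poly = [1]                               # coefficients in GF(16), lowest degree first
    for j in coset:
        root = gf16_pow(2, j)                # alpha^j, with alpha = z = 2
        nxt = [0] * (len(poly) + 1)
        for k, c in enumerate(poly):         # multiply by (x + root); minus = plus in GF(2^m)
            nxt[k] ^= gf16_mul(c, root)
            nxt[k + 1] ^= c
        poly = nxt
    assert all(c in (0, 1) for c in poly)    # a full conjugate product has GF(2) coefficients
    return sum(c << k for k, c in enumerate(poly))

def gf2_poly_mul(a, b):
    """Carry-less product of two GF(2)[x] polynomials stored as bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def generator_poly(d):
    """g(x) = lcm(m_1, ..., m_{d-1}); the m_i are irreducible, so the lcm is the
    product of the distinct ones."""
    g, seen = 1, set()
    for i in range(1, d):
        m = minimal_poly(i)
        if m not in seen:
            seen.add(m)
            g = gf2_poly_mul(g, m)
    return g

for d in (3, 5, 7, 15):
    print(d, bin(generator_poly(d)))
```

Running it prints the four generator polynomials listed above as bitmasks (least significant bit = constant term).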
The BCH code itself is not prescriptive about the meaning of the coefficients of the polynomial; conceptually, a BCH decoding algorithm's sole concern is to find the valid codeword with the minimal Hamming distance to the received codeword.
This encoding method leverages the fact that subtracting the remainder from a dividend results in a multiple of the divisor.
The advantage of systematic coding is that the receiver can recover the original message by discarding everything after the first k symbols.
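As a concrete illustration of this remainder trick, here is a minimal sketch of systematic encoding over GF(2). The names are illustrative; polynomials are stored as bitmasks (bit k = coefficient of x^k), and the generator is x^4 + x + 1 from the (15, 11) example above.

```python
def gf2_mod(dividend, divisor):
    """Remainder of polynomial division in GF(2)[x] (bitmask representation)."""
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

def encode_systematic(message, gen):
    """Place the message in the high-order coefficients and append the remainder.
    message*x^deg(g) minus its remainder mod g(x) is a multiple of g(x), i.e. a codeword."""
    deg_g = gen.bit_length() - 1
    shifted = message << deg_g
    remainder = gf2_mod(shifted, gen)
    return shifted ^ remainder               # subtraction = XOR over GF(2)

GEN = 0b10011                                # x^4 + x + 1, the (15, 11) generator above
codeword = encode_systematic(0b10110101101, GEN)
assert gf2_mod(codeword, GEN) == 0           # a valid codeword is divisible by g(x)
print(bin(codeword))
```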
The most common ones follow this general outline:
1. Calculate the syndromes S_j for the received vector.
2. Determine the number of errors and the error locator polynomial Λ(x) from the syndromes.
3. Calculate the roots of the error locator polynomial to find the error locations.
4. Calculate the error values at those error locations.
5. Correct the errors.
During some of these steps, the decoding algorithm may determine that the received vector has too many errors and cannot be corrected.
If the received vector has more errors than the code can correct, the decoder may unknowingly produce an apparently valid message that is not the one that was sent.
Examining the syndrome values thus isolates the error vector so one can begin to solve for it.
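The syndrome step can be sketched as follows, assuming the narrow-sense (15, 7) code with generator x^8 + x^7 + x^6 + x^4 + 1 from the examples above, GF(16) built from z^4 + z + 1, and α = z (all names are illustrative). The syndromes S_j = R(α^j) of any codeword are zero, and flipping bits changes them to values that depend only on the error positions.

```python
PRIM = 0b10011                   # z^4 + z + 1
GEN = 0b111010001                # x^8 + x^7 + x^6 + x^4 + 1, the d = 5 generator above

def gf16_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= PRIM
    return r

def gf16_pow(a, n):
    r = 1
    for _ in range(n % 15):
        r = gf16_mul(r, a)
    return r

def eval_at_alpha_power(poly, j):
    """Evaluate a GF(2)[x] polynomial (bitmask) at alpha^j, by Horner's rule."""
    x, r = gf16_pow(2, j), 0
    for k in reversed(range(poly.bit_length())):
        r = gf16_mul(r, x) ^ ((poly >> k) & 1)
    return r

def syndromes(received, d):
    """S_j = R(alpha^j) for j = 1, ..., d-1 (narrow-sense code)."""
    return [eval_at_alpha_power(received, j) for j in range(1, d)]

def gf2_mod(dividend, divisor):
    dlen = divisor.bit_length()
    while dividend.bit_length() >= dlen:
        dividend ^= divisor << (dividend.bit_length() - dlen)
    return dividend

# Any codeword (a multiple of g) has all-zero syndromes; flipping bits changes the
# syndromes to E(alpha^j), which depends only on the error positions.
codeword = (0b1010011 << 8) ^ gf2_mod(0b1010011 << 8, GEN)
print(syndromes(codeword, 5))                     # [0, 0, 0, 0]
errored = codeword ^ (1 << 3) ^ (1 << 12)         # two bit errors at positions 3 and 12
print(syndromes(errored, 5))                      # equal to the syndromes of the error vector
print(syndromes((1 << 3) ^ (1 << 12), 5))
```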
If there are two or more errors, it is not immediately obvious how to begin solving the resulting syndromes for the unknown error locations and values.
One proceeds by defining an error locator polynomial, whose roots yield the positions where errors occur in the received word; hence the name 'error locator' polynomial.
Once the error locations are known, the error values can be determined by solving the corresponding linear system. However, there is a more efficient method known as the Forney algorithm.
The error values are then e_{j_k} = −X_k^(1−c) Ω(X_k^(−1)) / Λ′(X_k^(−1)), where Ω(x) is the error evaluator polynomial and Λ′(x) is the formal derivative of the error locator polynomial. For narrow-sense BCH codes, c = 1, so the expression simplifies to e_{j_k} = −Ω(X_k^(−1)) / Λ′(X_k^(−1)). The Forney algorithm is based on Lagrange interpolation and techniques of generating functions.
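The following sketch evaluates the narrow-sense (c = 1) form of the formula over GF(16) with α = z. For brevity it builds the error locator Λ(x) = ∏(1 − X_k x) directly from two assumed error positions rather than solving for it, and it takes the error evaluator to be Ω(x) = S(x)Λ(x) mod x^(2t), a common convention that may differ in detail from the definitions used elsewhere; all names and parameters are illustrative.

```python
PRIM = 0b10011                   # z^4 + z + 1

def gf16_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= PRIM
    return r

def gf16_pow(a, n):
    r = 1
    for _ in range(n % 15):
        r = gf16_mul(r, a)
    return r

def gf16_inv(a):
    return gf16_pow(a, 14)       # a^14 = a^(-1) in GF(16)*

def pmul(a, b):
    """Product of polynomials with GF(16) coefficients, lowest degree first."""
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= gf16_mul(x, y)
    return out

def peval(p, x):
    r = 0
    for c in reversed(p):
        r = gf16_mul(r, x) ^ c
    return r

t = 2
error_positions = [3, 12]                           # assumed bit errors (values 1)
X = [gf16_pow(2, p) for p in error_positions]       # error locators X_k = alpha^p

# Syndromes S_j = sum_k X_k^j for a binary code, j = 1..2t; S(x) = S_1 + S_2 x + ...
S = [0] * (2 * t)
for j in range(1, 2 * t + 1):
    for x in X:
        S[j - 1] ^= gf16_pow(x, j)

# Error locator Lambda(x) = prod_k (1 - X_k x) and evaluator Omega(x) = S(x)Lambda(x) mod x^(2t)
lam = [1]
for x in X:
    lam = pmul(lam, [1, x])
omega = pmul(S, lam)[:2 * t]

# Formal derivative in characteristic 2: terms of even degree vanish
lam_der = [c if k % 2 == 1 else 0 for k, c in enumerate(lam)][1:]

# Narrow-sense Forney formula: e_k = Omega(X_k^(-1)) / Lambda'(X_k^(-1))
for p, x in zip(error_positions, X):
    xinv = gf16_inv(x)
    e = gf16_mul(peval(omega, xinv), gf16_inv(peval(lam_der, xinv)))
    print(p, e)                                     # error value 1 at each position
```

For a binary code the recovered error values are necessarily 1; a value other than 1 would indicate a decoding failure.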
Let us run the extended Euclidean algorithm for locating the greatest common divisor of two polynomials.
The goal is to find a codeword which differs from the received word as little as possible on the readable positions.
For the purpose of locating error positions, we can change the set of syndromes in a similar way to reflect all unreadable characters.
If Γ(x) denotes the polynomial eliminating the influence of these coordinates, we obtain the modified syndromes. In the Euclidean algorithm, we try to correct at most (d − 1 − k)/2 errors (on readable positions), where k is the number of unreadable characters, because with a bigger error count there could be more codewords at the same distance from the received word.
A decoding failure can also be detected by the Forney formula returning an error value outside the transmitted alphabet.
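Setting unreadable characters aside, the Euclidean approach to finding the error locator can be sketched as follows. This is a minimal illustration rather than the article's exact procedure: it assumes GF(16) with α = z, a narrow-sense code with t = 2, the syndrome polynomial S(x) = S_1 + S_2 x + …, and the convention Λ(x) = ∏(1 − X_k x); the error positions 3 and 12 are chosen purely for the demonstration.

```python
PRIM = 0b10011                   # z^4 + z + 1

def gf16_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= PRIM
    return r

def gf16_pow(a, n):
    r = 1
    for _ in range(n % 15):
        r = gf16_mul(r, a)
    return r

def gf16_inv(a):
    return gf16_pow(a, 14)

def deg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def padd(a, b):
    out = [0] * max(len(a), len(b))
    for i, c in enumerate(a):
        out[i] ^= c
    for i, c in enumerate(b):
        out[i] ^= c
    return out

def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] ^= gf16_mul(x, y)
    return out

def pdivmod(a, b):
    a, db = list(a), deg(b)
    q, inv_lead = [0] * len(a), gf16_inv(b[db])
    while deg(a) >= db:
        shift = deg(a) - db
        coef = gf16_mul(a[deg(a)], inv_lead)
        q[shift] ^= coef
        for i in range(db + 1):
            a[i + shift] ^= gf16_mul(coef, b[i])
    return q, a

def peval(p, x):
    r = 0
    for c in reversed(p):
        r = gf16_mul(r, x) ^ c
    return r

def key_equation(S, t):
    """Extended Euclidean algorithm on x^(2t) and S(x), stopped once the remainder
    degree drops below t; the Bezout coefficient of S(x) is proportional to the
    error locator Lambda(x), the final remainder to the evaluator Omega(x)."""
    r_old, r_new = [0] * (2 * t) + [1], list(S)
    u_old, u_new = [0], [1]
    while deg(r_new) >= t:
        q, r = pdivmod(r_old, r_new)
        r_old, r_new = r_new, r
        u_old, u_new = u_new, padd(u_old, pmul(q, u_new))
    c = gf16_inv(u_new[0])                    # normalize so that Lambda(0) = 1
    return [gf16_mul(c, a) for a in u_new], [gf16_mul(c, a) for a in r_new]

# Demo: two bit errors at assumed positions 3 and 12, syndromes S_j = sum_k alpha^(j*p_k)
t, error_positions = 2, [3, 12]
S = [0] * (2 * t)
for j in range(1, 2 * t + 1):
    for p in error_positions:
        S[j - 1] ^= gf16_pow(2, j * p)
lam, omega = key_equation(S, t)
for p in error_positions:                     # alpha^(-p) must be a root of Lambda
    assert peval(lam, gf16_pow(2, (15 - p) % 15)) == 0
print(lam, omega)
```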
Next, apply the Peterson procedure by row-reducing the following augmented matrix.
Due to the zero row, S_{3×3} is singular, which is no surprise since only two errors were introduced into the codeword.
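A sketch of this step, with illustrative names and parameters (GF(16) with α = z, a code correcting up to three errors, and two injected bit errors): build the ν × ν syndrome matrix, attempt to solve it, and reduce ν whenever the matrix turns out to be singular, mirroring the singular S_{3×3} above.

```python
PRIM = 0b10011                   # z^4 + z + 1

def gf16_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x10:
            a ^= PRIM
    return r

def gf16_pow(a, n):
    r = 1
    for _ in range(n % 15):
        r = gf16_mul(r, a)
    return r

def gf16_inv(a):
    return gf16_pow(a, 14)

def peval(p, x):
    r = 0
    for c in reversed(p):
        r = gf16_mul(r, x) ^ c
    return r

def solve(M, rhs):
    """Gauss-Jordan elimination over GF(16); returns None if the matrix is singular."""
    n = len(M)
    A = [row[:] + [b] for row, b in zip(M, rhs)]      # augmented matrix
    for col in range(n):
        pivot = next((r for r in range(col, n) if A[r][col]), None)
        if pivot is None:
            return None                               # no pivot available: singular
        A[col], A[pivot] = A[pivot], A[col]
        inv = gf16_inv(A[col][col])
        A[col] = [gf16_mul(inv, v) for v in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [x ^ gf16_mul(f, y) for x, y in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]

# Syndromes S_1..S_6 produced by two bit errors (values 1) at assumed positions 3 and 12
error_positions = [3, 12]
S = [0] * 6
for j in range(1, 7):
    for p in error_positions:
        S[j - 1] ^= gf16_pow(2, j * p)

nu = 3
while nu > 0:
    M = [[S[j + i] for i in range(nu)] for j in range(nu)]
    rhs = [S[j + nu] for j in range(nu)]
    sol = solve(M, rhs)
    if sol is not None:
        break
    nu -= 1                      # singular system: assume one error fewer and retry

# sol = (lambda_nu, ..., lambda_1); Lambda(x) = 1 + lambda_1 x + ... + lambda_nu x^nu
lam = [1] + sol[::-1]
print(nu, lam)                   # nu == 2, as only two errors were introduced
for p in error_positions:
    assert peval(lam, gf16_pow(2, 15 - p)) == 0      # the roots of Lambda are alpha^(-p)
```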
We replace the unreadable characters by zeros while creating the polynomial reflecting their positions
Let us show the algorithm behaviour for the case with a small number of errors.
Again, replace the unreadable characters by zeros while creating the polynomial reflecting their positions