Berlekamp–Welch algorithm

The Berlekamp–Welch algorithm, also known as the Welch–Berlekamp algorithm, is named after Elwyn R. Berlekamp and Lloyd R. Welch. The algorithm efficiently corrects errors in BCH codes and Reed–Solomon codes (which are a subset of BCH codes). Unlike many other decoding algorithms, such as the code-domain Berlekamp–Massey algorithm that uses syndrome decoding and the dual of the code, the Berlekamp–Welch decoding algorithm provides a method for decoding Reed–Solomon codes using just the generator matrix and not syndromes.

History of decoding Reed–Solomon codes

  1. In 1960, Peterson came up with an algorithm for decoding BCH codes.[1][2] His algorithm solves the important second stage of the generalized BCH decoding procedure and is used to calculate the coefficients of the error locator polynomial. This is crucial to the decoding of BCH codes.
  2. In 1963, Gorenstein and Zierler saw that BCH codes and Reed–Solomon codes have a common generalization and that the decoding algorithm extends to this more general situation.
  3. In 1968–69, Elwyn Berlekamp invented an algorithm for decoding BCH codes. James Massey recognized its application to linear feedback shift registers and simplified the algorithm.[3][4] Massey termed the algorithm the LFSR Synthesis Algorithm (Berlekamp Iterative Algorithm), but it is now known as the Berlekamp–Massey algorithm.
  4. In 1986, the Welch–Berlekamp algorithm was developed to solve the decoding equation of Reed–Solomon codes, using a fast method to solve a certain polynomial equation. The Berlekamp–Welch algorithm has a running time complexity of O(n^{3}). The following sections present Gemmell and Sudan's exposition of the Berlekamp–Welch algorithm.[5]

Error locator polynomial of Reed–Solomon codes

In the problem of decoding Reed–Solomon codes, the inputs are pairwise distinct evaluation points \alpha _{1},\ldots ,\alpha _{n} with \alpha _{i}\in {\mathbb  {F}}, a code of dimension k and distance d=n-k+1, and a received word y=(y_{1},\ldots ,y_{n})\in {\mathbb  {F}}^{n}. Our goal is to describe an algorithm that can correct e<{n-k+1 \over 2} errors in polynomial time. To do so we have to find a polynomial P over {\mathbb  {F}} of degree at most k-1 such that the number of indices i with P(\alpha _{i})\neq y_{i} is at most e. We assume that such a polynomial exists, i.e. that \Delta (y,(P(\alpha _{i}))_{{i=1}}^{n})\leq e<{d \over 2}={n-k+1 \over 2}.

Note that the coefficients of P are the encoded information. To find P, we use a polynomial that indicates the positions i where an error may have occurred. Thus we define E(X), an error locator polynomial over {\mathbb  {F}}, such that E(\alpha _{i})=0 whenever y_{i}\neq P(\alpha _{i}), with degree \deg(E)\leq e\leq {n-k \over 2}:

E(X)=\prod _{{\alpha _{i}\in S}}(X-\alpha _{i}) where S=\{\alpha _{i}|P(\alpha _{i})\neq y_{i}\}

We can also claim that for every 1\leq i\leq n, y_{i}E(\alpha _{i})=P(\alpha _{i})E(\alpha _{i}). This holds because whenever y_{i}\neq P(\alpha _{i}), both sides of the equation become 0 since E(\alpha _{i})=0, and whenever y_{i}=P(\alpha _{i}) the two sides are trivially equal.
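
As a concrete illustration of this identity, the following Python sketch (not part of the original exposition; the prime p = 13, the polynomial P(X) = 5 + 12X and the corrupted position are arbitrary choices made only for this example) builds the set S, evaluates E(X) as the product above, and checks the equality at every evaluation point.

    from math import prod

    # Illustration over GF(13); the prime, the polynomial and the error are arbitrary choices.
    p = 13
    alphas = [1, 2, 3, 4, 5, 6]                        # pairwise distinct evaluation points

    def P(x):                                          # the transmitted polynomial, degree < k = 2
        return (5 + 12 * x) % p

    y = [P(a) for a in alphas]
    y[2] = (y[2] + 7) % p                              # corrupt the value received at alpha = 3

    S = [a for a, yi in zip(alphas, y) if yi != P(a)]  # positions where an error occurred

    def E(x):                                          # E(X) = product over S of (X - alpha_i)
        return prod(x - a for a in S) % p

    # E vanishes exactly where y_i and P(alpha_i) disagree, so
    # y_i * E(alpha_i) = P(alpha_i) * E(alpha_i) at every evaluation point.
    assert all((yi * E(a)) % p == (P(a) * E(a)) % p for a, yi in zip(alphas, y))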

However, since both E(X) and P(X) are unknown, the main task of the decoding algorithm is to find P(X). To do this we use a seemingly wasteful yet very powerful method and define another polynomial Q(X)=P(X)E(X). The reason is that the n constraints y_{i}E(\alpha _{i})=P(\alpha _{i})E(\alpha _{i}) are quadratic in the unknown coefficients of P(X) and E(X). By introducing the product of the two unknown polynomials as a single new unknown, we increase the number of unknowns but make the equations linear. This method is called linearization[6] and is a very powerful tool.

Thus Q(X) is a polynomial over {\mathbb  {F}} having the properties:

  1. \deg(Q)\leq {{n-k \over 2}+k-1}
  2. Q(\alpha _{i})=y_{i}E(\alpha _{i}) for all 1\leq i\leq n
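
To see the linearity explicitly, write Q(X)=q_{0}+q_{1}X+\cdots +q_{{e+k-1}}X^{{e+k-1}} and E(X)=e_{0}+e_{1}X+\cdots +e_{e}X^{e}. For each i the condition Q(\alpha _{i})=y_{i}E(\alpha _{i}) then reads

q_{0}+q_{1}\alpha _{i}+\cdots +q_{{e+k-1}}\alpha _{i}^{{e+k-1}}-y_{i}(e_{0}+e_{1}\alpha _{i}+\cdots +e_{e}\alpha _{i}^{e})=0,

which, since the \alpha _{i} and y_{i} are known field elements, is a linear equation in the 2e+k+1 unknown coefficients q_{j} and e_{j}.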

This helps because if we manage to find Q(X) and E(X), we can easily find P(X) as P(X)={Q(X) \over E(X)}. The main task of the Berlekamp–Welch algorithm is therefore to find the degree-bounded polynomials Q(X) and E(X) with the properties above, and from them P(X).

Computing E(X) is as hard as finding the final answer, the polynomial P(X): once E(X) is known, P(X) can easily be recovered using erasure decoding for Reed–Solomon codes. The polynomial Q(X) is similarly as hard to find as E(X): for example, given Q(X) and y (such that y_{i}\neq 0 for 1\leq i\leq n), the error locations can be found by checking the positions where Q(\alpha _{i})=0. Thus the algorithm works on the principle that while each of the polynomials E(X) and Q(X) is hard to find individually, computing them together is much easier.

The Berlekamp–Welch decoder and algorithm

The Welch–Berlekamp decoder for Reed–Solomon codes consists of the Welch–Berlekamp algorithm augmented by some additional steps that prepare the received word for the algorithm and interpret the result of the algorithm.

The inputs given to the Berlekamp–Welch decoder are the block length n, the number of errors e with e<{n-k+1 \over 2}, and the received word (y_{i},\alpha _{i})_{{i=1}}^{n}, satisfying the condition that there exists at most one polynomial P(X) with \deg(P(X))\leq k-1 such that \Delta (y,(P(\alpha _{i}))_{i})\leq e.

The output of the decoder is either the polynomial P(X), or in some cases, a failure. This decoder functions in two steps as follows:

  1. This step is called the interpolation step, in which the decoder computes a nonzero polynomial E(X) of degree e and another polynomial Q(X) with \deg(Q(X))\leq e+k-1 such that y_{i}E(\alpha _{i})=Q(\alpha _{i}) for all 1\leq i\leq n. If polynomials satisfying this condition cannot be computed, the decoder outputs a failure.
  2. If E(X) divides Q(X), a polynomial P(X)={Q(X) \over E(X)} is defined. If \Delta (y,(P(\alpha _{i}))_{i})\leq e, then the decoder outputs P(X). If either condition is not satisfied, that is, if E(X) does not divide Q(X) or the resulting P(X) disagrees with y in more than e positions, the decoder returns a failure. (A sketch of one possible implementation of both steps follows this list.)
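
Both steps can be carried out with standard linear algebra over the underlying field. The following Python sketch is one possible implementation over a prime field GF(p), with all inputs taken as integers already reduced modulo p; the function names, the normalization of E(X) to be monic of degree exactly e, and the use of plain Gaussian elimination are choices made here for illustration and are not prescribed by the description above.

    # A sketch of the Berlekamp-Welch decoder over a prime field GF(p).
    # Polynomials are represented as lists of coefficients, lowest degree first.

    def berlekamp_welch(p, alphas, ys, k, e):
        """Try to recover P(X) with deg(P) <= k-1 from the values ys received at the
        distinct points alphas, assuming at most e errors. Returns the coefficient
        list of P(X), or None to signal a failure."""
        # Step 1 (interpolation): the unknowns are the e+k coefficients of Q(X) and the
        # e low-order coefficients of E(X) (E is taken monic of degree exactly e), so the
        # constraint y_i E(alpha_i) = Q(alpha_i) becomes, for each i,
        #   sum_j q_j a^j - y_i * sum_{j<e} e_j a^j = y_i * a^e.
        rows, rhs = [], []
        for a, y in zip(alphas, ys):
            row = [pow(a, j, p) for j in range(e + k)]            # coefficients of Q
            row += [(-y * pow(a, j, p)) % p for j in range(e)]    # coefficients of E
            rows.append(row)
            rhs.append((y * pow(a, e, p)) % p)
        sol = solve_mod_p(rows, rhs, p)
        if sol is None:
            return None                                           # interpolation failed
        Q, E = sol[:e + k], sol[e + k:] + [1]                     # reattach the leading 1 of E
        # Step 2: if E(X) divides Q(X), output P(X) = Q(X)/E(X), provided it is close to y.
        P, rem = poly_divmod(Q, E, p)
        if any(rem) or sum(poly_eval(P, a, p) != y for a, y in zip(alphas, ys)) > e:
            return None                                           # failure
        return P

    def poly_eval(poly, x, p):
        return sum(c * pow(x, i, p) for i, c in enumerate(poly)) % p

    def poly_divmod(num, den, p):
        """Polynomial long division over GF(p); returns (quotient, remainder)."""
        num, inv_lead = num[:], pow(den[-1], p - 2, p)
        quot = [0] * max(len(num) - len(den) + 1, 1)
        for i in range(len(num) - len(den), -1, -1):
            c = (num[i + len(den) - 1] * inv_lead) % p
            quot[i] = c
            for j, d in enumerate(den):
                num[i + j] = (num[i + j] - c * d) % p
        return quot, num

    def solve_mod_p(rows, rhs, p):
        """Gaussian elimination over GF(p); returns one solution, or None if inconsistent."""
        m = [row[:] + [b] for row, b in zip(rows, rhs)]
        n_rows, n_cols = len(m), len(m[0]) - 1
        pivots, r = [], 0
        for c in range(n_cols):
            pivot = next((i for i in range(r, n_rows) if m[i][c]), None)
            if pivot is None:
                continue                                          # free column
            m[r], m[pivot] = m[pivot], m[r]
            inv = pow(m[r][c], p - 2, p)
            m[r] = [(v * inv) % p for v in m[r]]
            for i in range(n_rows):
                if i != r and m[i][c]:
                    m[i] = [(vi - m[i][c] * vr) % p for vi, vr in zip(m[i], m[r])]
            pivots.append(c)
            r += 1
        if any(all(v == 0 for v in row[:-1]) and row[-1] for row in m):
            return None                                           # no solution exists
        sol = [0] * n_cols                                        # free unknowns set to 0
        for i, c in enumerate(pivots):
            sol[c] = m[i][-1]
        return sol

For instance, applied to the example in the Example section below, interpreted over GF(7), berlekamp_welch(7, [1, 2, 3, 4], [4, 3, 4, 1], 2, 1) returns [5, 6], the coefficient list of P(X)=5+6X, which equals 5-X modulo 7.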

Whenever the algorithm does not output a failure, it outputs the correct, desired polynomial P(X). To prove that the algorithm always outputs the desired polynomial, we need to prove a few claims made while describing the algorithm; we do so now.

Claim 1: There exists a pair of polynomials E(X) and Q(X) that satisfies Step 1 of the BW algorithm such that {Q(X) \over E(X)}=P(X).

Let E(X) be the error-locating polynomial for P(X), E(X)=X^{{e-\Delta (y,(P(\alpha _{i}))_{i})}}\prod _{{1\leq i\leq n\,|\,y_{i}\neq P(\alpha _{i})}}(X-\alpha _{i}), and let Q(X)=P(X)E(X). Note that \deg(Q(X))\leq \deg(P(X))+\deg(E(X))\leq e+k-1, that E(X) has degree exactly e, and that E(\alpha _{i})=0 whenever y_{i}\neq P(\alpha _{i}). We can now check that E(X) and Q(X) satisfy the equation y_{i}E(\alpha _{i})=Q(\alpha _{i}) from the first step of the BW algorithm. If E(\alpha _{i})=0, then Q(\alpha _{i})=P(\alpha _{i})E(\alpha _{i})=y_{i}E(\alpha _{i})=0. If E(\alpha _{i})\neq 0, then P(\alpha _{i})=y_{i} and therefore P(\alpha _{i})E(\alpha _{i})=y_{i}E(\alpha _{i}), just as we claimed.

This claim, however, only shows that there exists a pair of polynomials E(X) and Q(X) such that P(X) = Q(X)/E(X); it does not guarantee that the algorithm described above actually outputs such a pair. We therefore turn to another claim that, together with the one above, establishes this and thereby proves the correctness of the algorithm.

Claim 2: Any two solutions (E_{1}(X),Q_{1}(X)) and (E_{2}(X),Q_{2}(X)) that satisfy the first step of the Berlekamp–Welch algorithm given above also satisfy the equation {Q_{1}(X) \over E_{1}(X)}={Q_{2}(X) \over E_{2}(X)}.

The polynomials Q_{1}(X)E_{2}(X) and Q_{2}(X)E_{1}(X) both have degree at most 2e+k-1. We define another polynomial R(X)=Q_{1}(X)E_{2}(X)-Q_{2}(X)E_{1}(X). (i)

Note that \deg(R(X))\leq 2e+k-1. From step 1 of the Berlekamp–Welch algorithm we also know that y_{i}E_{1}(\alpha _{i})=Q_{1}(\alpha _{i}) and y_{i}E_{2}(\alpha _{i})=Q_{2}(\alpha _{i}). (ii)

Now, substituting Q_{1}(\alpha _{i}) and Q_{2}(\alpha _{i}) from equation (ii) into equation (i), we get R(\alpha _{i})=y_{i}E_{1}(\alpha _{i})E_{2}(\alpha _{i})-y_{i}E_{2}(\alpha _{i})E_{1}(\alpha _{i})=0 for 1\leq i\leq n.

Thus the polynomial R(X) has n roots, while \deg(R(X))\leq 2e+k-1<n by the upper bound on e. Since \deg(R(X))<n, R(X) must be the zero polynomial; equivalently, the polynomials Q_{1}(X)E_{2}(X) and Q_{2}(X)E_{1}(X) agree on more points than their degree and hence are identical. Since E_{1}(X)\neq 0 and E_{2}(X)\neq 0, this implies {Q_{1}(X) \over E_{1}(X)}={Q_{2}(X) \over E_{2}(X)}, as claimed.

Based on the above claims, we can safely state that whenever the Berlekamp–Welch algorithm outputs a polynomial P(X), that output is correct.

We can now claim that the algorithm can be implemented with a running time of O(n^{3}). This can be seen as follows: in Step 1 of the algorithm, the polynomials Q(X) and E(X) have e+k and e+1 unknown coefficients respectively, and each constraint y_{i}E(\alpha _{i})=Q(\alpha _{i}) for 1\leq i\leq n is a linear equation in these unknowns. We therefore get a system of n linear equations in 2e+k+1 < n+2 unknowns. By the first claim this system has a solution (with E(X) of degree exactly e, as constructed there), and it can be solved in O(n^{3}) time, for example by Gaussian elimination. Step 2 of the algorithm can also be implemented in O(n^{3}) time by polynomial long division. Hence the Berlekamp–Welch algorithm can be used to uniquely decode any [n,k]_{q} Reed–Solomon code in O(n^{3}) time, correcting any number of errors e<{n-k+1 \over 2}.

Example

The error locator polynomial serves to "neutralize" errors in P by making Q zero at those points, so that the system of linear equations is not affected by the inaccuracy in the input.

Consider a simple example where a redundant set of points is used to represent the line y=5-x, and one of the points is incorrect. The points the algorithm receives as input are (1,4),(2,3),(3,4),(4,1), where (3,4) is the defective point. The algorithm must solve the following system of equations:

{\begin{alignedat}{1}Q(1)&=4*E(1)\\Q(2)&=3*E(2)\\Q(3)&=4*E(3)\\Q(4)&=1*E(4)\\\end{alignedat}}


Given a solution Q and E to this system of equations, it is evident that at each of the points x=1,2,3,4 one of the following must hold: either Q(x_{i})=E(x_{i})=0, or P(x_{i})={Q(x_{i}) \over E(x_{i})}=y_{i}. Since E has degree one, the former can hold at only one point. Therefore, P(x_{i})=y_{i} at the other three points at least.

Letting E(x)=x+e_{0} and Q(x)=q_{0}+q_{1}x+q_{2}x^{2} and bringing E(x) to the left, we can rewrite the system thus:

{\begin{alignedat}{10}q_{0}&+&q_{1}&+&q_{2}&-&4e_{0}&-&4&=&0\\q_{0}&+&2q_{1}&+&4q_{2}&-&3e_{0}&-&6&=&0\\q_{0}&+&3q_{1}&+&9q_{2}&-&4e_{0}&-&12&=&0\\q_{0}&+&4q_{1}&+&16q_{2}&-&e_{0}&-&4&=&0\end{alignedat}}


This system can be solved through Gaussian elimination, and gives the values:

q_{0}=-15,q_{1}=8,q_{2}=-1,e_{0}=-3


Thus, Q(x)=-x^{2}+8x-15,E(x)=x-3. Dividing the two gives:

{Q(x) \over E(x)}=P(x)=5-x


5-x fits three of the four points given, so it is the most likely to be the original polynomial.
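
This calculation can be checked mechanically. The following Python sketch (illustrative only; it simply mirrors the hand computation above using exact rational arithmetic) solves the four equations by Gaussian elimination and then divides Q(x) by E(x):

    from fractions import Fraction

    # Unknowns ordered as (q0, q1, q2, e0); one row per received point (x, y),
    # encoding  q0 + q1*x + q2*x^2 - y*e0 = y*x,  i.e.  Q(x) = y*(x + e0).
    points = [(1, 4), (2, 3), (3, 4), (4, 1)]
    rows = [[Fraction(1), Fraction(x), Fraction(x * x), Fraction(-y), Fraction(y * x)]
            for x, y in points]

    # Gaussian elimination on the augmented 4x5 matrix.
    for c in range(4):
        pivot = next(r for r in range(c, 4) if rows[r][c] != 0)
        rows[c], rows[pivot] = rows[pivot], rows[c]
        rows[c] = [v / rows[c][c] for v in rows[c]]
        for r in range(4):
            if r != c:
                rows[r] = [vr - rows[r][c] * vc for vr, vc in zip(rows[r], rows[c])]

    q0, q1, q2, e0 = (rows[r][4] for r in range(4))
    print(q0, q1, q2, e0)                     # -15 8 -1 -3

    # P(x) = Q(x)/E(x): synthetic division of -x^2 + 8x - 15 by its factor (x - 3).
    b1 = q2                                   # leading coefficient of the quotient
    b0 = q1 + 3 * b1                          # constant coefficient of the quotient
    assert (b1, b0, q0 + 3 * b0) == (-1, 5, 0)    # quotient 5 - x, remainder 0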

References

  1. Berlekamp, Elwyn R. (1967), Nonbinary BCH decoding, International Symposium on Information Theory, San Remo, Italy.
  2. Berlekamp, Elwyn R. (1984) [1968], Algebraic Coding Theory, Laguna Hills, CA: Aegean Park Press, ISBN 0-89412-063-8. Previously published by McGraw–Hill, New York, NY.
  3. Massey, J. L. (1969), "Shift-register synthesis and BCH decoding", IEEE Trans. Information Theory, IT-15 (1): 122–127.
  4. Ben Atti, Nadia; Diaz-Toca, Gema M.; Lombardi, Henri, The Berlekamp–Massey Algorithm Revisited, CiteSeerX: 10.1.1.96.2743.
  5. Gemmell, Peter; Sudan, Madhu, "Highly resilient correctors for polynomials".
  6. Lipton, Dick, A provable example of the linearization method.
