Gilbert–Varshamov bound for linear codes

In coding theory, one is typically interested in bounds on parameters such as the rate R, the relative distance, and the block length. The Gilbert–Varshamov bound gives a lower bound on the rate that a code of a given relative distance can achieve. For codes over alphabets of size less than 49, the Gilbert–Varshamov bound is the best known lower bound on the rate achievable for a given relative distance.

Gilbert–Varshamov bound theorem

Theorem: Let q \ge 2. For every 0 \le \delta < 1 - \frac{1}{q} and 0 < \varepsilon \le 1 - H_q(\delta), there exists a code with rate R \ge 1 - H_q(\delta) - \varepsilon and relative distance \delta.

Here H_q is the q-ary entropy function defined as follows:

H_q(x) = x\log_q(q-1)-x\log_qx-(1-x)\log_q(1-x).
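
For illustration, here is a short Python sketch of this function (the name q_ary_entropy and the numerical example are illustrative only, not part of the statement):

    import math

    def q_ary_entropy(x, q):
        """H_q(x) = x*log_q(q-1) - x*log_q(x) - (1-x)*log_q(1-x), with H_q(0) = 0."""
        if x == 0:
            return 0.0
        if x == 1:
            return math.log(q - 1, q)
        return (x * math.log(q - 1, q)
                - x * math.log(x, q)
                - (1 - x) * math.log(1 - x, q))

    # Example: H_2(0.11) is roughly 0.5, so the theorem guarantees binary codes
    # of relative distance 0.11 with rate close to 1/2.
    print(round(q_ary_entropy(0.11, 2), 3))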

The above result was proved by Edgar Gilbert for general codes using a greedy construction (see Gilbert–Varshamov bound). For linear codes, Rom Varshamov proved it using the probabilistic method, applied to a random linear code. That proof is presented below.

High-level proof:

To show the existence of a linear code that satisfies those constraints, the probabilistic method is used to construct a random linear code. Specifically, the linear code is chosen by picking a random generator matrix G whose entries are chosen independently and uniformly over the field \mathbb{F}_q. Since the Hamming distance of a linear code equals the minimum weight of its non-zero codewords, proving that the linear code generated by G has Hamming distance at least d amounts to showing that wt(mG) \ge d for every m \in \mathbb{F}_q^k \backslash \left\{ 0 \right\}. Instead of showing this directly, we bound the probability of the complementary event: the probability that the linear code generated by G has Hamming distance less than d is exponentially small in n. By the probabilistic method, there then exists a linear code satisfying the theorem.
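
The construction can be illustrated with the following Python sketch, which assumes q is prime (so that arithmetic modulo q realizes \mathbb{F}_q); the function names are chosen for illustration, and the brute-force distance computation is only practical for small k:

    import itertools, random

    def random_generator_matrix(k, n, q):
        """A k x n matrix whose entries are chosen independently and uniformly over F_q."""
        return [[random.randrange(q) for _ in range(n)] for _ in range(k)]

    def encode(m, G, q):
        """The codeword mG, computed modulo q (q is assumed to be prime)."""
        k, n = len(G), len(G[0])
        return tuple(sum(m[i] * G[i][j] for i in range(k)) % q for j in range(n))

    def minimum_weight(G, q):
        """Minimum Hamming weight over all non-zero messages, i.e. the distance of the code."""
        k = len(G)
        return min(sum(c != 0 for c in encode(m, G, q))
                   for m in itertools.product(range(q), repeat=k) if any(m))

    # One random draw; the proof below shows that, for a suitable choice of k,
    # the minimum weight is at least d with probability 1 - q^(-Omega(n)).
    G = random_generator_matrix(k=4, n=12, q=2)
    print(minimum_weight(G, 2))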

Formal proof:

To show that there exists a linear code with Hamming distance at least d, we will show that the probability that a random linear code has distance less than d is exponentially small in n.

A linear code is defined by its generator matrix, so we use a "random generator matrix" G to describe a random linear code. Specifically, a random generator matrix G of size k \times n contains kn elements, which are chosen independently and uniformly over the field \mathbb{F}_q.

Recall that in a linear code, the distance equals the minimum weight of a non-zero codeword; this is one of the basic properties of linear codes.

Let wt(y) denote the weight of the codeword y. Then


\begin{align}
P & = {\Pr}_{\text{random }G} [\text{linear code generated by }G\text{ has distance} < d] \\
& = {\Pr}_{\text{random }G} [\text{there exists a codeword }y \ne 0\text{ in a linear code generated by }G\text{ such that }\mathrm{wt}(y) < d]
\end{align}

Also, if a codeword y belongs to the linear code generated by G, then y = mG for some vector m \in \mathbb{F}_q^k.

Therefore P = {\Pr}_{\text{random }G} [\text{there exists a vector }m \in \mathbb{F}_q^k \backslash \{ 0\}\text{ such that }wt(mG) < d]

By Boole's inequality (the union bound), we have:

P \le \sum\limits_{m \in \mathbb{F}_q^k \backslash \{ 0\} } {{\Pr}_{\text{random }G} } [wt(mG) < d]

Now for a given message m \in \mathbb{F}_q^k \backslash \{ 0\}, we want to compute W = {\Pr}_{\text{random }G} [wt(mG) < d]

Let \Delta(m_1,m_2) denote the Hamming distance between two messages m_1 and m_2.

Then for any message m, we have: wt(m) = \Delta(0,m).

Using this fact, we obtain the following equality:

W = \sum\limits_{y \in \mathbb{F}_q^n \text{ s.t. } \Delta (0,y) \le d - 1} {\Pr}_{\text{random }G} [mG = y]

Since m \ne 0 and the entries of G are independent and uniform over \mathbb{F}_q, the codeword mG is a uniformly random vector in \mathbb{F}_q^n. (Indeed, fix an index i with m_i \ne 0; for any fixed choice of the other rows of G, the map sending the i-th row of G to mG is a bijection of \mathbb{F}_q^n, so every value of mG is equally likely.)

So {\Pr}_{\text{random }G} [mG = y] = q^{ - n}
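
This uniformity can be checked empirically with a small Monte Carlo sketch (the parameters q = 2, k = 2, n = 3 and the fixed message m below are illustrative only, not part of the proof):

    import random
    from collections import Counter

    q, k, n, trials = 2, 2, 3, 100000
    m = (1, 0)                       # any fixed non-zero message

    counts = Counter()
    for _ in range(trials):
        G = [[random.randrange(q) for _ in range(n)] for _ in range(k)]
        y = tuple(sum(m[i] * G[i][j] for i in range(k)) % q for j in range(n))
        counts[y] += 1

    # Each of the q^n = 8 vectors should appear with empirical frequency close to 1/8.
    for y in sorted(counts):
        print(y, round(counts[y] / trials, 3))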

Let \text{Vol}_q(r,n) denote the volume of a Hamming ball of radius r in \mathbb{F}_q^n. Then:

W = \frac{\text{Vol}_q(d-1,n)}{q^n} \le \frac{\text{Vol}_q(\delta n,n)}{q^n} \le \frac{q^{nH_q(\delta)}}{q^n}

(The latter inequality follows from the upper bound \text{Vol}_q(\delta n, n) \le q^{nH_q(\delta)} on the volume of the Hamming ball, which holds for 0 \le \delta \le 1 - \frac{1}{q}.)
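
The volume and the entropy bound can be compared numerically; the following Python sketch (function names and parameters chosen only for illustration) verifies the inequality for one set of values:

    import math

    def hamming_ball_volume(r, n, q):
        """Vol_q(r, n): number of vectors in F_q^n within Hamming distance r of a fixed vector."""
        return sum(math.comb(n, i) * (q - 1) ** i for i in range(r + 1))

    def q_ary_entropy(x, q):
        return (x * math.log(q - 1, q) - x * math.log(x, q)
                - (1 - x) * math.log(1 - x, q)) if x > 0 else 0.0

    # Check Vol_q(delta*n, n) <= q^(n*H_q(delta)) for an illustrative choice of parameters.
    q, n, delta = 2, 100, 0.3
    lhs = hamming_ball_volume(int(delta * n), n, q)
    rhs = q ** (n * q_ary_entropy(delta, q))
    print(lhs <= rhs)   # True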

Then

 P \le q^k \cdot W \le q^k \frac{q^{nH_q(\delta)}}{q^n} = q^k q^{-n(1-H_q(\delta))}

By choosing k = (1-H_q(\delta)-\varepsilon)n, the above inequality becomes

 P \le q^{-\varepsilon n}

Finally, q^{ - \varepsilon n} \ll 1; that is, the failure probability is exponentially small in n, as desired. By the probabilistic method, there exists a linear code C with relative distance at least \delta and rate R at least (1-H_q(\delta)-\varepsilon), which completes the proof.
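
As a numerical illustration of the conclusion (the parameters below are arbitrary choices, not prescribed by the theorem):

    import math

    def q_ary_entropy(x, q):
        return (x * math.log(q - 1, q) - x * math.log(x, q)
                - (1 - x) * math.log(1 - x, q))

    q, delta, eps, n = 2, 0.1, 0.05, 200
    k = int((1 - q_ary_entropy(delta, q) - eps) * n)   # dimension of the random code
    print(k)                  # 96
    print(q ** (-eps * n))    # failure probability at most 2^(-10), about 0.00098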

Comments

  1. The Varshamov construction above is not explicit; that is, it does not specify a deterministic method to construct a linear code that satisfies the Gilbert–Varshamov bound. The naive approach is to go over all generator matrices G of size k \times n over the field \mathbb{F}_q and check whether the corresponding linear code has the required Hamming distance, but this leads to an exponential-time algorithm.
  2. There is also a Las Vegas construction that samples a random linear code and checks whether it has the required Hamming distance (see the sketch after this list). Since the distance check goes over all q^k - 1 non-zero codewords, this construction also has exponential running time.
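
A minimal sketch of such a Las Vegas construction in Python, assuming q is prime and using a brute-force distance check (all names and parameters are illustrative):

    import itertools, random

    def las_vegas_gv_code(k, n, d, q=2):
        """Resample a random k x n generator matrix over F_q until the generated code
        has minimum distance >= d.  The check enumerates all q^k - 1 non-zero
        messages, which is the exponential-time step."""
        while True:
            G = [[random.randrange(q) for _ in range(n)] for _ in range(k)]
            if all(sum(sum(m[i] * G[i][j] for i in range(k)) % q != 0
                       for j in range(n)) >= d
                   for m in itertools.product(range(q), repeat=k) if any(m)):
                return G

    # Illustrative parameters: a [14, 4] binary code of distance >= 4 is usually
    # found after a few resamples.
    print(las_vegas_gv_code(k=4, n=14, d=4))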

See also

  1. Gilbert–Varshamov bound (Gilbert's greedy construction for general codes)
  2. Hamming bound
  3. Probabilistic method

References

  1. Lecture 11: Gilbert–Varshamov Bound. Coding Theory Course. Professor Atri Rudra
  2. Lecture 9: Bounds on the Volume of Hamming Ball. Coding Theory Course. Professor Atri Rudra
  3. Coding Theory Notes: Gilbert–Varshamov Bound. Venkatesan Guruswami