Block code

From Wikipedia, the free encyclopedia

In computer science, a block code is a type of channel coding. It adds redundancy to a message so that, at the receiver, one can decode with minimal (theoretically zero) errors, provided that the information rate (the amount of information transported, in bits per second) does not exceed the channel capacity.

The main characterisation of a block code is that it is a fixed-length channel code (unlike source coding schemes such as Huffman coding, and unlike channel coding methods like convolutional encoding). Typically, a block code takes a k-digit information word and transforms it into an n-digit codeword.
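As an illustrative sketch (not part of the article), the simplest block code is the binary repetition code with k = 1 and n = 3: each information bit is sent three times, and the receiver decodes by majority vote, which corrects any single bit error per block.

```python
def encode(bit):
    """(3,1) repetition code: one information bit -> three code bits."""
    return [bit] * 3

def decode(block):
    """Majority vote over the block: corrects any single bit error."""
    return 1 if sum(block) >= 2 else 0
```

Even if one of the three transmitted bits is flipped by the channel, the majority vote recovers the original information bit.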

Block coding was the primary type of channel coding used in earlier mobile communication systems.

Formal definition

A block code is a code which encodes strings formed from an alphabet S into codewords by encoding each letter of S separately. Let (k_1,k_2,\ldots,k_m) be a sequence of natural numbers, each at most | S | . If S={s_1,s_2,\ldots,s_n} and a particular word W is written as W=s_{k_1}s_{k_2}\ldots s_{k_m}, then the codeword corresponding to W, namely C(W), is

C(W) = C(s_{k_1})C(s_{k_2})\ldots C(s_{k_m}).
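The definition above can be sketched in a few lines of Python. The alphabet and the per-letter codewords here are hypothetical, chosen only to illustrate that C(W) is the concatenation of the codewords of W's letters.

```python
# Hypothetical alphabet S = {a, b, c} with fixed-length codewords C(s).
C = {"a": "00", "b": "01", "c": "10"}

def encode_word(word, code=C):
    """C(W) = C(s_k1) C(s_k2) ... C(s_km): encode each letter separately
    and concatenate the resulting codewords."""
    return "".join(code[s] for s in word)
```

For example, `encode_word("abc")` concatenates `"00"`, `"01"`, and `"10"`.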

A[n,d]

The trade-off between efficiency (a large information rate) and correction capability can also be seen by attempting, for a fixed codeword length and a fixed correction capability (represented by the Hamming distance d), to maximize the total number of codewords. A[n,d] is the maximum number of codewords for a given codeword length n and Hamming distance d.
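For small parameters, a lower bound on A[n,d] can be computed with a greedy search (a sketch, not part of the article): scan all binary words of length n in lexicographic order and keep each word whose distance to every already-kept codeword is at least d. The result (sometimes called a lexicode) is not guaranteed to be optimal, though for the small cases below it happens to match the known values of A[n,d].

```python
from itertools import product

def hamming_distance(x, y):
    """Number of positions in which two equal-length words differ."""
    return sum(a != b for a, b in zip(x, y))

def greedy_code(n, d):
    """Greedily build a binary code of length n with minimum distance d.

    Scans all 2**n words in lexicographic order and keeps each word whose
    distance to every kept codeword is at least d. The size of the result
    is a lower bound on A[n,d], not necessarily the maximum.
    """
    code = []
    for word in product((0, 1), repeat=n):
        if all(hamming_distance(word, c) >= d for c in code):
            code.append(word)
    return code
```

For instance, `greedy_code(5, 3)` yields 4 codewords, matching the known value A[5,3] = 4.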

Information rate

When C is a binary block code, consisting of A codewords of length n bits, then the information rate of C is defined as

\frac{\log_2(A)}{n}.

When, for instance, the first k bits of each codeword are independent information bits, the information rate is

\frac{\log_2(2^k)}{n}=\frac{k}{n}.
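As a quick worked example (an illustration, not from the article): the well-known (7,4) Hamming code has A = 2^4 = 16 codewords of length n = 7, so its information rate is log2(16)/7 = 4/7.

```python
from math import log2

def information_rate(A, n):
    """Information rate log2(A)/n of a binary block code
    with A codewords of length n bits."""
    return log2(A) / n

# (7,4) Hamming code: 16 codewords of length 7 -> rate 4/7.
rate = information_rate(16, 7)
```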