Talk:Hamming code
From Wikipedia, the free encyclopedia
[edit] Error Correction with Hamming Codes
Forward Error Correction (FEC), the ability of a receiving station to correct a transmission error, can increase the throughput of a data link operating in a noisy environment. The transmitting station must append information to the data in the form of error-correction bits, but the increase in frame length may be modest relative to the cost of retransmission (although sometimes correction takes too long and retransmission is preferred). Hamming codes provide FEC using a "block parity" mechanism that can be inexpensively implemented. In general, their use allows the correction of single-bit errors and the detection of two-bit errors per unit of data, called a code word.
The fundamental principle embraced by Hamming codes is parity. Hamming codes, as mentioned before, are capable of correcting one error or detecting two errors, but not of doing both simultaneously. You may choose to use Hamming codes either as an error-detection mechanism to catch both single- and double-bit errors, or to correct single-bit errors. This is accomplished by using more than one parity bit, each computed on a different combination of bits in the data.
The number of parity or error check bits required is given by the Hamming rule, and is a function of the number of bits of information transmitted. The Hamming rule is expressed by the following inequality:
d + p + 1 <= 2^p    (1)
Where d is the number of data bits and p is the number of parity bits. The result of appending the computed parity bits to the data bits is called the Hamming code word. The size of the code word c is obviously d+p, and a Hamming code word is described by the ordered set (c,d).
Codes with values of p <= 2 are hardly worthwhile because of the overhead involved. The case of p = 3 is used in the following discussion to develop a (7,4) code using even parity, but larger code words are typically used in applications. A code where the equality case of Equation 1 holds is called a perfect code, of which a (7,4) code is an example.
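The Hamming rule can be sketched in Python (the function name is mine, for illustration only):

```python
# Sketch: find the minimum number of parity bits p satisfying the
# Hamming rule d + p + 1 <= 2**p for d data bits.
def min_parity_bits(d):
    p = 1
    while d + p + 1 > 2 ** p:
        p += 1
    return p

# For the (7,4) code below, 4 data bits need 3 parity bits, and equality
# holds (4 + 3 + 1 == 2**3), so it is a perfect code.
```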
A Hamming code word is generated by multiplying the data bits by a generator matrix G using modulo-2 arithmetic. The result of this multiplication is called the code word vector (c1, c2, c3, ..., cn), consisting of the original data bits and the calculated parity bits.
The generator matrix G used in constructing Hamming codes consists of I (the identity matrix) and a parity generation matrix A:
G = [ I : A ]
An example of Hamming code generator matrix:
    1 0 0 0 | 1 1 1
G = 0 1 0 0 | 0 1 1
    0 0 1 0 | 1 0 1
    0 0 0 1 | 1 1 0
The multiplication of a 4-bit vector (d1,d2,d3,d4) by G results in a 7-bit code word vector of the form (d1,d2,d3,d4,p1,p2,p3). It is clear that the A partition of G is responsible for the generation of the actual parity bits. Each column in A represents one parity calculation computed on a subset of d. The Hamming rule requires that p=3 for a (7,4) code, therefore A must contain three columns to produce three parity bits.
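The multiplication described above can be sketched in Python (a hedged illustration; the lists below transcribe the example G):

```python
# The generator matrix G from the example above, as rows of [I : A].
G = [
    [1, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 0],
]

def encode(d):
    """Multiply a 4-bit data vector d by G using modulo-2 arithmetic."""
    return [sum(d[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

# encode([1, 0, 0, 1]) gives the code word (1,0,0,1,0,0,1): the data
# bits followed by the three computed parity bits.
```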
If the columns of A are selected so each column is unique, it follows that (p1,p2,p3) represents parity calculations of three distinct subsets of d. As shown below, validating the received code word r involves multiplying it by a parity check matrix H to form s, the syndrome or parity check vector.
H = [A^T | I]

| 1 0 1 1 | 1 0 0 |   |1|   |0|
| 1 1 0 1 | 0 1 0 | * |0| = |0|
| 1 1 1 0 | 0 0 1 |   |0|   |0|
                      |1|
                      |0|
                      |0|
                      |1|
H*r = s
If all elements of s are zero, the code word was received correctly. If s contains non-zero elements, the bit in error can be determined by analyzing which parity checks have failed, as long as the error involves only a single bit.
For instance, if r = [1011001], s computes to [101]. That syndrome matches the third column of H, which corresponds to the third bit of r - the bit in error.
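The syndrome check and correction can be sketched in Python (function names are mine; H transcribes the parity check matrix above):

```python
# The parity check matrix H = [A^T | I] from the example above.
H = [
    [1, 0, 1, 1, 1, 0, 0],
    [1, 1, 0, 1, 0, 1, 0],
    [1, 1, 1, 0, 0, 0, 1],
]

def syndrome(r):
    """Compute s = H * r using modulo-2 arithmetic."""
    return [sum(H[i][j] * r[j] for j in range(7)) % 2 for i in range(3)]

def correct(r):
    """Fix a single-bit error by matching the syndrome to a column of H."""
    s = syndrome(r)
    if s == [0, 0, 0]:
        return list(r)  # received correctly
    cols = [[H[i][j] for i in range(3)] for j in range(7)]
    r = list(r)
    r[cols.index(s)] ^= 1  # flip the bit whose column matches s
    return r
```

For r = [1,0,1,1,0,0,1] the syndrome is [1,0,1], matching H's third column, so the third bit is flipped back.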
OPTIMAL CODING

From the practical standpoint of communications, a (7,4) code is not a good choice, because it involves non-standard character lengths. Designing a suitable code requires that both the ratio of parity to data bits and the processing time involved to encode and decode the data stream be minimized; a code that efficiently handles 8-bit data items is desirable. The Hamming rule shows that four parity bits can provide error correction for five to eleven data bits, with the latter being a perfect code. Analysis shows that the overhead introduced to the data stream is modest for the range of data bits available (11 bits: 36% overhead; 8 bits: 50% overhead; 5 bits: 80% overhead).
A (12,8) code therefore offers a reasonable compromise in the bit stream. The code enables data link packets to be constructed easily, since one parity byte serves two data bytes.
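The overhead figures quoted above are simply p/d with p = 4 parity bits, as a quick check shows:

```python
# Overhead of 4 parity bits over d data bits, as a rounded percentage.
overheads = {d: round(4 / d * 100) for d in (11, 8, 5)}
# {11: 36, 8: 50, 5: 80}
```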
[edit] Link back to channel coding?
This is a form of channel coding, so should this link back to there? - Mohammed Petiwala(MADDY)
[edit] (7,4)?
Maury, I like your rewrite a lot, but one thing jumps out immediately at me: isn't the Hamming code you describe a (7,4) rather than a (7,3)? You indicate earlier that Hamming's nomenclature classifies codes as ( total number of bits in a word , number of those bits that are data ). Well, if the powers of two are our parity bits, that's 1, 2 and 4, right? Leaving four data bits, at 3, 5, 6, and 7? -- Antaeus Feldspar 04:01, 30 Sep 2004 (UTC)
3, 5, 6, 7 <- 4 bits that are data ;-) -- KTC 00:00, 24 February 2005 (UTC)
KTC, that is exactly what I wrote. If you look at the article as it was at the time I wrote the above comment, you'll see that it described the standard Hamming code, the one that's meant if you just say "Hamming code", as a (7,3) code. It has since been changed to (7,4). I am not sure what you are trying to indicate. -- Antaeus Feldspar 01:25, 24 Feb 2005 (UTC)
My bad. I didn't read properly and thought you meant shouldn't (7,4) be (7,3). Ignore me ;-D -- KTC 23:01, 24 Feb 2005 (UTC)
[edit] Wow
I just spotted the new Example section. It's really beautiful. Good job to the contributor on that. Deco 21:10, 26 Jun 2005 (UTC)
[edit] Incorrect He matrix
I think the He matrix in the (7,4) example is incorrect. Its last three lines should be:
0 1 1 1
1 0 1 1
1 1 0 1
Zurlich 13:27, 15 January 2006 (UTC)
- I think you're right. Good catch. I've (hopefully) fixed it, now. Explanation: if you xor the rows in He as selected by each line (in turn) from Hd, you should get [0 0 0 0] each time. I.e. for the middle row of Hd, you are looking at rows 2, 3, 6 and 7 from He. It's probably possible to fix the example by changing Hd, instead. StuartBrady 14:45, 15 January 2006 (UTC)
[edit] Incorrect Hd matrix?
Looking at the Hd matrix:
The top row corresponds to bits 4-7. The middle row corresponds to bits 2-3 and 6-7. The bottom row corresponds to bits 1, 3, 5 and 7. So should the top and bottom rows be swapped? StuartBrady 15:22, 15 January 2006 (UTC)
[edit] Real Hamming Matrices?
Does the author of the (7,4) code example have a reference? It appears to contradict Hamming's 1950 paper, as well as the other example on the page.
Hamming placed the data bits in positions 3, 5, 6, 7. But this matrix places them in positions 1, 2, 3, 4. Is this still considered a "Hamming code" in modern terminology? To match Hamming's original description, the matrix should have the rows in a different order:
If both versions are commonly accepted, then there needs to be an explanation to this effect in the article. Scottcraig 07:33, 18 February 2006 (UTC)
- Teletext places data bits in 2, 4, 6, 8. The literature still refers to it as a Hamming code (even though it uses an extra parity bit). The ordering of the rows isn't important, so long as He and Hd are consistent. Note that it may be better to describe the positions of the check bits when referring to Hamming's original description, which I expect are 1, 2, 4, 8, 16, ...
- However, I found the whole article confusing. The explanation I was given is effectively (in my own words) "Place check bits at bit positions (with 1 being the least significant bit) 1, 2, 4, 8, 16, etc. Initially, set the check bits to 0. Then XOR together the indices of all bits that are set (counting from 1), and place the resulting value in the check bits. When receiving, repeat the XORing operation. If the result is 0, there were no detectable errors. Otherwise, if the error was a 1-bit error, the result is the index of the bit that was corrupted. If more bits were corrupted, there is no way to determine which to correct." The matrix representation just generalises this slightly. I'm not sure how/whether to add that to the article, though. Obviously there would need to be references and examples. --StuartBrady 15:00, 18 February 2006 (UTC)
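The XOR-of-indices description above can be sketched in Python (function names are mine, not from the article; bit positions are 1-based, with check bits at positions 1, 2, 4, 8, ...):

```python
def check_value(bits):
    """XOR together the 1-based indices of all set bits."""
    x = 0
    for i, b in enumerate(bits, start=1):
        if b:
            x ^= i
    return x

def place_check_bits(bits):
    """bits has 0 placeholders at positions 1, 2, 4, 8, ... (1-based)."""
    bits = list(bits)
    x = check_value(bits)
    p, k = 1, 0
    while p <= len(bits):
        bits[p - 1] = (x >> k) & 1  # write bit k of x at position 2**k
        p, k = p * 2, k + 1
    return bits
```

After placing the check bits, repeating check_value on the word gives 0; flipping a single bit makes it return that bit's index.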
-
- I don't find the whole article confusing, just a little bare. Your explanation of Hamming codes here is basically the same as that in the (11,7) example. But it could evidently be improved given that you find it confusing.
-
- My problem is with the (7,4) example. The author first states that the modern usage of "Hamming code" is restricted to the (7,4) code given in the 1950 paper. But then he immediately provides an example of a different code, along with "Hamming matrices" which aren't in the paper either.
-
- Actually, Hamming illustrates his codes using the (7,4) code, but describes codes of arbitrary sizes as well. He also relates his work to the theory of error-correcting codes, most notably his concept of (Hamming) distance between code words. I think his (7,4) code was actually used at Bell Labs, but I could use some confirmation on that. In his paper, he also defines a version with an extra parity bit, which allows for detection of 2-bit errors besides correction of 1-bit errors. (So maybe teletext is justly called a Hamming code.)
-
- In the course I'm taking right now, we learned about "linear codes" of which Hamming codes and cyclic codes are special cases. There appears to be a very short and cryptic page on these already. Maybe we should reorganise the material among these two pages. The historical Hamming codes could belong on this page, and the matrix theory could go on the other.
-
- In any case, the (7,4) example needs revamping at least, if not removal. The (11,7) example could use a better explanation, and maybe be modified to be of (7,4) size to match the historical description. Any other opinions? Scottcraig 21:19, 18 February 2006 (UTC)
-
-
- I'm concerned that you don't consider it confusing — note that I didn't say "totally unreadable". Sure, my explanation is basically the same as the (11,7) example, but the example has little explanation. I think the author of the (7,4) example is probably wrong to state that the modern usage of "Hamming code" is restricted to the (7,4) code, but I'm not sure. Using the version in Hamming's paper can't hurt. I didn't realise that the extra parity bit was in Hamming's paper — thanks for pointing that out. Given that it took me 20 or so minutes to verify a correction to a simple mistake in the matrix, perhaps the explanation could be improved. --StuartBrady 03:29, 19 February 2006 (UTC)
-
[edit] Note on difference between (7,4) and (11,7) descriptions
I've noted in the article that the used matrices use a different order for data and parity bits. This at least should clear some misunderstanding.
Blaisorblade 15:53, 14 June 2006 (UTC)
In the example using 11,7 code, the table shows a parity bit calculation for the entire word including the error-flipped bit. The bit is shown as a 1 but the parity of the word itself is a 0. Am I missing something? [removed my IP address] kendallcp 13:10, 1 October 2006 (UTC)
- Well spotted. I've removed it. --StuartBrady (Talk) 14:10, 1 October 2006 (UTC)
[edit] New images
I created a slew of SVGs for the (7,4) example. Please check my work and let me know if I messed something up as I'll be happy to fix it (or fix it yourself if you've got an SVG editor (e.g., Inkscape)). Cburnett 21:23, 31 December 2006 (UTC)
[edit] (7,4) split
I split the (7,4) stuff to Hamming(7,4). Cburnett 03:13, 2 January 2007 (UTC)
- I rewrote the entire article (Hamming(7,4)) to follow the general Hamming code generation. Please review to make sure I got everything right. Cburnett 06:23, 2 January 2007 (UTC)
[edit] Other common codes
Are there any other common codes besides (7,4) and (11,7)? Cburnett 07:41, 2 January 2007 (UTC)
Yes, there is a family of codes:
7,4
15,11
31,26
63,57
...
2^N - 1, 2^N - (N+1)
And each code can be appended with the parity bit to make the minimum distance 4
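The family listed above can be checked with a one-line sketch:

```python
# For N parity bits, the code has total length 2**N - 1 and
# 2**N - (N + 1) data bits; N = 3..6 gives the codes listed above.
family = [(2**n - 1, 2**n - (n + 1)) for n in range(3, 7)]
# [(7, 4), (15, 11), (31, 26), (63, 57)]
```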
- I appreciate the response but I asked for common ones. :) Are all the ones you listed common in use? Cburnett 13:11, 5 April 2007 (UTC)
-
- I don't know which implementations are the most common, but the family given above is used because the total number of bits is close to powers of two, which are often required for I/O operations (32-bit words, for instance). Within the constraints of I/O, you would want to use the largest code possible, since that will have the highest information rate (approaching one as N goes to infinity). Quantum7 04:55, 12 June 2007 (UTC)
[edit] (Pretty sure this should say four here instead of three, right?)
Hi 66.90.203.197, you're right that the text wasn't quite clear. Indeed, if at most 3 errors are made, you will always see that an error was made. But there is a qualitative difference between the cases. For the (4,1) repetition code we have the following situations:
- At most 1 error is made. Any error is detected and corrected correctly.
- At most 2 errors are made. Any error is detected but may not be corrected.
- At most 3 errors are made. Any error is detected but may be corrected to the wrong code word.
- At most 4 errors are made. Some errors will not be detected any more.
So when you flip 3 bits, that is the first time you start to receive messages that turn out to be wrong after decoding, without any way to see this.
For this reason, the code is really only useful if there is at most 1 error. With 3 errors it is completely useless. —The preceding unsigned comment was added by Sander123 (talk • contribs) 08:38, 12 March 2007 (UTC).
The Hamming code has a minimum distance of 3, which means it can detect and correct 1 error. Adding the extra parity bit extends the minimum distance to 4, which means it can correct 1 error and detect 2 errors, but cannot correct those 2 errors.
So trying to analyse what happens when you introduce 3 or more errors does not make sense.
In general, for any linear block code, of which the Hamming code is one, the error correction properties of the code for dealing with random errors can be summarised by the minimum distance of the code.
The minimum distance is defined as the minimum Hamming distance between any two codewords (after the parity bits have been added). This calculation is performed across all pairs of codewords. I am sure there is a mathematical proof.
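For a linear code the minimum distance equals the minimum weight of a nonzero codeword, so it can be found by brute force (a sketch using the (7,4) generator matrix from the discussion above):

```python
from itertools import product

# Generator matrix of the (7,4) example, rows of [I : A].
G = [
    [1, 0, 0, 0, 1, 1, 1],
    [0, 1, 0, 0, 0, 1, 1],
    [0, 0, 1, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 1, 0],
]

def encode(d):
    return [sum(d[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]

# Minimum distance = minimum weight over all nonzero codewords.
min_dist = min(sum(encode(d)) for d in product([0, 1], repeat=4) if any(d))
# min_dist == 3, as expected for a Hamming code
```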
Anyway I have changed the text in that section.
[edit] Hi sorry I'd like to ask a question
When you work out your Hamming Code, why do you add parity bits on the left? If the first Parity bit, P1, is supposed to be on the first bit, should it not be on the right? Hence, with your (11,7) example, where you have _ _ 0 _ 1 1 0 _ 1 0 1, shouldn't you have 0 1 1 _ 0 1 0 _ 1 _ _? Giving you a Hamming code of... 0 1 1 0 0 1 0 1 1 1 0, using Even parity.
That's what I read in my lecture notes, anyway. And I spot a lot of different "versions" on the internet, so I'd like to just seek your opinion. Thanks LordDazzer 10:08, 4 April 2007 (UTC)
- Bit order is irrelevant so long as it is understood. :) It's an age-old argument of endianness. Cburnett 12:51, 4 April 2007 (UTC)
The placement of parity bits is important to exploit the properties of the code.
The placement of parity and data in the transmitted sequence should be
p0 p1 p2 t3 p4 t5 t6 t7 p8 t9 t10 t11 t12 t13 t14 t15 p16 t17 t18 etc
Notice I label the parity bits p0, p1, p2, p4, p8, etc.; the suffix represents where in the transmission stream the parity bit should be located. And t3=d0, t5=d1, t6=d2, t7=d3, t9=d4, t10=d5, etc. is how the input sequence is mapped.
On reception when you calculate the parity bits,
If p0 and all of p1-pn are zero, no errors occurred
If p0 = 1 and p1-pn are non-zero, the decimal value of p1-pn points to the single bit in error (if p0 = 1 and p1-pn are all zero, p0 itself is in error)
If p0 = 0 and p1-pn are non-zero, a double error occurred
pn is calculated using the skip n-1, then (check n, skip n) procedure described in the main text
p0 is the even parity across the whole transmitted sequence
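The decoding rule above can be sketched in Python (my own function name; p0 is assumed to sit at position 0 as the overall even-parity bit, with the other check bits at positions 1, 2, 4, 8, ...):

```python
def classify(bits):
    """bits[0] is p0; remaining bits are indexed from 1. Diagnose errors."""
    syn = 0
    for i in range(1, len(bits)):
        if bits[i]:
            syn ^= i  # XOR of indices of set bits = recomputed p1-pn
    overall = sum(bits) % 2  # recomputed overall parity (0 if consistent)
    if syn == 0 and overall == 0:
        return "no error"
    if overall == 1:
        # A single error flips the overall parity; syn gives its position.
        return f"single error at position {syn}" if syn else "error in p0"
    return "double error"  # syndrome non-zero but overall parity holds
```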
[edit] Hamming (8,4) parity check matrix wrong
I'm no expert, but I think the table is missing a column. Other sources seem to agree, not to mention it's supposed to be a code of length 8! --Ray andrew (talk) 16:04, 13 December 2007 (UTC)
- You're right. I've fixed this now. At some point, I'm going to convert these to the more standard systematic form. Oli Filth(talk) 17:51, 13 December 2007 (UTC)
-
- The code used in (7,4) is systematic. (8,4), as shown in the diagrams, is identical to (7,4) with a parity bit covering the whole thing. The matrices weren't of the correct dimensions, but as they stood a minute ago they were 100% inconsistent with the diagrams. I have restored my previous version with the dimensions fixed. Please show some math if I've got it wrong. Cburnett (talk) 02:20, 8 March 2008 (UTC)
-
-
- You are correct; whilst my correction (13 Dec 07) fixed the dimensionality, I didn't notice that it didn't match the diagram.
-
-
-
- You are also correct that the codes are both systematic, in that all the data bits appear in the codewords. By systematic, I guess I meant the more aesthetically pleasing arrangement such that the data bits form the first (or last) k bits of the codeword (typical of e.g. cyclic codes), such that the generator matrix may be expressed in the form
- Oli Filth(talk) 12:01, 8 March 2008 (UTC)