Fano's inequality
From Wikipedia, the free encyclopedia
In information theory, Fano's inequality (also known as the Fano converse) is a lower bound on the error probability of any decoder. It was developed by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook.
The lower bound is calculated as

$$H(W \mid V) \le H(P_e) + P_e \log(|\mathcal{W}| - 1),$$

where $P_e = \Pr(g(V) \ne W)$ is the error probability and $H(P_e) = -P_e \log P_e - (1 - P_e) \log(1 - P_e)$ is the corresponding binary entropy, for any function (even probabilistic) $g$ of the observation $V$ used to estimate the message $W$ out of a countable number of possibilities $\mathcal{W}$. The logarithm is in the same base as that used to calculate the entropy.
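Read as a lower bound on the error probability, the inequality can be inverted numerically: given $H(W \mid V)$ and $|\mathcal{W}|$, the smallest $P_e$ satisfying it bounds any decoder from below. A minimal sketch, assuming base-2 logarithms; the function names and the simple grid scan are ours, not from the text:

```python
import math

def binary_entropy(p: float) -> float:
    """Binary entropy in bits; H(0) = H(1) = 0 by convention."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def fano_lower_bound(h_w_given_v: float, n_messages: int, steps: int = 100000) -> float:
    """Smallest error probability P_e consistent with Fano's inequality
    H(W|V) <= H(P_e) + P_e * log2(n_messages - 1), found by a grid scan."""
    for i in range(steps + 1):
        p = i / steps
        if binary_entropy(p) + p * math.log2(n_messages - 1) >= h_w_given_v:
            return p
    return 1.0

# Example: 16 possible messages, with 3 bits of residual uncertainty
# H(W|V); no decoder can have a smaller error probability than this.
print(fano_lower_bound(3.0, 16))
```

The scan works because the right-hand side $H(p) + p \log(|\mathcal{W}| - 1)$ is non-decreasing in $p$ on the relevant range, so the first grid point that satisfies the inequality is (up to grid resolution) the bound.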
Proof
Let $E$ be the error indicator,

$$E = \begin{cases} 1, & \text{if } g(V) \ne W, \\ 0, & \text{if } g(V) = W. \end{cases}$$
Applying the chain rule to $H(W,E \mid V)$ in two different ways gives

$$H(W \mid V) + H(E \mid W,V) = H(W,E \mid V) = H(E \mid V) + H(W \mid E,V). \qquad (1)$$
Consider each term separately:
- $H(E \mid V) \le H(E) = H(P_e), \qquad (2)$
- because entropy is not increased by conditioning and $E$ is binary with $\Pr(E = 1) = P_e$.
- $H(W \mid E,V) = \Pr(E=0)\,H(W \mid V, E=0) + \Pr(E=1)\,H(W \mid V, E=1) \le (1 - P_e) \cdot 0 + P_e \log(|\mathcal{W}| - 1), \qquad (3)$
- where the first term vanishes because $E = 0$ implies $W = g(V)$, and the second follows from the fact that entropy is not increased by conditioning and is at most the logarithm of the number of possible values: given $E = 1$, $W$ ranges over at most $|\mathcal{W}| - 1$ values.
- Similarly for the other term, $H(E \mid W,V) \ge 0$, applying the fact that entropy is non-negative. $\qquad (4)$
- Hence,
$$H(W \mid V) \le H(E \mid V) + H(W \mid E,V),$$
- with equality if $g$ is a deterministic function (i.e. $H(E \mid W,V) = H(E \mid W,V, g(V)) = 0$). $g$ being deterministic, however, is not necessary for the proof.
Hence, applying the above inequalities (2)–(4) to the chain-rule expansions (1) gives Fano's inequality. Q.E.D.
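The inequality can be checked numerically on a small joint distribution: compute $H(W \mid V)$, the error probability of the maximum a posteriori decoder, and both sides of the bound. A minimal sketch with base-2 logarithms; the joint distribution below is a hypothetical example of ours, chosen only for illustration:

```python
import math
from itertools import product

# Hypothetical joint distribution p(w, v) over W in {0,1,2}, V in {0,1}.
joint = {
    (0, 0): 0.30, (0, 1): 0.05,
    (1, 0): 0.10, (1, 1): 0.25,
    (2, 0): 0.10, (2, 1): 0.20,
}
W = {0, 1, 2}
V = {0, 1}

def h(probs):
    """Entropy in bits of a probability vector (zero terms skipped)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Marginal p(v) and conditional entropy H(W|V) = sum_v p(v) H(W | V=v).
p_v = {v: sum(joint[w, v] for w in W) for v in V}
h_w_given_v = sum(p_v[v] * h([joint[w, v] / p_v[v] for w in W]) for v in V)

# MAP decoder g(v) = argmax_w p(w, v), and its error probability P_e.
g = {v: max(W, key=lambda w: joint[w, v]) for v in V}
p_e = sum(joint[w, v] for w, v in product(W, V) if w != g[v])

# Fano's bound: H(W|V) <= H(P_e) + P_e * log2(|W| - 1).
rhs = h([p_e, 1 - p_e]) + p_e * math.log2(len(W) - 1)
print(h_w_given_v, "<=", rhs)
```

Any other decoder has error probability at least that of the MAP rule, so the bound holds for it a fortiori.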
References
- Robert M. Fano, Transmission of Information: A Statistical Theory of Communications. Cambridge, Mass.: M.I.T. Press, 1961. ISBN 0-262-06001-9.
- Robert M. Fano, "Fano inequality", Scholarpedia, 2008.