Expander walk sampling

The expander walk sampling theorem, the earliest version of which is due to Ajtai, Komlós, and Szemerédi and whose more general form is typically attributed to Gillman, states that sampling vertices along a random walk in an expander graph is almost as good as sampling them independently.

Statement

Let G = (V,E) be an expander graph with normalized second-largest eigenvalue λ, and let n denote the number of vertices in G. Let f : V \rightarrow [0, 1] be a function on the vertices of G, and let μ = E[f] denote its true mean, i.e. \mu = \frac{1}{n} \sum_{v \in V} f(v). If Y_0, Y_1, \ldots, Y_k denote the vertices encountered in a k-step random walk on G started at a uniformly random vertex Y_0, then for all γ > 0:

\Pr\left[\frac{1}{k} \sum_{i=0}^{k} f(Y_i) - \mu > \gamma\right] \leq e^{-\Omega(\gamma^2 (1-\lambda) k)}.

Here the Ω hides an absolute constant \geq 1/10. An identical bound holds in the other direction:

\Pr\left[\frac{1}{k} \sum_{i=0}^{k} f(Y_i) - \mu < -\gamma\right] \leq e^{-\Omega(\gamma^2 (1-\lambda) k)}.
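
To make the statement concrete, the following is a minimal Python sketch of the sampling procedure, assuming the networkx library is available. A random d-regular graph stands in for the expander (such graphs are expanders with high probability), and the function f, the parameters n, d, k, and the helper walk_average are illustrative assumptions rather than part of the theorem.

    import random
    import networkx as nx

    def walk_average(G, f, k, rng=random):
        """Average f over the k + 1 vertices visited by a k-step random walk
        started at a uniformly random vertex (illustrative helper)."""
        v = rng.choice(list(G.nodes))              # Y_0: uniformly random start vertex
        total = f(v)
        for _ in range(k):                         # steps Y_1, ..., Y_k
            v = rng.choice(list(G.neighbors(v)))   # move to a uniformly random neighbour
            total += f(v)
        return total / (k + 1)

    n, d, k = 10_000, 8, 2_000
    G = nx.random_regular_graph(d, n, seed=0)      # an expander with high probability
    f = lambda v: v % 2                            # toy 0/1 function with true mean 1/2

    print("expander walk estimate:", walk_average(G, f, k))
    print("independent estimate:  ",
          sum(f(random.randrange(n)) for _ in range(k + 1)) / (k + 1))

Both estimates concentrate around the true mean 1/2; the content of the theorem is that the walk estimate concentrates almost as tightly as the independent one.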

Uses

This theorem is useful for reducing the amount of randomness needed in the study of derandomization; sampling along an expander walk is an example of a randomness-efficient sampler. Note that sampling k independent uniform vertices and evaluating f uses k \log n random bits, whereas sampling from an infinite family of constant-degree expanders costs only \log n + O(k) bits. Such families exist and are efficiently constructible, e.g. the Ramanujan graphs of Lubotzky, Phillips, and Sarnak.
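
As a rough, illustrative calculation of the saving in random bits (the concrete values of n, d, and k below are assumptions chosen for the example, not taken from the article):

    import math

    n, d, k = 10_000, 8, 2_000      # number of vertices, expander degree, walk length

    # k independent uniform vertices cost log2(n) bits each
    independent_bits = k * math.ceil(math.log2(n))

    # an expander walk costs log2(n) bits for the start vertex plus log2(d) bits per step
    walk_bits = math.ceil(math.log2(n)) + k * math.ceil(math.log2(d))

    print(f"independent sampling: {independent_bits} random bits")   # 2000 * 14 = 28000
    print(f"expander walk:        {walk_bits} random bits")          # 14 + 2000 * 3 = 6014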

External links

  • Proofs of the expander walk sampling theorem.