Littlewood–Offord problem


In mathematics, the Littlewood–Offord problem is the combinatorial question in geometry of usefully describing the distribution of the subset sums of the vectors

v1, v2, ..., vn,

taken from a given Euclidean space of fixed dimension d ≥ 1.

The first result on this question appeared in a 1938 paper of John Edensor Littlewood and A. Cyril Offord on random polynomials. Their Littlewood–Offord lemma applied to complex numbers zi of absolute value at least one, and asked: how many of the 2^n sums formed by adding up the zi over subsets of {1, 2, ..., n} can be such that any two of them differ by at most 1 from each other?

Erdős in 1945 found a sharper inequality, based on Sperner's theorem, for the case of real numbers vi > 1. Then

C(n, ⌊n/2⌋)

is an upper bound for the maximum number of sums formed from the vi that differ from one another by less than 1. (It is easy to extend this to |vi| > 1.) The proof observes that if S is a proper subset of T, the two corresponding sums differ by more than 1; so any family of subsets whose sums pairwise differ by less than 1 is an antichain, and Sperner's theorem bounds the size of such a family by C(n, ⌊n/2⌋). This is best possible for real numbers; equality is obtained when all |vi| are equal.
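
For small n the bound can be checked by brute force. The Python sketch below is illustrative only; the function name max_close_sums and the sample values are hypothetical, not from the article. It enumerates all 2^n subset sums and reports the size of the largest collection whose members pairwise differ by less than 1, next to Erdős's bound.

from itertools import combinations
from math import comb

def max_close_sums(v, gap=1.0):
    # Largest number of subset sums of v that pairwise differ by less
    # than `gap`. Enumerates all 2^n subsets, so only sensible for small n.
    sums = sorted(sum(c) for r in range(len(v) + 1)
                  for c in combinations(v, r))
    best, lo = 1, 0
    for hi in range(len(sums)):
        while sums[hi] - sums[lo] >= gap:
            lo += 1
        best = max(best, hi - lo + 1)
    return best

v = [1.1, 1.3, 2.0, 1.7, 1.05]      # real numbers with vi > 1 (arbitrary sample)
print(max_close_sums(v))            # observed maximum
print(comb(len(v), len(v) // 2))    # Erdős's bound C(n, ⌊n/2⌋) = 10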

Kleitman showed in 1966 that the same bound holds for complex numbers, and in 1970 he extended it to vi in an arbitrary normed space.

The semi-sum

m = ½Σ vi

can be subtracted from all the subset sums: for a subset S of {1, 2, ..., n}, the sum over S minus m equals ½Σ εivi, with εi = 1 for i in S and εi = −1 otherwise. So, by this change of origin and then scaling by a factor of 2, we may as well consider sums

Σ εivi

in which each εi takes the value 1 or −1. This makes the problem a probabilistic one: the question becomes the distribution of these random sums, and what can be said knowing nothing more about the vi.
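
As a sanity check on this normalization, the following Python sketch (illustrative; the sample values are arbitrary, chosen dyadic so the floating-point arithmetic is exact) verifies that doubling the recentred subset sums gives exactly the signed sums Σ εivi.

from itertools import combinations, product

v = [1.5, -2.0, 3.25]               # arbitrary dyadic sample values
m = sum(v) / 2                      # the semi-sum

# All 2^n subset sums, recentred at m and scaled by 2 ...
subset_sums = {2 * (sum(c) - m)
               for r in range(len(v) + 1)
               for c in combinations(v, r)}
# ... coincide with the 2^n signed sums with each εi = +1 or −1.
sign_sums = {sum(e * x for e, x in zip(eps, v))
             for eps in product((1, -1), repeat=len(v))}
assert subset_sums == sign_sums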
