Large sieve

In mathematics, the large sieve is a method of analytic number theory. As the name suggests, it was originally concerned, as a sieve method, with sifting an integer sequence by means of congruence conditions modulo prime numbers, in which a relatively large number of residue classes for each modulus are excluded. A large sieve, where a positive proportion of residue classes is sifted out, is thus in principle distinguished from a small sieve, in which perhaps only a single residue class is excluded for a given modulus. As is typical in sieve theory, all of this takes place in a range of values of the parameters beyond the easy cases, where the Chinese remainder theorem applies to give asymptotic estimates.
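Concretely, the sifting situation may be sketched as follows (the notation here is illustrative rather than taken from a fixed convention): one studies a set of integers of the shape

\[ \mathcal{S} = \{\, M < n \le M + N : n \bmod p \notin \Omega_p \ \text{for every prime } p \le Q \,\}, \]

where $\Omega_p \subseteq \mathbb{Z}/p\mathbb{Z}$ is the set of excluded residue classes modulo $p$. The sieve is "large" when the number of excluded classes $|\Omega_p|$ may grow with $p$, for instance occupying a positive proportion of all classes, rather than staying bounded as in a small sieve.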

The early history of the large sieve traces back to the work of Yu. B. Linnik in 1941 on the problem of the least quadratic non-residue. Alfréd Rényi subsequently worked on it using probabilistic methods. Only two decades later, after a number of contributions by others, was the large sieve formulated in a more definitive way, in the early 1960s, in independent work of Klaus Roth and Enrico Bombieri. The nature of the fundamental inequality was by then better understood: it concerns exponential sums evaluated at points on the unit circle that are, in a suitable sense, well-spaced (as measured by the minimum distance between them), and the inequality is derived from the principle that the operator norm of a matrix of characters of the circle, evaluated at a finite set of points, equals the norm of the adjoint operator. In applications the set of points is often a Farey series of rational numbers, mapped onto the unit circle as roots of unity.
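A standard modern statement of the fundamental inequality, consistent with the description above (the constant $N + \delta^{-1}$ is one common normalisation; sharper forms exist), is the following. Write $e(\alpha) = e^{2\pi i \alpha}$ and, for complex coefficients $a_n$,

\[ S(\alpha) = \sum_{n=M+1}^{M+N} a_n \, e(n\alpha). \]

If the points $\alpha_1, \dots, \alpha_R$ satisfy $\|\alpha_r - \alpha_s\| \ge \delta$ for $r \ne s$, where $\|x\|$ denotes the distance from $x$ to the nearest integer, then

\[ \sum_{r=1}^{R} |S(\alpha_r)|^2 \le \left(N + \delta^{-1}\right) \sum_{n=M+1}^{M+N} |a_n|^2. \]

Taking the points to be the Farey fractions $a/q$ with $q \le Q$ and $\gcd(a, q) = 1$, which are spaced at least $Q^{-2}$ apart, yields the arithmetic form

\[ \sum_{q \le Q} \; \sum_{\substack{a = 1 \\ (a,q) = 1}}^{q} \left| S\!\left(\tfrac{a}{q}\right) \right|^2 \le \left(N + Q^2\right) \sum_{n=M+1}^{M+N} |a_n|^2. \]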
