Schnirelmann density

In additive number theory, the Schnirelmann density of a sequence of numbers is a way to measure how "dense" the sequence is. It is named after Russian mathematician L.G. Schnirelmann, who was the first to study it.[1][2]

Definition

The Schnirelmann density of a set of natural numbers A is defined as

\sigma A = \inf_n \frac{A(n)}{n},

where A(n) denotes the number of elements of A not exceeding n and inf is infimum.[3]

The Schnirelmann density is well-defined even if the limit of A(n)/n as n → ∞ fails to exist (see asymptotic density).
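As a small numerical sketch (the helper name `density_min` is illustrative, not standard), the infimum in the definition can be probed by truncating to n ≤ N. Since σA is an infimum over all n, the truncated minimum is only an upper bound on σA:

```python
from fractions import Fraction

def density_min(A, N):
    """min of A(n)/n over 1 <= n <= N, where A(n) counts the elements
    of A not exceeding n. Because sigma(A) is an infimum over ALL n,
    this truncated minimum is only an upper bound on sigma(A)."""
    members = set(A)
    count, best = 0, Fraction(1)
    for n in range(1, N + 1):
        count += n in members
        best = min(best, Fraction(count, n))
    return best

# All naturals: A(n)/n = 1 for every n, so the density is 1.
print(density_min(range(1, 1001), 1000))                  # 1
# The squares: A(n)/n -> 0, and the truncated minimum is already small.
print(density_min([k * k for k in range(1, 40)], 1000))   # 31/1000
```

For the squares the true density is 0, while the truncated value 31/1000 merely witnesses that it is small; increasing N drives the bound toward 0.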

Properties

By definition, 0 ≤ A(n) ≤ n and n·σA ≤ A(n) for all n, and therefore 0 ≤ σA ≤ 1, and σA = 1 if and only if A = N. Furthermore,

\sigma A=0 \Leftrightarrow \forall \epsilon>0\ \exists n\ A(n) < \epsilon n.

Sensitivity

The Schnirelmann density is sensitive to the first values of a set:

\forall k \ k \notin A \Rightarrow \sigma A \le 1-1/k.

In particular,

1 \notin A \Rightarrow \sigma A = 0

and

2 \notin A \Rightarrow \sigma A \le \frac{1}{2}.

Consequently, the Schnirelmann densities of the even numbers and the odd numbers, which one might expect to agree, are 0 and 1/2 respectively. Schnirelmann and Yuri Linnik exploited this sensitivity as we shall see.
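These two values can be checked directly; in the sketch below (helper name illustrative) the truncated minimum happens to equal the true infimum, since for the even numbers it is attained at n = 1 and for the odd numbers at n = 2:

```python
from fractions import Fraction

def density_min(A, N):
    """min of A(n)/n over 1 <= n <= N; exact here because the infimum
    is attained at n = 1 (evens) and n = 2 (odds)."""
    count, best = 0, Fraction(1)
    for n in range(1, N + 1):
        count += n in A
        best = min(best, Fraction(count, n))
    return best

N = 1000
evens = set(range(2, N + 1, 2))   # 1 is missing, so A(1)/1 = 0
odds = set(range(1, N + 1, 2))    # 2 is missing, so A(2)/2 = 1/2
print(density_min(evens, N), density_min(odds, N))  # 0 1/2
```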

Schnirelmann's theorems

If we set \mathfrak{G}^2 = \{k^2\}_{k=1}^{\infty}, then Lagrange's four-square theorem can be restated as  \sigma(\mathfrak{G}^2 \oplus \mathfrak{G}^2 \oplus \mathfrak{G}^2 \oplus \mathfrak{G}^2) = 1. (Here the symbol A\oplus B denotes the sumset of A\cup\{0\} and B\cup\{0\}.) It is clear that  \sigma \mathfrak{G}^2 = 0. In fact, we still have  \sigma(\mathfrak{G}^2 \oplus \mathfrak{G}^2) = 0, and one might ask at what point the sumset attains Schnirelmann density 1 and how it increases. It actually is the case that  \sigma(\mathfrak{G}^2 \oplus \mathfrak{G}^2 \oplus \mathfrak{G}^2) = 5/6, and sumsetting \mathfrak{G}^2 once more yields a more populous set, namely all of \N. Schnirelmann developed these ideas into the following theorems, advancing additive number theory and providing a novel (if not overwhelmingly powerful) tool for attacking important problems, such as Waring's problem and Goldbach's conjecture.
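A brute-force check agrees with these values (the helper names are illustrative). The truncated minimum of A(n)/n is an upper bound on σ: it collapses toward 0 for one and two squares, stays just above 5/6 for three squares (by Legendre's three-square theorem the missing numbers are exactly those of the form 4^a(8b+7), and the infimum 5/6 is approached but not attained), and equals 1 for four squares:

```python
from fractions import Fraction
from itertools import product

def sums_of_squares(copies, N):
    """Numbers in 1..N expressible as a sum of at most `copies` positive
    squares (the oplus-sumset adjoins 0, so fewer summands are allowed)."""
    root = int(N ** 0.5)
    out = set()
    for parts in product(range(root + 1), repeat=copies):
        s = sum(p * p for p in parts)
        if 1 <= s <= N:
            out.add(s)
    return out

def density_min(A, N):
    """min of A(n)/n over 1 <= n <= N: an upper bound on sigma(A)."""
    count, best = 0, Fraction(1)
    for n in range(1, N + 1):
        count += n in A
        best = min(best, Fraction(count, n))
    return best

N = 1000
for copies in (1, 2, 3, 4):
    print(copies, density_min(sums_of_squares(copies, N), N))
```

The four-square value is exactly 1 by Lagrange's theorem; the three-square value lies between 5/6 and 26/31 (the ratio at n = 31, below which 7, 15, 23, 28, 31 are not sums of three squares).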

Theorem. Let A and B be subsets of \N. Then

\sigma(A \oplus B) \ge \sigma A + \sigma B - \sigma A \cdot \sigma B.

Note that \sigma A + \sigma B - \sigma A \cdot \sigma B = 1 - (1 - \sigma A)(1 - \sigma B). Inductively, we have the following generalization.
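As a sanity check (a sketch; the helper names are illustrative), take A = {n ≡ 1 (mod 3)}, for which σA = 1/3 exactly, the infimum being attained at n = 3. The theorem then predicts σ(A ⊕ A) ≥ 1/3 + 1/3 − 1/9 = 5/9, while the actual value is 2/3:

```python
from fractions import Fraction

def density_min(A, N):
    """min of A(n)/n over 1 <= n <= N; for the periodic sets used here
    the infimum is attained at n = 3, so this is the exact density."""
    count, best = 0, Fraction(1)
    for n in range(1, N + 1):
        count += n in A
        best = min(best, Fraction(count, n))
    return best

def oplus(A, B, N):
    """Sumset of A ∪ {0} and B ∪ {0}, truncated to 1..N."""
    A0, B0 = set(A) | {0}, set(B) | {0}
    return {a + b for a in A0 for b in B0 if 1 <= a + b <= N}

N = 300
A = set(range(1, N + 1, 3))          # n ≡ 1 (mod 3): sigma(A) = 1/3
sA = density_min(A, N)
sAA = density_min(oplus(A, A, N), N)
# A ⊕ A covers the classes 1 and 2 (mod 3), giving density 2/3 >= 5/9.
print(sA, sAA)  # 1/3 2/3
```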

Corollary. Let A_i \subseteq \N be a finite family of subsets of \N. Then

\sigma(\bigoplus_i A_i) \ge 1 - \prod_{i}(1 - \sigma A_i).

The theorem provides the first insight into how sumsets accumulate. It may seem unfortunate that its conclusion stops short of showing that \sigma is superadditive. Yet, Schnirelmann provided us with the following results, which sufficed for most of his purposes.

Theorem. Let A and B be subsets of \N. If \sigma A + \sigma B \ge 1, then

A \oplus B = \N.

Theorem. (Schnirelmann) Let A \subseteq \N. If \sigma A > 0 then there exists k such that

\bigoplus^k_{i=1} A=\N.

Additive bases

A subset A \subseteq \N with the property that A \oplus A \oplus \cdots \oplus A = \N for a finite sum, is called an additive basis, and the least number of summands required is called the degree (sometimes order) of the basis. Thus, the last theorem states that any set with positive Schnirelmann density is an additive basis. In this terminology, the set of squares \mathfrak{G}^2 = \{k^2\}_{k=1}^{\infty} is an additive basis of degree 4. (About an open problem for additive bases, see Erdős–Turán conjecture on additive bases.)
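The degree of the squares can be found by direct search (a sketch; the helper names are illustrative): 7 is not a sum of three squares, while by Lagrange's theorem four squares always suffice, so the search below stops at 4:

```python
def sums_of_k_squares(k, N):
    """Numbers in 1..N writable as a sum of at most k positive squares."""
    reachable = {0}
    for _ in range(k):
        nxt = set(reachable)          # keeping s allows fewer summands
        for s in reachable:
            a = 1
            while s + a * a <= N:
                nxt.add(s + a * a)
                a += 1
        reachable = nxt
    return reachable - {0}

def basis_degree(N):
    """Least k such that every n in 1..N is a sum of at most k squares."""
    k = 1
    while len(sums_of_k_squares(k, N)) < N:
        k += 1
    return k

print(basis_degree(100))  # 4
```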

Mann's theorem

Historically the theorems above were pointers to the following result, at one time known as the \alpha + \beta hypothesis. It was conjectured by Edmund Landau and was finally proved by Henry Mann in 1942.

Theorem. (Mann 1942) Let A and B be subsets of \N. If A \oplus B \ne \N, we still have

\sigma(A \oplus B) \ge \sigma A + \sigma B.

An analogue of this theorem for lower asymptotic density was obtained by Kneser.[4] At a later date, E. Artin and P. Scherk simplified the proof of Mann's theorem.[5]
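Mann's bound can be tight. In the sketch below (illustrative names), A = {n ≡ 1 (mod 4)} has σA = 1/4, and A ⊕ A consists exactly of the classes 1 and 2 (mod 4), so A ⊕ A ≠ \N and σ(A ⊕ A) = 1/2 = σA + σA:

```python
from fractions import Fraction

def density_min(A, N):
    """min of A(n)/n over 1 <= n <= N; exact for the periodic sets
    below, whose infimum is attained within the first period."""
    count, best = 0, Fraction(1)
    for n in range(1, N + 1):
        count += n in A
        best = min(best, Fraction(count, n))
    return best

N = 400
A = set(range(1, N + 1, 4))   # n ≡ 1 (mod 4): sigma(A) = 1/4
AB = {a + b for a in A | {0} for b in A | {0} if 1 <= a + b <= N}
sA, sAB = density_min(A, N), density_min(AB, N)
# A ⊕ A misses every multiple of 4, so it is not all of N, and Mann's
# inequality sigma(A ⊕ A) >= sigma(A) + sigma(A) holds with equality.
print(sA, sAB)  # 1/4 1/2
```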

Waring's problem

Main article: Waring's problem

Let  k and  N be natural numbers. Let  \mathfrak{G}^k = \{i^k\}_{i=1}^\infty. Define  r_N^k(n) to be the number of non-negative integral solutions to the equation

 x_1^k + x_2^k + \cdots + x_N^k = n\,

and  R_N^k(n) to be the number of non-negative integral solutions to the inequality

 0 \le x_1^k + x_2^k + \cdots + x_N^k \le n,\,

in the variables  x_i, respectively. Thus  R_N^k(n) = \sum_{i=0}^n r_N^k(i).

The N-dimensional body defined by  0 \le x_1^k + x_2^k + \cdots + x_N^k \le n is contained in the hypercube of side  n^{1/k}, so the number of lattice points in it satisfies R_N^k(n) = \sum_{i=0}^n r_N^k(i) = O(n^{N/k}). The hard part is to show that this bound still works on the average, i.e.,
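For example, with k = 2 and N = 2 the quantity R_2^2(n) counts lattice points in a quarter-disc of radius √n, which grows like (π/4)n, consistent with the n^{N/k} = n bound. A brute-force sketch (the function name is illustrative):

```python
def R(N, k, n):
    """Number of nonnegative integer N-tuples (x1, ..., xN)
    with x1**k + ... + xN**k <= n, computed by recursion on N."""
    if N == 0:
        return 1 if n >= 0 else 0
    total, x = 0, 0
    while x ** k <= n:
        total += R(N - 1, k, n - x ** k)
        x += 1
    return total

for n in (100, 400, 1600):
    print(n, R(2, 2, n))  # grows roughly like (pi/4) * n
```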

Lemma. (Linnik) For all k \in \N there exists N \in \N and a constant c = c(k), depending only on k, such that for all n \in \N,

r_N^k(m) < cn^{\frac{N}{k}-1}

for all 0 \le m \le n.\,

With this at hand, the following theorem can be elegantly proved.

Theorem. For all k there exists N for which \sigma(N\mathfrak{G}^k) > 0.

We have thus established the general solution to Waring's Problem:

Corollary. (Hilbert 1909) For all k there exists N, depending only on k, such that every positive integer n can be expressed as a sum of at most N k-th powers.

Schnirelmann's constant

In 1930 Schnirelmann used these ideas in conjunction with the Brun sieve to prove Schnirelmann's theorem,[1][2] that any natural number greater than one can be written as the sum of not more than C prime numbers, where C is an effectively computable constant:[6] Schnirelmann obtained C < 800000.[7] Schnirelmann's constant is the lowest number C with this property.[6]

Olivier Ramaré showed in (Ramaré 1995) that Schnirelmann's constant is at most 7,[6] improving the earlier upper bound of 19 obtained by Hans Riesel and R. C. Vaughan.

Schnirelmann's constant is at least 3; Goldbach's conjecture implies that this is the constant's actual value.[6]
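This can be checked for small numbers by brute force (a sketch; the function names are illustrative). Every integer from 2 to 200 turns out to need at most three prime summands, matching the conjectured value 3 of Schnirelmann's constant:

```python
def primes_upto(n):
    """Simple sieve of Eratosthenes."""
    sieve = bytearray([1]) * (n + 1)
    sieve[:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(sieve[p * p :: p]))
    return [i for i in range(2, n + 1) if sieve[i]]

def min_prime_summands(N):
    """Largest number of primes needed, over 2 <= n <= N, to write
    each n as a sum of primes (checked up to three summands)."""
    ps = set(primes_upto(N))
    worst = 1
    for n in range(2, N + 1):
        if n in ps:
            continue
        if any(n - p in ps for p in ps if p < n):
            worst = max(worst, 2)
        elif any(n - p - q in ps for p in ps for q in ps if p + q < n):
            worst = max(worst, 3)
        else:
            return None  # some n <= N would need more than 3 primes
    return worst

print(min_prime_summands(200))  # 3
```

For example, 27 is odd and 27 − 2 = 25 is composite, so two primes never suffice, but 27 = 3 + 5 + 19.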

Essential components

Khintchin proved that the sequence of squares, though of Schnirelmann density zero, when added to a sequence of Schnirelmann density strictly between 0 and 1, increases that density:

\sigma(A+\mathfrak{G}^2)>\sigma(A)\text{ for }0<\sigma(A)<1.\,

This was soon simplified and extended by Erdős, who showed that if A is any sequence with Schnirelmann density α and B is an additive basis of order k, then

\sigma(A+B)\geq \alpha+ \frac{\alpha(1-\alpha)}{2k}\,,[8]

and this was improved by Plünnecke to

\sigma(A+B)\geq \alpha^{1-\frac{1}{k}}\ . [9]

Sequences with this property, of increasing any density less than one by addition, were named essential components by Khintchin. Linnik showed that an essential component need not be an additive basis,[10] as he constructed an essential component that has x^{o(1)} elements less than x. More precisely, the sequence has

e^{(\log x)^c}\,

elements less than x for some c < 1. This was improved by E. Wirsing to

e^{\sqrt{\log x}\log\log x}.\,

For a while, it remained an open problem how many elements an essential component must have. Finally, Ruzsa determined that an essential component has at least (log x)c elements up to x, for some c > 1, and for every c > 1 there is an essential component which has at most (log x)c elements up to x.[11]

References

  1. Schnirelmann, L.G. (1930). "On the additive properties of numbers", first published in "Proceedings of the Don Polytechnic Institute in Novocherkassk" (in Russian), vol. XIV (1930), pp. 3–27, and reprinted in "Uspekhi Matematicheskikh Nauk" (in Russian), 1939, no. 6, 9–25.
  2. Schnirelmann, L.G. (1933). First published as "Über additive Eigenschaften von Zahlen" in "Mathematische Annalen" (in German), vol. 107 (1933), pp. 649–690, and reprinted as "On the additive properties of numbers" in "Uspekhi Matematicheskikh Nauk" (in Russian), 1940, no. 7, 7–46.
  3. Nathanson (1996) pp. 191–192
  4. Nathanson (1990) p. 397
  5. E. Artin and P. Scherk (1943). "On the sums of two sets of integers", Ann. of Math. 44, pp. 138–142.
  6. Nathanson (1996) p. 208
  7. Gelfond & Linnik (1966) p. 136
  8. Ruzsa (2009) p. 177
  9. Ruzsa (2009) p. 179
  10. Linnik, Yu. V. (1942). "On Erdős's theorem on the addition of numerical sequences". Mat. Sb. 10: 67–78. Zbl 0063.03574.
  11. Ruzsa (2009) p. 184