AKS primality test

The AKS primality test (also known as the Agrawal–Kayal–Saxena primality test and the cyclotomic AKS test) is a deterministic primality-proving algorithm created and published by three Indian Institute of Technology Kanpur scientists, Manindra Agrawal, Neeraj Kayal, and Nitin Saxena, on August 6, 2002, in a paper titled "PRIMES is in P".[1] The authors received the 2006 Gödel Prize for this work.

The algorithm, which was soon improved by others, determines whether a number is prime or composite and runs in polynomial time.

Importance

The key significance of AKS is that it was the first published primality-proving algorithm to be simultaneously polynomial, deterministic, and unconditional. That is, the maximum running time of the algorithm can be expressed as a polynomial in the number of digits of the target number; the algorithm is guaranteed to distinguish whether the target number is prime or composite (rather than returning a probabilistic result); and its correctness does not depend on any unproven subsidiary hypothesis (such as the Riemann hypothesis).

Basis of the test

The AKS primality test is based upon the equivalence

(x - a)^{n} \equiv (x^{n} - a) \pmod{n}

which is true if and only if n is prime. This is a generalization of Fermat's little theorem to polynomials, and it can be proven easily using the binomial theorem together with the fact that {n \choose k} \equiv 0 \pmod{n} for all 0 < k < n if n is prime. While this congruence constitutes a primality test in itself, verifying it takes exponential time. Therefore AKS makes use of a related equivalence

(x - a)^{n} \equiv (x^{n} - a) \pmod{n, x^{r} - 1}

which can be checked in polynomial time. However, while all primes satisfy this equivalence, some composites do as well. The proof of correctness for AKS consists of showing that there exist a suitably small r and a suitably small set of integers A such that, if the equivalence holds for every a in A, then n must be prime.
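
To make the exponential cost of checking the unreduced congruence concrete, here is a minimal sketch in Python (the function name and the default witness a = 1 are illustrative choices, not part of the original paper) that compares the coefficients of (x − a)^n with those of x^n − a modulo n:

    from math import comb

    def fermat_poly_check(n: int, a: int = 1) -> bool:
        """Naive check of (x - a)^n ≡ x^n - a (mod n) by comparing coefficients.

        The coefficient of x^k in (x - a)^n is C(n, k) * (-a)^(n - k), so every
        middle coefficient (0 < k < n) must vanish mod n, and the constant term
        (-a)^n must equal -a mod n.
        """
        if n < 2:
            return False
        for k in range(1, n):
            if (comb(n, k) * pow(-a, n - k, n)) % n != 0:
                return False
        return (pow(-a, n, n) + a) % n == 0

For example, fermat_poly_check(7) returns True and fermat_poly_check(9) returns False, but the loop runs n − 1 times, which is exponential in the number of digits of n.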

The algorithm to test the primality of some integer n consists of two parts. The first step revolves around finding a suitable prime r = kq + 1, such that:

  • P(r − 1) = q where P(x) is the greatest prime factor of x,
  • q \ge 4 \sqrt{r} \log_{2}(n),
  • n^k \not\equiv 1 \pmod{r}.

During this step, it is important to confirm that n is not divisible by any prime p \le r; if it is divisible, the algorithm can terminate immediately to report that n is composite.
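A minimal sketch of this first step in Python (all function names are illustrative; trial division is used throughout, since r stays small) might look like the following:

    from math import isqrt, log2, sqrt

    def largest_prime_factor(m: int) -> int:
        """P(m): the greatest prime factor of m, found by trial division."""
        best, d = 1, 2
        while d * d <= m:
            while m % d == 0:
                best, m = d, m // d
            d += 1
        return m if m > 1 else best

    def is_small_prime(m: int) -> bool:
        """Trial-division primality check, adequate for candidate values of r."""
        return m > 1 and all(m % d for d in range(2, isqrt(m) + 1))

    def find_suitable_r(n: int) -> int:
        """Search for a prime r = kq + 1 with q = P(r - 1),
        q >= 4*sqrt(r)*log2(n) and n^k != 1 (mod r).

        A full implementation would, while scanning, also trial-divide n by
        every prime p <= r and report n composite as soon as a factor appears.
        """
        r = 3
        while True:
            if is_small_prime(r):
                q = largest_prime_factor(r - 1)
                k = (r - 1) // q
                if q >= 4 * sqrt(r) * log2(n) and pow(n, k, r) != 1:
                    return r
            r += 2  # skip even candidates; they cannot be prime
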

In the second step, a number of tests are done in the ring (\mathbb{Z}/n\mathbb{Z})[x]/(x^{r} - 1), in each case testing the equivalence of two polynomials within that ring: if

(x - a)^{n} \equiv (x^{n} - a) \pmod{n, x^{r} - 1}

for all positive integers a with

a \le 2 \sqrt{r} \log_{2}(n),

then n is guaranteed to be prime; in all other cases it is composite.
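
The congruence itself can be tested by binary exponentiation of polynomials whose coefficients are kept modulo n and whose degree is kept below r. A minimal sketch, using schoolbook O(r^2) multiplication and illustrative function names:

    def polymul_mod(f, g, n, r):
        """Multiply two polynomials (length-r coefficient lists, constant term
        first) with coefficients reduced mod n and degree reduced mod x^r - 1."""
        h = [0] * r
        for i, fi in enumerate(f):
            if fi:
                for j, gj in enumerate(g):
                    if gj:
                        idx = (i + j) % r      # x^r ≡ 1, so exponents wrap around
                        h[idx] = (h[idx] + fi * gj) % n
        return h

    def polypow_mod(f, e, n, r):
        """Compute f^e modulo (n, x^r - 1) by repeated squaring."""
        result = [1] + [0] * (r - 1)           # the constant polynomial 1
        base = list(f)
        while e:
            if e & 1:
                result = polymul_mod(result, base, n, r)
            base = polymul_mod(base, base, n, r)
            e >>= 1
        return result

    def congruence_holds(a, n, r):
        """Check (x - a)^n ≡ x^n - a  (mod n, x^r - 1)."""
        lhs = polypow_mod([(-a) % n, 1] + [0] * (r - 2), n, r)  # the polynomial x - a
        rhs = [0] * r
        rhs[n % r] = 1                         # x^n reduces to x^(n mod r)
        rhs[0] = (rhs[0] - a) % n              # then subtract a
        return lhs == rhs

Each test therefore costs O(log n) multiplications of polynomials of degree less than r; the paper's complexity bounds assume fast (FFT-based) polynomial multiplication rather than the quadratic routine sketched here.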

As with any such algorithm, the paper had to establish two things: that the algorithm is correct, and its asymptotic time complexity. It did so by proving two key facts: first, that an appropriate r can always be found, together with an upper bound on its magnitude; and second, that the congruences tested in the second part of the algorithm suffice to determine whether n is prime or composite.

Since the running time of both parts of the algorithm is entirely dependent on the magnitude of r, proving an upper bound on r was sufficient to show that the asymptotic time complexity of the algorithm is O(log^{12+ε}(n)), where ε is a small number. In other words, the algorithm takes less time than a constant times the twelfth (plus ε) power of the number of digits in n.

However, the upper bound proven in the paper is quite loose; indeed, a widely held conjecture about the distribution of Sophie Germain primes would, if true, immediately cut the worst case down to O(log^{6+ε}(n)).

In the months following the discovery, new variants appeared (Lenstra 2002, Pomerance 2002, Berrizbeitia 2003, Cheng 2003, Bernstein 2003a/b, Lenstra and Pomerance 2003) that improved the speed of AKS by orders of magnitude. Because of these many variants, Crandall and Papadopoulos refer to the "AKS class" of algorithms in their paper "On the implementation of AKS-class primality tests", published in March 2003.

AKS Updated

In response to some of these variants and other feedback, the paper "PRIMES is in P" was republished with a new formulation of the AKS algorithm and its proof of correctness. While the basic idea remained the same, r was chosen in a new manner and the proof of correctness was more coherently organized. Whereas the previous proof relied on many different methods, the new version relies almost exclusively on the behavior of cyclotomic polynomials over finite fields.

Again the AKS algorithm consists of two parts, and the first step is to find a suitable r; however, in the new version r is the smallest number such that o_{r}(n) > (\log_{2} n)^{2}, where o_{r}(n) denotes the multiplicative order of n modulo r.
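
A minimal sketch of this new choice of r in Python (illustrative names; the multiplicative order is computed by brute force, which is adequate because r remains polylogarithmic in n):

    from math import gcd, log2

    def multiplicative_order(n: int, r: int) -> int:
        """o_r(n): the least k >= 1 with n^k ≡ 1 (mod r); 0 if gcd(n, r) > 1."""
        if gcd(n, r) != 1:
            return 0
        k, x = 1, n % r
        while x != 1:
            x = (x * n) % r
            k += 1
        return k

    def find_r_updated(n: int) -> int:
        """Smallest r with o_r(n) > (log2 n)^2, per the revised formulation."""
        threshold = log2(n) ** 2
        r = 2
        while True:
            if multiplicative_order(n, r) > threshold:
                return r
            r += 1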

In the second step the equivalence is again tested

(x - a)^{n} \equiv (x^{n} - a) \pmod{n, x^{r} - 1}

this time for all positive integers a with a \le \sqrt{\phi(r)} \log_{2}(n), where φ(r) is Euler's totient function of r.
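
Putting the pieces together, a hedged sketch of the updated test (reusing find_r_updated and congruence_holds from the sketches above; the perfect-power check required by the paper is included, though this floating-point version is only adequate for moderate n):

    from math import floor, sqrt, log2

    def euler_phi(r: int) -> int:
        """Euler's totient of r via trial factorization (r is small)."""
        result, m, d = r, r, 2
        while d * d <= m:
            if m % d == 0:
                while m % d == 0:
                    m //= d
                result -= result // d
            d += 1
        if m > 1:
            result -= result // m
        return result

    def aks_is_prime(n: int) -> bool:
        """Sketch of the updated AKS test, following the article's outline."""
        if n < 2:
            return False
        # Reject perfect powers n = m^b (a careful implementation would use
        # exact integer k-th roots instead of floating point).
        for b in range(2, n.bit_length()):
            root = round(n ** (1.0 / b))
            if root > 1 and root ** b == n:
                return False
        r = find_r_updated(n)
        # Trial-divide by everything up to r to catch small factors.
        for p in range(2, min(r, n - 1) + 1):
            if n % p == 0:
                return False
        if n <= r:
            return True
        # Test the polynomial congruence for 1 <= a <= sqrt(phi(r)) * log2(n).
        limit = floor(sqrt(euler_phi(r)) * log2(n))
        return all(congruence_holds(a, n, r) for a in range(1, limit + 1))

Almost all of the running time is spent in the final loop of polynomial exponentiations, which is precisely the cost that the bounds on r are designed to control.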

These changes improved the flow of the proof of correctness. They also allowed for an improved bound on the time complexity, which is now O(log^{10.5} n).

Lenstra and Pomerance show[2] how to choose polynomials in the test such that a time bound of Õ(log^{6} n) is achieved.

References

  1. ^ Manindra Agrawal, Neeraj Kayal, Nitin Saxena, "PRIMES is in P", Annals of Mathematics 160 (2004), no. 2, pp. 781–793.
  2. ^ H. W. Lenstra, Jr. and Carl Pomerance, "Primality Testing with Gaussian Periods", preliminary version July 20, 2005.
