Talk:Primality test
From Wikipedia, the free encyclopedia
"The Miller-Rabin primality test and Solovay-Strassen primality test are more sophisticated variants which detect all composites"
- If the methods detect all composites, why are they categorized as probabilistic rather than deterministic? Joestynes 08:31, 2 Mar 2005 (UTC)
- These two categories are unrelated. An algorithm is probabilistic if it makes random "coin tosses" during the computation, and produces the correct answer with high probability. An algorithm is approximative if it produces the correct answer only for an (overwhelming) majority of inputs. You can have a deterministic approximative test for primality (e.g., the base-2 Fermat test), and you can have a probabilistic exact test (e.g., Rabin-Miller). The "probability" in "probabilistic test" refers to the internal coin tosses, not to some probability distribution on the inputs. -- EJ 13:49, 2 Mar 2005 (UTC)
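To make the "deterministic approximative" case above concrete, here is a short sketch of the standard base-2 Fermat test (not code from the article). It is fully deterministic, yet wrong on certain inputs: 341 = 11 × 31 is the smallest base-2 Fermat pseudoprime, so the test misclassifies it on every run.

```python
def fermat_base2(n):
    """Base-2 Fermat test for odd n > 2: declares n 'prime' when
    2^(n-1) is congruent to 1 (mod n). Deterministic, but approximative:
    it errs on base-2 pseudoprimes such as 341."""
    return n > 1 and pow(2, n - 1, n) == 1

print(fermat_base2(97))   # True  -- 97 is prime
print(fermat_base2(341))  # True  -- but 341 = 11 * 31 is composite
print(fermat_base2(91))   # False -- composite, correctly rejected
```

Note the contrast with a probabilistic exact test: this function gives the same wrong answer for 341 no matter how many times it is run, because there is no randomness to repeat.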
- Maybe the headings should be changed to "Approximative" and "Deterministic", rather than "Probabilistic" and "Fast Deterministic", to avoid creating a false dichotomy. Joestynes 01:28, 3 Mar 2005 (UTC)
- This would be wrong. As I explained above, most of the probabilistic tests mentioned in the article are NOT approximative, and approximative vs. deterministic is not a dichotomy at all. -- EJ 12:24, 3 Mar 2005 (UTC)
Computer proof vs pen-and-paper proof
The article says the following:
- This is orders of magnitude less than the probability that the computation will be corrupted by hardware errors, software bugs, mistyping the input, some user's death due to heart attack during the computation, or an airplane crashing on your computer lab. All of these affect deterministic tests as well... If you want to know with certainty that a number is prime, you cannot use a computer at all; you have to find a mathematical proof of its primality, short and simple enough that you can write it down on a piece of paper and verify by hand that it is correct.
I disagree with this, or at least, I think it's expressing a POV. Surely a human prover can also make an error in his proof, and in his subsequent verification? I realise that this is a debatable (and philosophical) matter, but I think that the above, particularly the bold phrase, needs to be reworded. Personally, for various particular combinations of humans, computers and numbers, I would trust a computer proof of primality over a human proof ;-) — Matt Crypto 22:08, 22 May 2005 (UTC)
- I have removed the following section (between the rules). Even though it is useful information (pace NPOV concerns) and even though my queries above may have prompted its addition, it doesn't really belong on the primality testing page; it belongs on Randomized algorithm, Approximation algorithm and/or Deterministic algorithm. Primality tests are particular instances of these, so appropriate links and categorization should suffice. Of course, each listed method should state both whether it's deterministic or randomised and whether it's approximative or exact. I'm not qualified to do that. Joestynes 08:18, 3 Jun 2005 (UTC)
Popular misconceptions about randomized algorithms
- A randomized test is good for most numbers, but a small number of composites will be declared prime. — this is not the case. The test is only required to succeed with a certain probability, but the probability is not taken over the inputs; it is taken over additional auxiliary numbers used in the algorithm. The test must be correct with high probability for every input number n. Having said that, it may also be worthwhile to consider tests which are incorrect on a certain fraction of inputs; such algorithms are called heuristic or approximation algorithms. Notice that heuristic algorithms may be deterministic as well as randomized; it's a different kind of classification.
- A randomized test does not determine with certainty whether a number is prime or not. Deterministic tests are no better in this respect. If you make, say, 25 iterations of the Miller-Rabin test, the algorithm as such errs with probability smaller than 10^−15. This is orders of magnitude less than the probability that the computation will be corrupted by hardware errors, software bugs, mistyping the input, some user's death due to heart attack during the computation, or an airplane crashing on your computer lab. All of these affect deterministic tests as well (in fact, for general numbers, the deterministic test will take far longer than a reasonable number of randomized test iterations, and hence be more exposed to hardware errors). If you want to know with certainty that a number is prime, you cannot use a computer at all; you have to find a mathematical proof of its primality, short and simple enough that you can write it down on a piece of paper and verify by hand that it is correct. Of course, your proof may employ algorithms as mathematical entities; but then again, deterministic and randomized tests are no different: if you prove that the Miller-Rabin test declares a number n as prime with probability at least 1/2, it is a valid proof that n is prime.
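For readers meeting this discussion cold, here is a minimal sketch of the kind of test being talked about: standard textbook Miller-Rabin (not code taken from the article). Each round errs on a composite with probability at most 1/4, so 25 independent rounds give an error bound of 4^−25, which is below 10^−15 as claimed above.

```python
import random

def miller_rabin(n, rounds=25):
    """Randomized primality test. 'False' is always correct (a witness to
    compositeness was found); 'True' means prime with error probability
    at most 4**(-rounds) for any composite n -- the probability is over
    the random bases a, not over the inputs."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)   # random "coin toss"
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses that n is definitely composite
    return True  # prime, up to the stated error probability

print(miller_rabin(2**61 - 1))   # True: a Mersenne prime
print(miller_rabin(101 * 103))   # False (with overwhelming probability)
```

Note how this differs from the deterministic-but-approximative Fermat test: repeating the rounds drives the error probability down for every fixed input, which is exactly the sense in which "probabilistic" and "approximative" are independent classifications.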
- In a perfect world, appropriate links would suffice. In reality, people do not follow links, especially if they have a false impression that the link explains something well-known or obvious. Moving the section to Randomized algorithm would be useless, as all the information is already there. EJ 12:14, 9 Jun 2005 (UTC)
- I think it wouldn't hurt to briefly summarize some of the concerns about probabilistic algorithms in this context, since this is where many people first encounter probabilistic algorithms — then we tell them to follow the link for more information. Deco 03:00, 21 June 2006 (UTC)