User talk:EmilJ
Thanks for the tip
And for cleaning up the hyphens, dashes, and minus signs in the quadratic residue article (I'd swear that somewhere I read that – was proper for formulas; I didn't even realize that − existed).
I can see the difference between – and − by juxtaposing them (–−): the minus is raised a pixel or so higher. Subtle.
Thanks again.
Virginia-American (talk) 17:10, 1 April 2008 (UTC)
Probability of the Solovay–Strassen primality test
The failure probability of the Solovay–Strassen primality test can be bounded more specifically than by the usual 1/2. I will find the reference in the textbook that a math teacher of mine owns. Meanwhile, I have my own version (the math teacher made me re-calculate everything). It involves the use of Bayes' rule.
"To derive the probability of failure of the Solovay–Strassen primality test, Bayes' rule is used. In this problem, event A will be the event that n survives m trials and event B the event that n is composite. So, the probability that a number n is composite given that it survives m trials is what we are looking for. P(A|B) (the probability that a composite number n survives m trials) is less than or equal to 2^{-m}, because at most half of the numbers can be liars. P(A) is expanded below by the law of total probability: P(A) = P(A|B)P(B) + P(A|B^c)P(B^c). P(B) is the probability of choosing a composite number. The probability of choosing a prime number is about 1/ln n; however, to increase our chances, we will discard the obviously non-prime, i.e. even, numbers, doubling our chance to 2/ln n. Thus the probability of choosing a composite number is 1 − 2/ln n, and P(A|B^c) = 1, since if n is prime the number of trials it can survive is infinite.
So P(B|A) = P(A|B)P(B)/P(A) ≤ 2^{-m}(1 − 2/ln n) / (2^{-m}(1 − 2/ln n) + 2/ln n) = (ln n − 2)/(ln n − 2 + 2^{m+1})."
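For concreteness, the bound just quoted can be evaluated numerically. The following is a minimal sketch, not part of the quoted derivation; the function name stinson_bound and the sample values of n and m are illustrative assumptions.

    import math

    def stinson_bound(n, m):
        # Bayes-style bound on P(n composite | n survived m rounds),
        # assuming P(survive one round | composite) <= 1/2 and
        # P(prime) ~ 2/ln n for a uniformly random odd number near n.
        q = 2 / math.log(n)                      # chance of hitting a prime
        return (2**-m * (1 - q)) / (2**-m * (1 - q) + q)

    for n in (10**6, 10**100, 10**1000):
        for m in (1, 20, 50):
            print(f"n=10^{len(str(n)) - 1}, m={m}: {stinson_bound(n, m):.3g}")
    # For a fixed m the bound creeps towards 1 as n grows;
    # it only becomes small when m itself is taken large.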
Does my probability still seem dubious?
--Heero Kirashami (talk) 22:44, 27 April 2008 (UTC)
- You are seriously confused about what a probabilistic algorithm is, and how its probability of failure is defined. The statement "the probability of failure of the algorithm A is at most p" means "for every input n, the probability (over its internal coin tosses) that A fails to give the correct answer on input n is at most p". It does not involve any probability distribution on the inputs in any way. (One of the reasons being to rule out trivial "algorithms" like "ignore the input and return composite", which according to your definition would correctly compute primality with the negligible error of about 1/ln n.) So, while your computation might be correct (I did not check it), it is totally misguided: the number computed has nothing to do with the error probability of the algorithm.
- Furthermore, the bound as you inserted it in the article uses a mysterious parameter m which does not appear anywhere in the article, and the bound is actually worse than the usual 1/2 bound (or 2^{-m} with m trials, in your notation), as asymptotically (ln n − 2)/(ln n − 2 + 2^{m+1}) → 1 for n → ∞ with m fixed. For the future, note also the correct formatting of \ln in TeX. — EJ (talk) 11:39, 28 April 2008 (UTC)
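For readers following the thread, here is a minimal sketch of the Solovay–Strassen test itself, with m independent rounds, to make the worst-case guarantee concrete: for every odd composite input n, each round accepts with probability at most 1/2 over the random base, hence all m rounds accept with probability at most 2^{-m}. The helper names jacobi and solovay_strassen and the default round count are choices of this sketch, not taken from the discussion.

    import random

    def jacobi(a, n):
        # Jacobi symbol (a/n) for odd n > 0, via quadratic reciprocity.
        a %= n
        result = 1
        while a != 0:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    def solovay_strassen(n, m=10):
        # Returns False only if n is certainly composite; True means
        # "probably prime" (n passed m independent rounds).
        if n < 5 or n % 2 == 0:
            return n in (2, 3)
        for _ in range(m):
            a = random.randrange(2, n - 1)       # random base in [2, n-2]
            j = jacobi(a, n)
            # Euler's criterion: for prime n, a^((n-1)/2) = (a/n) (mod n).
            if j == 0 or pow(a, (n - 1) // 2, n) != j % n:
                return False                     # a is an Euler witness
        return True

    print(solovay_strassen(2**61 - 1), solovay_strassen(1729))
    # 2^61 - 1 is prime; 1729 is composite but fools roughly 37% of the
    # bases in a single round, so several rounds are needed in practice.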
- However, note that the original calculations of an author much smarter and less confused than an eighth grader were correct. And he actually knew how to define the actual terms. I'm still somewhat confused about the terms, but I know how to do the calculations. If I can reference the book, then I will place it in the article. --Heero Kirashami (talk) 22:26, 28 April 2008 (UTC)
- You see, the problem is not so much with the computation, but with your interpretation of the result. What you thus need to check is not the name of the book, but the actual formulation of the statement, and its meaning in the context. Given your own admission of being "somewhat confused about the terms", I will not hesitate to revert any addition which is at odds with standard and well-known facts about the algorithm. — EJ (talk) 11:03, 30 April 2008 (UTC)
- However, I have read through it carefully, and so I have found the source. If you would like to check it, fine. However, even additions that are "at odds with standard and well-known facts about the algorithm" may be correct. As Cryptography: Theory and Practice, by Douglas R. Stinson, states (I'm quoting this directly, but rephrasing will be necessary for the article): "If we have run the algorithm m times, what is our confidence that n is prime? It is tempting to conclude that the probability that such an integer n is prime is 1 − 2^{-m}. This conclusion is often stated in both textbooks and technical articles, but it cannot be inferred from the given data." Thus, I do believe it contradicts commonly referenced facts about the distribution. However, just because everyone else assumed the world was flat didn't mean it actually was.
- Thus, the computation is correct (and, given that m will be defined as the number of trials, completely sensible; further, it is not n, but m, which is going to infinity, so it is still approaching zero; and further, it does not matter whether 2^{-m} is better, it matters which one is correct); however, his formulation of the statement was essentially the same as mine. Thus, I think that my result is correct. I am willing to reformat it so it matches the article. However, I will wait for your "approval." If you need proof, then find a copy of Cryptography: Theory and Practice, Second Edition, by Douglas R. Stinson (you can find its ISBN at Books_on_cryptography, but you can probably find a friend or someone with it, or try a library), and go to page 178 (or the chapter "The RSA Cryptosystem and Factoring Integers"). --Heero Kirashami (talk) 00:01, 7 May 2008 (UTC)
- Probabilistic algorithms and their probability of error are mathematical objects with an exact definition, which you can find in any standard textbook on computational complexity (e.g., Papadimitriou). They have nothing to do with anybody's confidence in anything. Confidence is a subject of psychology, not computer science. It is a proved mathematical fact that the probability of failure of the Solovay–Strassen algorithm with m rounds is at most 2^{-m} for every n. It is likewise a hard mathematical fact that this bound is optimal: there are many inputs n which attain it: 1729, 2465, 15841, ... (in general, any Carmichael number n such that 2(p − 1) | n − 1 for every prime divisor p of n). You cannot argue with it any more than you can argue with 1 + 1 = 2, so what you say about the flat world is just babbling nonsense. Finally, it makes no sense to say that "it is not n, but m which is going to infinity". In theory, it is customary to compare randomized algorithms using just one round (i.e., m = 1). In practical applications, both parameters are of course bounded, but m behaves as a constant much more than n does (m is usually a small number like 5 to 10, whereas n has hundreds of bits).
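To see concretely that the bound is attained, one can count the Euler liars of the first number on that list by brute force. This is a small illustrative check, assuming nothing beyond the definition of an Euler liar and Euler's criterion for the Legendre symbol; the known factorization 1729 = 7 · 13 · 19 is used to evaluate the Jacobi symbol.

    from math import gcd, prod

    def legendre(a, p):
        # Legendre symbol (a/p) via Euler's criterion, for odd prime p.
        t = pow(a, (p - 1) // 2, p)
        return -1 if t == p - 1 else t

    n, factors = 1729, (7, 13, 19)               # 1729 = 7 * 13 * 19
    units = [a for a in range(1, n) if gcd(a, n) == 1]
    liars = [a for a in units
             if pow(a, (n - 1) // 2, n) == prod(legendre(a, p) for p in factors) % n]
    print(len(liars), len(units))                # prints 648 1296
    # Exactly half of the invertible residues are Euler liars, so a single
    # round errs with probability 1/2 when the base is drawn from the units
    # coprime to n; m independent rounds then err with probability 2^-m.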
- Your derivation above makes it clear that you are not computing a bound on the failure probability of the algorithm, but the conditional probability of pronouncing prime a uniformly randomly chosen composite integer in [1, n]. Actually, that's still not quite correct, as you introduced, for no good reason, another complication by excluding even numbers. (Why? Why not also exclude multiples of 3? Or 5? And so on.) So the actual description is that it is supposed to be a bound on the probability of pronouncing prime a uniformly randomly chosen odd composite integer in [1, n].
- Should it be mentioned in the article? No, I say. Neither the fact that it appeared as an example in some book, nor the fact that you were able to recompute it yourself, makes it notable for inclusion in an encyclopedia. For one thing, there is no explanation of why anyone should be interested in a parameter with such a ridiculously complicated description. Much more importantly, the appearance of the bound in the article would give the impression that it is a realistic estimate, which is completely false. The actual probability is much, much smaller. This is due to the fact that the number is actually equal to p(1 − 2/ln n) / (p(1 − 2/ln n) + 2/ln n), i.e., your expression with 2^{-m} replaced by p, where p is not the usual (maximal) probability of error of the algorithm, but its average probability of error taken over uniformly random odd composite integers in [1, n]. This p is significantly smaller than 1/2. For a trivial bound, it is less than 4/π² ≈ 0.41 already for m = 1, because at most half of the units modulo n can be Euler liars, and φ(n)/n is 6/π² on average (8/π² when averaged over odd n only; see totient#Growth of the function, or any textbook on number theory). For a stronger bound, Damgård, Landrock, and Pomerance give several bounds on the average probability of error of the closely related Miller–Rabin algorithm, one for m = O(log n) and another for larger m (see that article for the exact statements and the reference). (The point to observe in those somewhat complicated expressions is that the bounds are exponentially small not only in m, but also in log n.) I am not aware of such a bound being published for Solovay–Strassen (presumably because nobody gives a damn about the Solovay–Strassen algorithm any more, as Miller–Rabin is better in all respects), but the similarity of the two algorithms and their analyses strongly suggests that a bound of similar growth rate should hold for Solovay–Strassen as well.
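The "trivial bound" above can be checked empirically. The following small sketch uses an illustrative cutoff N and a sieve-based totient computation: it prints the average of φ(n)/n over n ≤ N, which approaches 6/π² ≈ 0.61, and the average of φ(n)/(2(n − 1)) over odd composite n ≤ N, a crude upper bound on the average one-round error that already sits visibly below 1/2; per the discussion above, the true average error is expected to be far smaller still.

    import math

    N = 10**5
    phi = list(range(N + 1))                     # totient sieve: phi[n] = n initially
    for p in range(2, N + 1):
        if phi[p] == p:                          # p is prime (still untouched)
            for k in range(p, N + 1, p):
                phi[k] -= phi[k] // p

    avg_ratio = sum(phi[n] / n for n in range(1, N + 1)) / N
    print(avg_ratio, 6 / math.pi**2)             # ~0.608 vs 6/pi^2 = 0.6079...

    odd_composites = [n for n in range(9, N + 1, 2) if phi[n] != n - 1]
    crude = sum(phi[n] / (2 * (n - 1)) for n in odd_composites) / len(odd_composites)
    print(crude)                                 # roughly 0.4, already below 1/2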
- So, apart from your bound being hardly useful, it is also highly misleading, as it is badly suboptimal. I thus cannot agree with putting it in the article. If you want to make yourself useful, you can search a library to find out whether there isn't a published paper extending the Damgård, Landrock, and Pomerance results to Solovay–Strassen after all (though it does not look very promising), instead of keeping on pushing your bound. — EJ (talk) 10:45, 9 May 2008 (UTC)
- Seeing as we have reached an agreement, you should put it in, as I am still partly confused, and you can probably write it a lot better than I can. And it's definitely true, too, that no one gives a damn about Solovay–Strassen because Miller–Rabin is so much faster, with an equal or better probability of...working (that's the best I can say). And even though I have no idea what the heck your number is, I don't disagree, because it's probably for the general case (I think) instead of just for odds, and also I am not too good with algebraic manipulation when there are so many logs. It's great that we've both grown as mathematicians, then! Thanks! For me, it would probably take...15,000 years to look through papers! --Heero Kirashami (talk) 06:10, 13 May 2008 (UTC)
- ^ P. Erdős, C. Pomerance, On the number of false witnesses for a composite number, Mathematics of Computation 46 (1986), no. 173, pp. 259–279.
global account
I'm making a usurpation request for the account EmilJ on da. — Emil J (talk) 10:19, 4 June 2008 (UTC)