Talk:Prime number theorem


Miscellaneous

Not sure how to change that myself, but the mathematical notation here is wrong: http://en.wikipedia.org/math/a2184dbe1f8958ef9bf635237ddf9573.png It should not be the approximation symbol (two waves) but the asymptotic symbol with one wave. Strictly speaking the approximation symbol is wrong, since Pi(x) is _not_ approximated by x/ln(x); rather, only the limit of Pi(x)*ln(x)/x is 1, which is a different statement. See also: http://mathworld.wolfram.com/PrimeNumberTheorem.html
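For reference, the statement at issue, written with the one-wave asymptotic symbol, is:

\pi(x) \sim \frac{x}{\ln x}, \qquad \text{i.e.} \qquad \lim_{x\to\infty} \frac{\pi(x)\,\ln x}{x} = 1.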


Could someone put a proof of the Prime Number Theorem here?

Paul Erdos, the legendary genius, was the first to provide an 'elementary' proof of the prime number theorem. Should this be added?

I don't think so. Any non-elementary proof requires considerable background and machinery from complex analysis, and the proof runs several pages. The "elementary" proof is even more difficult to follow, requires lots of preliminary estimates and results, and the proof is several pages long.

"The constant involved in the O-notation is unknown." Which constant?

I assume this talks about the inherent constant factor that comes with O-notation. The line about Erdös (not Erdos!) should probably be added, but without the "the legendary genius" part. -- Schnee 12:22, 5 Sep 2003 (UTC)

I'd like to see added: methods for calculating pi(x), other than brute force. Bubba73 05:25, 18 Jun 2005 (UTC)


"The Selberg-Erdős work effectively put paid to the whole concept" - is "paid" a wrong word? Bubba73 05:28, 18 Jun 2005 (UTC)

Correct TeX

Do not write log\ x. That looks like this:

log\ x.

Instead write \log x. That looks like this:

\log x.\,

Use of backslashes in \log, \exp, \sin, \cos, \max, \min, \det, \lim, \sup, \inf and a number of other math operators has at least three effects:

  • log and the like don't get italicized as if they were variables;
  • proper spacing is automatic. How much space is needed depends on the context, but all of the standard conventions in this regard are built into the TeX software;
  • in some cases, e.g., \lim, \max, etc., the subscript will appear directly below the operator, rather than below and to the right, thus:
\max_{x>0} f(x).\,
\lim_{x\to 0} f(x).\,

Michael Hardy 23:02, 21 Jun 2005 (UTC)

Thanks, I'm the one that used log\. (I started with TeX less than a week ago.) Bubba73 23:51, 21 Jun 2005 (UTC)

logarithmic integral

There are some subtle problems with the logarithmic integral in this article: first, the asymptotic expansion is for the regular logarithmic integral function and not for the offset logarithmic integral. To be consistent with the notation in other articles in WP, and with Abramowitz & Stegun, we need to change Li to li throughout this article. Sooo... instead of being bold, I thought I'd ask here ... who put Li in there? Is there a well-known number theory book using this notation? linas 21:24, 21 December 2005 (UTC)

p.s. in case there's confusion about the asymptotic expansion, one has li(x)=Ei(ln x), with Ei the exponential integral, and a detailed derivation of the asymptotic expansion for li(x) is given in the article on asymptotic expansions. However, both this article and the Riemann hypothesis article are using Li(x). linas 21:31, 21 December 2005 (UTC)
Linas, I don't understand you. Why do you want to change Li to li? I think li = logarithmic integral = int_0^x dt/ln t and Li = offset logarithmic integral = int_2^x dt/ln t; the article uses Li because it is about the offset logarithmic integral. I'm writing from my parents' so I can't really check anything. Then, which asymptotic expansion are you talking about? The expansions for the regular logarithmic integral and the offset logarithmic integral are the same, as they differ only by a constant term, which is smaller than the terms in the expansion. Am I making any sense? -- Jitse Niesen (talk) 21:51, 21 December 2005 (UTC)
Yes, well Li(x) ≈ li(x)-1.05... so by using big-O notation, one can drop this constant, and at this level, the asymptotic expansions are "the same". But if you work the details, the quoted asymp expansion is really for li(x) not Li(x), and if taken literally, will give you accurate estimates that are off by 1.05... however, maybe this isn't important. linas 22:59, 21 December 2005 (UTC)
"the quoted asymp expansion is really for li(x) not Li(x)" — you seem to be using some intuition here that I do not have.
No, not at all; I merely went through the actual derivation of the asymp expansion the other day (for an unrelated reason), and in the derivation, it's li, not Li, that comes out. (It's half a page of paper total; the bulk of the work is already done in the example in asymptotic expansion).
"accurate estimates that are off by 1.05..." — did you check this? Where do you truncate the expansion?
I'm merely noting that li(2)=1.05... no I did not check the asymp expansion numerically. The "rule of thumb" for truncating asymp. expansions to get the best accuracy is to cut them off at the smallest term. But you are right, after such a cut, the result could be off by 10 or 100 so arguing about 1.05 is silly, so my bad. Although, I'm now motivated to actually plug in a few numbers ...
If you want, you can move the expansion a bit up and say it is the expansion for li. The whole article could use a check; for instance, it now seems that the table uses li(x) instead of Li(x). -- Jitse Niesen (talk) 14:29, 22 December 2005 (UTC)
Maybe. If I do anything here, it'll probably be minimalist. I'm trying to work on something else. linas 16:21, 22 December 2005 (UTC)
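
For anyone who wants to check the li/Li distinction in this thread numerically, here is a minimal sketch (assuming SciPy is available; the helper names li and Li are just for this illustration). It uses li(x) = Ei(ln x) and the offset version Li(x) = li(x) - li(2), so the constant of roughly 1.045 discussed above shows up as the difference in the last column.

# Sketch: compare li(x) = Ei(ln x) with the offset Li(x) = li(x) - li(2).
from math import log
from scipy.special import expi   # Ei, the exponential integral

def li(x):
    # logarithmic integral li(x) = Ei(ln x), valid for x > 1
    return expi(log(x))

def Li(x):
    # offset logarithmic integral Li(x) = li(x) - li(2)
    return li(x) - li(2)

for x in (10**2, 10**4, 10**6):
    print(x, li(x), Li(x), li(x) - Li(x))   # last column is always li(2), about 1.045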

please excuse brain dump

Please someone make sense of the following brain dump and work it into the article.

I haven't studied this stuff for years now, but from what I recall, Riemann's prime counting function J(x) (see the article on prime counting function) is in some sense a much more natural candidate than π(x) to be approximated by Li(x). I can't remember exactly why; I think it has to do with the fact that J(x) is closely connected with ψ(x) which in turn is the summatory function of the von Mangoldt function Λ(n) which has very nice convolution properties.

J(x) can be defined (with caveats -- see prime counting function) by

J(x) = \sum_{n \geq 1} \frac1n \pi(x^{1/n}).

Then by Möbius inversion you get

\pi(x) = \sum_{n \geq 1} \frac{\mu(n)}n J(x^{1/n}).

(There are only finitely many terms in this sum.) So if J(x) is approximated well by Li(x) then you would expect π(x) to be approximated well by

R(x) = \sum_{n \geq 1} \frac{\mu(n)}n \operatorname{Li}(x^{1/n}),

whose leading term is just Li(x). There are issues of convergence to worry about here; I haven't thought through it very hard. Also I'm not sure whether you should be using Li or li. Also I'm not sure if R(x) is standard notation for this. And I don't have any references.

Anyway, if you shove this through maple or something, you get the following values of R(x) for x being successive powers of ten, truncated using the "floor" function:

4, 25, 168, 1226, 9587, 78527, 664667, 5761551, 50847455, 455050683, 4118052494, 37607910542, 346065531065, 3204941731601, 29844570495886, 279238341360977, 2623557157055977, 24739954284239494, 234057667300228940, 2220819602556027015, 21127269485932299723, 201467286689188773625, 1925320391607837268776...

I'm not an expert at numerical evaluation, so I wouldn't be surprised if these were off by one somewhere. I tried with varying number of terms and lots of digits of accuracy to check that the answers were stable.

If you compare this to the table in the article, you'll see it's much more impressive than the other estimates, especially for small x. (The difference even changes sign.) However, I believe I read somewhere that it has been proved that in the long run it's asymptotically no better than just using Li(x), I'm guessing because there are indeed plenty of zeroes of ζ with real part 1/2, and the contributions from these are at least as big as the "correction" terms μ(n)Li(x1 / n) / n for n at least 2. Dmharvey 02:01, 25 December 2005 (UTC)
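
If anyone wants to reproduce (roughly) the numbers above without Maple, here is a sketch in Python, again assuming SciPy. The mobius and R helpers are mine; the sum is truncated once x^(1/n) drops below 2 (the remaining terms are tiny), and li is used rather than Li, which only shifts each term by about 1.045·μ(n)/n. Double precision won't match the largest entries in the list above exactly.

# Sketch: Riemann's R(x) = sum_{n>=1} mu(n)/n * li(x^(1/n)), truncated at x^(1/n) < 2.
from math import log, floor
from scipy.special import expi

def li(x):
    return expi(log(x))          # li(x) = Ei(ln x)

def mobius(n):
    # Moebius function by trial factorization (fine for the small n needed here)
    if n == 1:
        return 1
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:       # repeated prime factor, so mu(n) = 0
                return 0
            result = -result
        p += 1
    if n > 1:                    # leftover prime factor
        result = -result
    return result

def R(x):
    total, n = 0.0, 1
    while x ** (1.0 / n) >= 2:
        total += mobius(n) / n * li(x ** (1.0 / n))
        n += 1
    return total

for k in range(1, 11):
    print(10**k, floor(R(10**k)))   # compare with the list of values above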

Elementary Stuff

This article is fairly complex, as it should be given the nature of the topic. I've been thinking about this problem on a very basic level.

According to Gauss, every number can be expressed as the product of primes. For example, 14 is 2x7, both of which are primes.

Therefore prime numbers are the building blocks for other numbers.

An easy way to understand why prime numbers become sparser as the numbers grow follows.

Take a few prime numbers, say 2, 3, 5, and 7. With just these numbers you can create the numbers 2x3 = 6, 2x5 = 10, 2x2x3 = 12, 2x7 = 14, etc.

It becomes obvious that a new prime number will occur whenever a combination of old primes is not possible, leaving a gap in the set of all numbers.

If we want to know how many primes there are, we might think in reverse: how many combinations of old primes are there? As the number of these combinations increases, they fill up more and more of the number set. This explains why the proportion of primes falls off as x grows larger.

Going back to the original question: how many prime numbers are there?

Once again look at a few simple primes say, 2,3 and 5.

The number 2 can be used over and over again to create new numbers.

2+2 = 4 .. 2+2+2 = 6 .. 2+2+2+2 = 8

(2x2 = 4) (2x3=6) (2x4=8)

All of these numbers can be expressed using factors and multiplication. They are all non-prime numbers because they have divisors other than one and themselves.

By looking at the number 2 we can see that at least 1/2 of all numbers are non-prime.

Then if we examine 3, we can see that it's responsible for 1/3 of all numbers being non-prime.

However, we can't simply add 1/2 and 1/3 to increase our total of non-prime numbers, because some of them overlap (for example 2x3=6 which is 2+2+2 or 3+3).

This sort of examination is similar to the sieve method. - BringCocaColaBack
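
A tiny numerical sketch of the inclusion-exclusion idea above (just for the primes 2 and 3, with an arbitrary cutoff N of my own choosing): about 1/2 of the numbers up to N are struck out by 2, 1/3 by 3, and 1/6 are counted twice, leaving roughly 1 - 1/2 - 1/3 + 1/6 = 1/3 of them divisible by neither.

# Sketch: inclusion-exclusion over the primes 2 and 3, up to an arbitrary cutoff N.
N = 1000
div2 = N // 2                    # multiples of 2
div3 = N // 3                    # multiples of 3
div6 = N // 6                    # multiples of both, counted twice above
neither = N - div2 - div3 + div6
print(neither, N * (1 - 1/2 - 1/3 + 1/6))   # both come out near N/3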

I'm not sure what you are trying to accomplish here. Euclid already knew of (something very close to) the fundamental theorem of arithmetic and the infinitude of primes. Gauss certainly wasn't the first one to prove that. Dragonfall 04:25, 3 March 2007 (UTC)

graph

I noticed the graph on the Prime Number Theorem page is wrong, and I don't know who to contact so I am posting this here: the blue line should be Li(x) and the red one pi(x), and not the other way around, as for small values of x, Li(x)>pi(x). Thanks

I'm not sure, but it might have something to do with some disagreement that's been going on about the correct definition of Li(x), and whether it's really the definition of li(x), and which one should be used in the article. (They differ by some small constant.) Dmharvey 19:50, 15 April 2006 (UTC)

Yes, they differ by 1.045 but that is negligible. I think the graph is just wrongly marked...

Am I missing something? The graph is not "marked" ! There is no indication whatsoever of which color line represents which function. Is there some common knowledge that everyone but me holds which would allow them to know at a glance without a label which curve is which?
Also the "thumbnail" version is impossible to see. When I clicked on it to get the larger version I lost all information about it. Could someone edit the information on the large image to say what it is?
Nwbeeson 21:16, 30 January 2007 (UTC)
When I click on the graph, I get a large image with colors explained below. The small image seems a little useless. Maybe it should say "Click for large image and explanation" or something like that. PrimeHunter 00:18, 31 January 2007 (UTC)

Dusart

The 1999 Dusart paper gives a stronger bound than the one given in the article.

Article: p_k\ge k(\ln k+\ln\ln k-1),k\ge2

On p. 414 §4: p_k\ge k\left(\ln k+\ln\ln k-1+\frac{\ln\ln k-2.25}{\ln k}\right),k\ge2

The article's result is correct, but the latter result is stronger for k > 13197. I'm choosing to leave this off because the extra complexity isn't needed here, but if someone feels it's worthwhile they can add it.
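
A quick sketch in Python, just to illustrate where the longer bound overtakes the simpler one (the function names are mine; the crossover comes from the extra term (ln ln k - 2.25)/ln k turning positive, roughly at k = e^(e^2.25) ≈ 13200, consistent with the k > 13197 remark above).

# Sketch: compare the two lower bounds for the k-th prime quoted above.
from math import log

def bound_simple(k):
    return k * (log(k) + log(log(k)) - 1)

def bound_stronger(k):
    return k * (log(k) + log(log(k)) - 1 + (log(log(k)) - 2.25) / log(k))

k = 2
while bound_stronger(k) <= bound_simple(k):
    k += 1
print(k)   # first k where the longer bound is strictly larger (about 13197)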

At the moment I'm trying to read his thesis Autour de la fonction qui compte le nombre de nombres premiers to see if there are any other results along these lines. My French isn't very good, but I'll see what I can find. Pages 30–32 cover it, so I just have a bit to wade through. CRGreathouse 11:50, 25 May 2006 (UTC)


Table of π(x), x / ln x, and Li(x)

The first sentence in the section is, "Here is a table that shows how the three functions π(x), x / ln x and Li(x) compare:" but the table does not have any column labeled "x/ln(x)" or "Li(x)".

Am I expected to solve the equations at the top of the columns to determine which column is "x/ln(x)" and which is "Li(x)" ?!?

OR is there a mistake in that sentence?

Nwbeeson 21:20, 30 January 2007 (UTC)

x/ln(x) and Li(x) are not shown. The interesting thing is their difference from pi(x), so only that is shown. I modified the explanation. PrimeHunter 00:18, 31 January 2007 (UTC)