Talk:Graham's number
From Wikipedia, the free encyclopedia
== ? ==
I don't understand. If the number is larger than the number of elementary particles in the universe, then why could it possibly matter? How/why was it used in proofs? If it's too large to even think about, what's the point? What is the largest number that does "matter"? I define numbers that matter as numbers that can be used (i.e. "I have n dollars" or "there are n of those in the world").--Ledlaxman24 01:45, 21 April 2006 (UTC)
- If you have 70 people standing in line to get into a movie or a concert, that isn't too many. Yet there are over a googol (10^100) ways in which those mere 70 people could end up being arranged in line. This is more than the number of particles in the visible universe, yet the situation I've described is easy to imagine. Or think about how many possible 40-move chess games there are. If you do some calculations (see Shannon number) you get around 10^120. A computer can't calculate all those possibilities because, again, there isn't enough space in the entire universe to store them all. Does that mean they don't exist? To me it means we have certain scenarios in which we have humongous numbers and we find it useful to develop some techniques to deal with them. Kaimiddleton 07:36, 22 April 2006 (UTC)
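Kaimiddleton's claim about the 70 people is easy to machine-check with exact integer arithmetic; a small sketch (mine, not from the thread), where 10^100 is a googol:

```python
from math import factorial

# Number of ways to arrange 70 people in a line, computed exactly
arrangements = factorial(70)

print(arrangements > 10**100)   # True: 70! is slightly bigger than a googol
print(len(str(arrangements)))   # 101 decimal digits
```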
Thanks, now it makes sense. Could you add that in somewhere on the page (preferably at the top)? I'm in tenth grade geometry, and didn't understand any of the hypercube stuff. I think it would benefit everyone (especially non-mathematicians) if you explained it like that.--Ledlaxman24 12:35, 22 April 2006 (UTC)
- Ok, I've reworked the article some to put it into perspective. You might want to check out the article on large numbers for more discussion. Kaimiddleton 04:02, 23 April 2006 (UTC)
== Digits ==
I rather like the listing of the last ten digits; it makes the number seem more concrete somehow. Are there any similar tricks we could do to get the first 10 digits? Yours hopefully, Agentsoo 09:34, 29 July 2005 (UTC)
- Not by any method I know of, but then again, I don't know much. A method to calculate the first ten digits may exist, but if it did, it would certainly be a lot more complicated. Calculating the last digits of the number is quite simple (as the article says, elementary number theory) using modular arithmetic. The powers of the number can go higher and higher but the final digits will cycle, so it is possible to work out. Calculating the first digits would be much more difficult, if possible at all. -- Daverocks 10:12, 8 August 2005 (UTC)
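Daverocks's point about the cycling final digits can be made concrete. This sketch (mine, not from the thread) relies on the standard fact that the last k digits of the tower 3, 3^3, 3^3^3, ... stabilize as the tower grows taller, so iterating x ↦ 3^x (mod 10^k) reaches a fixed point, which gives the last k digits of Graham's number:

```python
def tower_last_digits(k):
    """Last k digits of a sufficiently tall power tower of 3s
    (hence of Graham's number), found as the fixed point of
    x -> 3^x (mod 10^k)."""
    m = 10 ** k
    x = 3
    while True:
        nx = pow(3, x, m)   # next tower level, reduced mod 10^k
        if nx == x:
            return x
        x = nx

print(tower_last_digits(3))   # 387: Graham's number ends in ...387
```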
== Number of digits ==
How many digits does this large number have?? 66.32.246.115
- This large number has a large number of digits. ;) Okay, take the common logarithm of Graham's number, round it down to the nearest whole number, and then add 1. That's the number of digits. I'd write it out in full for you, but there aren't enough particles in the known universe... :) -- Oliver P. 00:26, 9 Mar 2004 (UTC)
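Oliver P.'s recipe (round down the common logarithm, add 1) is the standard digit-count formula; a tiny sketch of it (function name mine), usable only on numbers small enough to actually hold:

```python
from math import floor, log10

def num_digits(n):
    # digits of n = floor(log10(n)) + 1, per the recipe above
    return floor(log10(n)) + 1

print(num_digits(3**27))   # 13: 3^27 = 7625597484987 has 13 digits
```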
So how about the number of digits of THAT number (namely, the number you are saying you'd write down)??
How many times would you have to take the logarithm of the number acquired in the previous step (starting with Graham's number) before you get a number that can be written using the particles in the known universe? :) Fredrik 00:37, 9 Mar 2004 (UTC)
- Even the number of times you would have to take the logarithm, to get a writeable number, is too large to write. That is, you don't just take a logarithm once, or twice, or even a million times; if you took the logarithm as many times as the largest number you could write with all the particles in the universe, you still wouldn't be in the ballpark of being able to write out the result - even if you had any particles left over to do it 8^) In fact, even the number of times you have to take the logarithm is so large that the number of times you have to take its logarithm is too large to write - and so on, and so on, 64 layers deep... Securiger 16:49, 3 May 2004 (UTC)
I don't see any way to visualise the magnitude of the number.... my only conclusion is 'whoa this is large'. Can't we find some kind of magnitude comparison to help out to fathom G? — Sverdrup 00:42, 9 Mar 2004 (UTC)
- Well, if you'll allow me to define numbers in O notation, then Graham's number is O(A64). This grows faster than exponentiation, iterated exponentiation, iterated iteration of exponentiation, etc. So Graham's number simply cannot be written this way, or any related way. Conway's chained arrow notation, however, is sufficient to write Graham's number. --67.172.99.160 02:35, 19 March 2006 (UTC)
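For readers unfamiliar with the arrow hierarchy mentioned above, here is a direct and deliberately naive transcription of Knuth's definition (function name mine); it only terminates for tiny arguments, which is itself a hint of how fast the hierarchy grows:

```python
def up(a, n, b):
    """Knuth's a ^(n arrows) b: one arrow is exponentiation,
    and each extra arrow iterates the previous operator."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))   # 27
print(up(3, 2, 3))   # 3^^3 = 3^3^3 = 7625597484987
print(up(2, 3, 3))   # 2^^^3 = 2^^4 = 65536
# up(3, 3, 3) is already a power tower of 7625597484987 threes
```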
- 'Sides, this oughtn't desterb us any more'n Pi does. --VKokielov 00:36, 4 February 2007 (UTC)
- Pi is infinitely specific; I can still round it to 3. The closest I can come to visualizing Graham's number is ∞-1 (i.e. less than ∞, but still more than I could ever even hope to imagine). --WikiMarshall 06:17, 26 June 2007 (UTC)
- Think of a number, any number. Is it even bigger than G? If you picked randomly, almost surely yes. Graham's number is way, way, smaller than infinity. Feezo (Talk) 08:55, 26 June 2007 (UTC)
- Impossible. You can't pick a random natural number in such a way that every natural number is equally likely. There is no uniform probability distribution over unbounded sets. A similar, but mathematically legitimate, way to make the point is to say that nearly all of the natural numbers are larger than Graham's number. Maelin (Talk | Contribs) 09:24, 26 June 2007 (UTC)
Is there anybody who can even comprehend this? —Preceding unsigned comment added by 63.230.49.232 (talk) 19:15, 5 October 2007 (UTC)
- That depends on how you define 'comprehend'. Nobody can grasp the real magnitude of this number, not even Graham himself, but maybe some mathematicians are able to understand what's involved when you deal with such numbers. That number just happened to have a certain property that made it useful for that proof, but it could well be that it is totally useless for any other purpose, because it's not like you could start calculating stuff with it or even analyze it. Normal people like me can just say "Wow!" and have to accept the fact that we simply don't understand and will never succeed in doing so. 83.79.48.238 (talk) 18:04, 6 February 2008 (UTC)
== An amusing exercise for the student ==
Obviously Graham's number is divisible by three, and hence not prime. And since all powers of three are odd, obviously Graham's number minus one is even, and hence not prime. Question: is Graham's number minus two prime? Prove your result. Securiger 16:49, 3 May 2004 (UTC)
- So, Graham's number is odd? 130.126.161.117 20:09, 19 October 2007 (UTC)
- Yes, that can be proven. An integer is odd if it has no 2's in its prime factorization. Graham's number's prime factorization is 3^(huge number). Georgia guy 21:13, 19 October 2007 (UTC)
- Securiger, I know enough to know that Graham's number minus 2 is not prime. Given that it is a number of the form 3^3^3^3^3^3^3^3... with at least 2 3's, its final digit is a 7. A number ending in 7 will end in 5 if you subtract 2, and apart from the single digit 5, a number ending in 5 is not prime because it has 5 as a factor. Therefore, Graham's number minus 2 is not prime. And, Graham's number minus 3, which is even, is not prime. Georgia guy 22:26, 8 Mar 2005 (UTC)
- Interesting exercise, but I don't quite follow this. How do you prove that 3↑↑n for n > 1 ends in a 7? It's clear to me that if you take the last digits of the sequence 3^0, 3^1, 3^2, ... you get the infinite repetition of {1, 3, 9, 7}, so 3^n for n ≡ 3 (mod 4) must end with a 7, which is simple modulo math. But how do you continue from there? —Preceding unsigned comment added by 83.79.48.238 (talk) 18:48, 6 February 2008 (UTC)
- All 3^n where n is odd are ≡ 3 (mod 4), i.e. they become evenly divisible by 4 if you add 1. And all 3^n where n ≡ 3 (mod 4) (3^3, 3^7, 3^11, 3^15, ...) end in a 7. Since every tower 3^3^...^3 has an odd exponent, I know enough information to show that Graham's number ends in a 7 and that its next-to-last digit is even. Georgia guy (talk) 19:25, 6 February 2008 (UTC)
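Georgia guy's mod-4 bookkeeping can be machine-checked at the bottom of the tower; a small sketch (mine) of each step of the argument:

```python
# Last digits of 3^n repeat with period 4: 3, 9, 7, 1, ...
assert [3**n % 10 for n in range(1, 9)] == [3, 9, 7, 1, 3, 9, 7, 1]

# 3^n is congruent to 3 (mod 4) exactly when n is odd,
# i.e. 3^n + 1 is then divisible by 4:
assert all(3**n % 4 == (3 if n % 2 else 1) for n in range(1, 20))

# Any tower 3^3^...^3 of height >= 2 therefore has an odd exponent
# that is 3 (mod 4), and such exponents give last digit 7:
assert pow(3, 27, 10) == 7                 # 3^27 = 3^^3 ends in 7
assert pow(3, pow(3, 27, 4), 10) == 7      # so does 3^^4: reduce exponent mod 4
print("all checks pass")
```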
== Wow ==
I tried figuring out just how big this number is, and my mind blew a gasket when it got to the 4th Knuth iteration. Yikes. I'm glad there's only 1e87 particles in the universe, otherwise some wacko would already be trying to write it out. --Golbez 06:53, Sep 3, 2004 (UTC)
== Operation needed for Graham's number ==
Recall from the above that Graham's number is too large to visualize simply with the help of tetration. Try the question "What operation is needed to visualize it?" Please highlight it in the list below, defining each operation as being related to the previous one the way multiplication is related to addition.
- Addition
- Multiplication
- Exponentiation
- Tetration
- Pentation
- Hexation
- Heptation
- Oktation
- Enneation
- Dekation
- Hendekation or above
66.245.74.65 23:51, 30 Oct 2004 (UTC)
- None of these are even close to sufficient. Regular exponentiation requires 1 Knuth arrow to be represented, and dekation requires 8. Dekation in itself is a very powerful operator that produces large numbers. But for every one of the 64 layers that Graham's number goes further into, what happens is that the number of arrows is equal to the result of the previous step. 3↑↑3 = 3^27 = 7625597484987, so we have 7625597484987 arrows already. Which is much, much larger than dekation. The next step will have 3^7625597484987 arrows, a gargantuan number. This is just the number of arrows. And we're doing this 64 times. -- Daverocks 07:42, 19 August 2005 (UTC)
- Sorry, I have to correct a fact in my own statement here. You start with g1 = 3↑↑↑↑3, and this is the number of arrows required for the next step, and so on and so on (one does not start with 3↑↑3 = 7625597484987 arrows as I stated above). -- Daverocks 12:12, 30 August 2005 (UTC)
Could you represent Graham's number as 3↑⁶⁴3, i.e. 3 ↑↑...↑↑ 3 with 64 arrows?
- No. You've missed the recursive definition here. In the sequence, g1 = 3↑↑↑↑3. This is already a huge, huge number. Then, to make the next number, we put that huge number of up arrows between two threes, like this: g2 = 3 ↑↑...↑↑ 3 with g1 arrows. The number of up arrows in any gi is equal to the actual number g(i − 1). So g1 is huge, but g2 has g1 up arrows (which makes it really, really big - much, much bigger than g1), and then g3 has g2 up arrows, etc. Maelin (Talk | Contribs) 08:26, 28 December 2006 (UTC)
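Maelin's recursion can be written down exactly even though it can never be evaluated. In this sketch (names mine) `up` is a transcription of Knuth's arrow definition and `graham_layers` documents how each layer's value becomes the next layer's arrow count; iterating the generator for real is hopeless, since even g1 = 3↑↑↑↑3 dwarfs anything computable:

```python
def up(a, n, b):
    # Knuth's a ^(n arrows) b; feasible only for very small inputs
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up(a, n - 1, up(a, n, b - 1))

def graham_layers():
    """g1 = 3(four arrows)3, and g(i) = 3(g(i-1) arrows)3;
    Graham's number is g64. Illustration only: do NOT iterate this."""
    arrows = 4
    while True:
        arrows = up(3, arrows, 3)   # g_i, the arrow count for g_{i+1}
        yield arrows

# The recursion is still checkable on a toy scale:
print(up(3, 2, 3))   # 3^^3 = 7625597484987, the tower height inside 3^^^3
```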
== Comparison ==
Is it bigger than googol? Googolplex?
Yes. It is even bigger than googolduplex, googoltriplex, and even googolcentuplex, defining googol(n)-plex as the number 10^10^10^10^...10^10^10^10^googol where there are a total of n 10's. 66.245.110.114 23:55, 6 Dec 2004 (UTC)
- Adding to that, one can easily find numbers larger than a googolplex, even though it is also inconceivably large. However, 3↑↑↑↑3 is larger, and can be written with fewer symbols. Remember that a googolplex is 10^(10^100), which requires 7 symbols to be written down. The above-mentioned number, written 3^^^^3, only requires 6. -- Daverocks 10:30, 8 August 2005 (UTC)
- BTW, isn't 3↑↑↑3 already larger than a googolplex? --FvdP 20:56, 19 August 2005 (UTC)
- Actually, FvdP, you're probably right. -- Daverocks 12:05, 30 August 2005 (UTC)
- Definitely right in fact; stacking new power layers is much more powerful than increasing the numbers of lower layers (which is why the procedure for G gets so very large very fast). For example, 10^10^10^10 is clearly larger than a googolplex = 10^10^100, since 10^10 > 100.
- Still informally but with more precision, here is another way to see it, by taking twice the logarithm on each side: 3↑↑↑3 = 3↑↑7625597484987, so log(log(3↑↑↑3)) is still roughly a power tower of 7625597484985 threes, while similarly, log(log(googolplex)) = log(log(10^10^100)) = 100. There is (almost) no more dispute which is greater ;-) You see here how the actual numbers in the lower exponentiation layers just seem to evaporate and play no significant role in the proof. --FvdP 22:08, 23 December 2005 (UTC)
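FvdP's double-logarithm trick is easy to reproduce numerically for the smallest cases without ever constructing the numbers. This sketch (mine, not from the discussion) compares a googolplex = 10^(10^100) with the towers 3↑↑4 and 3↑↑5:

```python
from math import log10

LOGLOG_GOOGOLPLEX = 100.0                 # log10(log10(10^(10^100)))

# 3^^4 = 3^(3^27): take log10 twice, using log10(3^k) = k*log10(3)
loglog_tower4 = log10(3**27 * log10(3))   # about 12.6, so 3^^4 < googolplex

# 3^^5 = 3^(3^^4): log10(3^^5) = 3^^4 * log10(3), whose log10 is
# approximately log10(3^^4) = 3^27 * log10(3), about 3.6e12
loglog_tower5 = 3**27 * log10(3)

print(loglog_tower4 < LOGLOG_GOOGOLPLEX < loglog_tower5)   # True
# ...and 3^^^3 = 3^^7625597484987 towers far beyond 3^^5.
```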
== Notation ==
Are you saying that even if you used notation like 10^10^100^100000000 etc., there isn't enough ink or hdd space in the universe to express Graham's number? 69.142.21.24 05:20, 5 September 2005 (UTC)
- That is correct. Graham's number is far too large to simply be expressed in powers of powers of powers. Even if one takes the exponentiation operator to a higher level, such as tetration or many (actually, an incomprehensible number of) hyper operators above this, it would still not be even close to sufficient to express Graham's number with all the matter in the known universe. -- Daverocks 12:32, 19 September 2005 (UTC)
== Largest number with an article ==
I assume from the above discussion that this is the largest number to have a serious Wikipedia article devoted to it. Is that correct? CDThieme 03:02, 8 October 2005 (UTC)
- Well, the greatest integer. We have plenty of articles about infinite numbers, and Graham's number is finite. Factitious 19:51, 10 October 2005 (UTC)
- Can numbers be infinite? I don't think so, a number is a number. Infinite is a concept. Rbarreira 19:24, 20 December 2005 (UTC)
- See Aleph-null, for instance. Factitious 22:32, 21 December 2005 (UTC)
- There are larger finite integers in math. In Friedman's Block Subsequence Theorem, n(4), the lower bound for the length of the string for four letters, is far, far larger than Graham's number. Mytg8 14:42, 4 May 2006 (UTC)
- Numbers are concepts. -- SamSim 20:06, 17 January 2007 (UTC)
== Years in Hell ==
Imagine being in Hell for a Graham's Number of years. (It's said that people stay in hell for all eternity...) If someone was taken out of Hell after that many years, what could they say and how would they feel? Think about and imagine this. --Shultz 00:15, 2 January 2006 (UTC)
- Nonsense. The universe is only 13,700,000,000 years old, a number that can be written in comma format with only 4 groups of digits. Graham's number is far too large to write in comma format with even millions of such groups. Georgia guy 01:10, 2 January 2006 (UTC)
- This reminds me of a scene in the 1976 Australian movie The Devil's Playground. The novelist Thomas Keneally played a priest who was leading young seminarians in a 3-day silent retreat, and he gave them a little spiel before the retreat began. He spoke about the length of eternity, and he likened it to a huge metal sphere, as large as the Earth, somewhere out there in space. Once every century, a small bird flies past and brushes the sphere lightly with its wing. When the sphere has been completely worn away by the action of the bird, eternity would hardly have even begun. The lesson presumably being, be good otherwise you're a very, very long time in Hell. Anybody care to work out how long this would take? Please state all your assumptions. This might be a hopelessly small approximation of the size of Graham's number, but it must be somewhere along the track. JackofOz 03:24, 19 May 2007 (UTC)
- On 10 seconds' reflection, I realised that even if the bird removed only one elementary particle each time it flew past, that would still only add up, eventually, to the number of elementary particles in something the size of the Earth. The number of elementary particles in the Universe is way, way higher than that, and Graham's number is way, way higher again. So, it is indeed a hopelessly small approximation. Scrap the calculation. Unless you just want some mental exercise, that is. :) JackofOz 03:42, 19 May 2007 (UTC)
- Shultz' proposition seems invalid to me, since 'hell' is a religious concept that is not necessarily bound to our universe. You'd have to define the concept 'time' in the context 'hell' first, which hardly seems possible in a meaningful way. And even if you do that, Georgia guy's reply is still invalid, because that definition of 'time' will not be comparable to our universe's definition of 'time'. (Yes, I know that the bible as well as other sources have citations that hint at hell being bound to our universe (for example that there is sulfur in hell), however I believe those sources merely tried to explain the unknown in known terms in order to make it more graspable. I believe hell is not bound to (but still may or may not interact with) our universe.) —Preceding unsigned comment added by 83.79.48.238 (talk) 18:30, 6 February 2008 (UTC)
== Letter ==
Since this is obviously too large a number to write down, is there a letter to represent it in common usage?
"Graham's Number" is used to refer to it.--Lkjhgfdsa 20:29, 19 April 2006 (UTC)
- There's nothing like e or π. If context of the discussion is known one might use G. Kaimiddleton 07:50, 20 April 2006 (UTC)
== Upper bound of something ==
You know, with this number being so large, how on earth was it proven to be the upper bound of that question?
- Maybe even more specific: does anyone know where to find the proof? I can find a zillion articles about Graham's number, how big it is and why. But the proof itself I can not find. Is it available in the public domain? Pukkie 07:37, 2 June 2006 (UTC)
- That's a good question. According to the magazine article that announced the discovery back in 1977, the proof was unpublished. As far as I know, it has never been published. Very odd. Of course, Friedman has never published his proof of n(4) either... :-) Mytg8 15:48, 2 June 2006 (UTC)
== Colloquial statement of the problem ==
Can someone clarify this for me? The problem connected to Graham's number is described as: "Take any number of people, list every possible committee that can be formed from them, and consider every possible pair of committees. How many people must be in the original group so that no matter how the assignments are made, there will be four committees in which all the pairs fall in the same group, and all the people belong to an even number of committees." What does this mean? I understand the first part, taking n people and listing all 2^n committees, and then considering all (2^n choose 2) pairs. I don't understand anything else except that all people are in an even number of committees. Thanks. :) CecilPL 20:57, 12 May 2006 (UTC)
- That's a tricky question, I find it hard to be both correct and colloquial. I'll give it a try:
"Take a number of people, list every possible committee that can be formed from them, and consider every possible pair of committees. Now assign every pair of committees to one of two secretaries. How many people must be in the original group so that no matter how the assignments are made there will always be a secretary with a group of four committees where all the people belong to an even number of these four committees." Pukkie 10:52, 7 June 2006 (UTC)
- I do think that's better. Any one object to it replacing the description in the article? --Michael C. Price talk 08:19, 23 June 2006 (UTC)
There is a serious problem with the 'colloquial statement', both in its old form and in the new one. It yields only a necessary condition for the four points to be coplanar, not a sufficient one. This means that, at least a priori, there is good reason to assume that a lower number is sufficient for guaranteeing a monochromatic 4-set satisfying the 'colloquial statement' than the original one.
I've tried to find the origin of the 'colloquial statement'. It is found also in the mathworld article on Graham's number, which refers to some popular articles I do not have access to. It also refers to the original article, in Transactions of the American Mathematics Society. I added a reference to this, but didn't find the 'colloquial statement' there. Could someone please search the source of it; since it is rather nice, I would prefer improvement to deletion.
A short but technical explanation of the trouble: Adopt the article notation. Note that there are 6 sub-2-sets of any given 4-set of vertices. Thus, let n=6, and choose the four vertices in such a way that each one of the six coordinates will be 1 in exactly one of these six 2-sets, and 0 in its complement. Explicitly, you may pick the four vertices A, B, C, and D with coordinates
- (1,1,1,0,0,0), (1,0,0,1,1,0), (0,1,0,1,0,1), and (0,0,1,0,1,1),
respectively. This set does not lie in one plane; if you consider the vectors B−A, C−A, and D−A, you'll find that they are linearly independent. Thus it is not a 4-set of the kind Graham and Rothschild discuss.
On the other hand, if we give the 'colloquial' interpretation, we should consider A, B, C, and D as four committees with three members each, formed within a set of six people (one for each coordinate). If the six people are named a, b,..., f, then we find that A consists of a, b, and c, which we briefly may write as A=abc. Similarly, B=ade, C=bdf, and D=cef. Thus, each person is a member of an even number (namely 2) of the committees.
This is not so interesting for the case n=6, of course; but the construction could easily be extended to many millions of 4-sets, if e.g. n=30. JoergenB 18:54, 26 September 2006 (UTC)
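JoergenB's n = 6 counterexample is small enough to verify by machine. The sketch below (helper names mine) checks both claims: the pairwise difference vectors have rank 3, so the four points are not coplanar, yet every coordinate ("person") lies in an even number of the four sets ("committees"):

```python
from fractions import Fraction

def rank(rows):
    """Rank of a small integer matrix via exact Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = (1, 1, 1, 0, 0, 0)
B = (1, 0, 0, 1, 1, 0)
C = (0, 1, 0, 1, 0, 1)
D = (0, 0, 1, 0, 1, 1)

diffs = [[p - a for a, p in zip(A, P)] for P in (B, C, D)]
print(rank(diffs))                                        # 3: not coplanar
print(all(sum(col) % 2 == 0 for col in zip(A, B, C, D)))  # True: even membership
```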
- You are right, I will delete the colloquial version. If someone has a good way to say 'in the same plane' colloquially we can put the corrected version back. Pukkie 08:20, 21 October 2006 (UTC)
== But??? ==
Think of this all hypothetically... What if one could create a Solar System-sized computer that was made ONLY to store Graham's number in exponential form? All calculations could be routed from a different computer. Can it be done? —Preceding unsigned comment added by Geobeedude (talk • contribs)
- No, it can't. Even the number of Knuth arrows is too large to be stored that way. --Ixfd64 18:23, 19 May 2006 (UTC)
- More to the point, if I recall my calculations correctly, the number of digits reaches beyond the number of particles in the universe by the 4th iteration. So by the time you hit the 64th, ... well let's put it this way. Let's say you stored it in decimal (it would be binary, but let's go with this for simplicity). You need at least one particle for each digit, because that's how the computer's memory works. Even if it's so much as an electron in an electrical field, that digit has to be "stored" by something. So you would have to use the entire universe's worth of particles just to store the digits of the 4th iteration - let alone the 64th. You would need an unimaginably large number of *universes* to even begin writing the number down. --Golbez 08:12, 20 June 2006 (UTC)
- No, even the first iteration, g1, is greater than the greatest number storable using every particle in the universe as a binary digit. Think about it: look at that giant monster of a cross-braced number the article states is equal to g1, and take the base-3 logarithm of it. That's the number of ternary digits it takes to store it, so the number of binary digits necessary is much greater. Now ask yourself, is that number, which is essentially indistinguishable from g1 because it is so gigantic, greater than 10^80? I rest my case.
In fact, all discussion in this thread up to this point, except that talking about Knuth's up arrow, can replace "G" with "g1" and still be essentially correct. Even this is far, far too great to begin comprehending in any normal sense. That is the size of the OPERATOR used in g2... —Preceding unsigned comment added by 24.165.184.37 (talk) 03:20, 11 March 2008 (UTC)
- Sorry, forgot to sign. Eebster the Great (talk) 00:08, 9 April 2008 (UTC)
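The storage claims in this thread can be quantified at the bottom of the tower; this sketch (mine) counts digits with logarithms rather than building the numbers:

```python
from math import log10

# 3^^3 = 3^27 is tiny: 13 digits.
print(len(str(3**27)))                 # 13

# 3^^4 = 3^(3^27) has floor(3^27 * log10(3)) + 1 digits: about 3.6
# trillion, already more than any real storage device holds, yet
# utterly negligible next to g1 = 3^^^^3.
digits_tower4 = int(3**27 * log10(3)) + 1
print(f"{digits_tower4:.3e}")          # about 3.638e+12
```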
== Numbers larger than Graham's number ==
It seems that Graham's number is no longer the largest number used in serious mathematical proofs. See some of the work by Harvey Friedman, for example. He mentions numbers like n(4) and TREE[3]. [1] [2] Should we mention these numbers in the article? --Ixfd64 18:28, 19 May 2006 (UTC)
- Maybe, but just mentioning them is a bit short. Also: I do not understand TREE[3]. How big is this in Conway chained arrow notation? Pukkie 11:36, 30 May 2006 (UTC)
- I read the above papers some time ago. As I recall, this number can only be described with more than 2^1024 symbols in any notation. Mytg8 16:45, 20 June 2006 (UTC)
- Well, you could create a new notation that asks for the number of symbols required in your case, couldn't you?? Georgia guy 16:49, 20 June 2006 (UTC)
What is n() or TREE()?
- Well, we could try Jonathan Bowers' array notation. --Ixfd64 22:35, 21 June 2006 (UTC)
Actually, in Graham's and Rothschild's original article, they prove much more general statements. The Graham number estimate is just the first one deducible from their proof, out of an infinite series. JoergenB 18:05, 26 September 2006 (UTC)
- According to the original 1977 Scientific American article that announced the number, Martin Gardner stated that the proof for what is called Graham's number was unpublished. AFAIK, the proof has never been published. I'm not familiar with that 1971 Graham and Rothschild paper; perhaps it was an earlier estimate but it's not the same number. A Usenet discussion of this topic: http://groups.google.com/group/sci.math/browse_thread/thread/3f2bec34954af48c/28b726a9fe028bcb?lnk=gst&q=%22graham%27s+number%22&rnum=3#28b726a9fe028bcb Mytg8 04:24, 27 September 2006 (UTC)
- I do not have access to the SciAm article, but certainly consider Graham's and Rothschild's as the original... When you write Martin Gardner stated that the proof for what is called Graham's number was unpublished, I hope you mean something like 'the proof of the fact that this number indeed is an upper bound'. (A number in itself does not have a proof, not even in Gödel's theory.) Does he really claim this to be unpublished; and doesn't he mention the G&R article Ramsey's theorem for n-parameter sets at all?
- The reference is interesting; I'll try to analyse the relationship between the original G&R number and the Gardner-Graham one as soon as I get the time.
The representations are different, but that doesn't automatically mean that the numbers are. Actually, as G&R define their estimate, it is a power of 2 rather than a power of 3, which does indicate that the numbers are not quite the same :-) However, even so, it would be a surprise if the older number was smaller, for obvious reasons. Thus, I'll have to correct my corrections; but we also have a potential candidate for a larger number than the GG one, from the horse's own mouth. JoergenB 23:03, 28 September 2006 (UTC)
- As I wrote in the posting that began the cited sci.math thread in 2004, the upper bound published by Graham & Rothschild in 1971 is much smaller than the one that appears to have been strangely attributed to Graham by Gardner in 1977. It seems obvious that Gardner's was a misattribution unless Graham & Rothschild had it wrong in 1971 -- and afaik no publication has made that claim. (In fact, Exoo's 2003 paper refers to the 1971 value -- not Gardner's -- as "Graham's number".) But maybe it's best to just accept the popular misattribution, and continue referring to the so-called "Graham's number" by that name, even though it's apparently not Graham's at all. --r.e.s. (Talk) 00:45, 29 September 2006 (UTC)
The correct thing to do is of course to go through the proof of G&R, and deduce the estimate from that :-(. Also, there is a slight vagueness in their article; what they write is precisely (my emphasis) '... the best estimate for N* we obtain this way is roughly ...'
Now, I do not know what 'roughly' means, without further investigations. I'll have to read the whole stuff, sometime, I fear.
Another question to you people who have the SciAm article: Could you please check whether the secretary parable is there, and if so, if there were other conditions added? 'Our' article is still defective, and I haven't yet got any reaction to my comment in the section on the colloquial statement of the problem (supra). It is clear that this cannot be left in an erroneous form, and I'd prefer a correction to a deletion. JoergenB 10:12, 29 September 2006 (UTC)
- I have an old Xerox copy, and here are some quotes from the SciAm article ("Mathematical Games", volume 237, pp. 18-28, November 1977) if they'll help anything. The article is on Ramsey graph theory, author Martin Gardner, and is informal, as you would expect from a popular journal. He goes into some history of Ramsey graphs, then p. 24 brings up Ronald L. Graham, "...one of the nation's top combinatorialists... (who) has made many significant contributions to generalized Ramsey theory..." unquote. Then a couple paragraphs later: ...this suggests the following Euclidean Ramsey problem: What is the smallest dimension of a hypercube such that if the lines joining all pairs of corners are two-colored, a planar K_4 will be forced? ...The existence of an answer when the forced monochromatic K_4 is planar was first proved by Graham and Bruce L. Rothschild in a far reaching generalization of Ramsey's theorem that they found in 1970. *(the 1971 paper?)* (continuing) Finding the actual number, however, is something else. In an unpublished proof Graham has recently established an upper bound, but it is a bound so vast that it holds the record for the largest number ever used in a serious mathematical proof. End quote.
- The 1971 G&R article indeed was 'received by the editors' 1970. Math articles may have considerable time lags.JoergenB 21:34, 29 September 2006 (UTC)
- Gardner then proceeds to describe the usual description of the number using Knuth's arrows, starting with 3^^^^3, continued for 2^6 layers (as he terms it). Back to quote: It is this bottom layer that Graham has proved to be an upper bound for the hypercube problem, unquote. No mention of secretaries, committees, etc. But wait, there appears to be another, earlier, estimate :-) http://www.math.ucsd.edu/~fan/ron/papers/78_02_ramsey_theory.pdf p.11? Mytg8 19:41, 29 September 2006 (UTC)
- Interesting! But this 'new old' G&R paper is markedly longer. Well, we have some reading to do, I guess. JoergenB 21:34, 29 September 2006 (UTC)
- One more link- http://groups.google.com/group/sci.math.research/browse_thread/thread/dfe28ba0cb00f7bc/5ade381fadaf1485?lnk=st&q=%22graham%27s+number%22&rnum=12#5ade381fadaf1485 In this 2002 topic and response to Exoo, author tchow says he asked Graham himself about the number and Graham said the theorem was unpublished.Mytg8 15:03, 30 September 2006 (UTC)
[edit] Graham's number tower image
Isn't there a convenient way of expressing Graham's number by simply writing out the 64-layer arrow tower? That will knock some sense into those people.
The first layer is 3^^^^3, which gives the number of arrows in the layer below it; that number (an insane number of arrows) in turn gives the number of arrows in the layer below that, and this recursive operation continues for 64 layers.
The number in the second layer is already way larger than the number of particles in the universe...
(And if I'm not mistaken, Graham's number is the upper bound for the number of dimensions a hypercube needs so that, when the lines joining its corners are 2-colored, a monochromatic planar K_4 (a "tetrahedron") is forced? (Grr...Wikipedia does not have an article describing the Ramsey graphs) Doomed Rasher 22:58, 3 October 2006 (UTC)
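The 64-layer recursion described above can actually be run if we shrink the base, because 2 with any number of up-arrows applied to 2 always gives 4. This toy sketch (the function name `arrow` is mine, not from the discussion) shows the exact layer structure with base 2 standing in for 3:

```python
# Toy model of the 64-layer construction described above.
# arrow(a, n, b) computes a ↑^n b (Knuth's up-arrow) by the usual recursion.
def arrow(a, n, b):
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return arrow(a, n - 1, arrow(a, n, b - 1))

# With base 3 this is hopelessly uncomputable, but 2 ↑^n 2 = 4 for every n,
# so with base 2 we can run all 64 layers and watch the recursion's shape.
g = arrow(2, 4, 2)       # toy "g1" = 2↑↑↑↑2 = 4
for _ in range(63):      # 63 more layers; arrow count = previous layer's value
    g = arrow(2, g, 2)   # toy "g_{i+1}" = 2 ↑^{g_i} 2
print(g)                 # stays 4 at every layer for base 2
```

Swapping the base back to 3 makes even the innermost call unfinishable, which is precisely the point of the tower picture.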
[edit] How large is 3^^^^3?
I've been trying to come to grips with Graham's Number and have been truly awestruck by how staggeringly, inconceivably huge it is. But it's obvious from the various questions that few people have any idea of the scale of it. I've tried working out some idea of 3^^^^3 but I can't get anywhere (probably due to my unfamiliarity with Knuth arrow notation). How big is g1? g2? How many in the sequence can be represented in some kind of mainstream notation? -Maelin 11:52, 12 October 2006 (UTC)
- Take a look at Ackerman_function#Ackermann_numbers where there is an example of 4^^^^4. Kaimiddleton 21:35, 14 October 2006 (UTC)
- Oops, I notice there's an error there, currently. Look at this edit for a correct (though poorly formatted) version. Kaimiddleton 21:48, 14 October 2006 (UTC)
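For a feel of how fast the arrows outrun ordinary notation, the first few tetration steps can be computed directly; this is my own illustration (variable names are mine) of the scale of 3↑↑4, still far short of 3↑↑↑3:

```python
import math

t2 = 3 ** 3    # 3↑↑2 = 27
t3 = 3 ** t2   # 3↑↑3 = 3^27 = 7,625,597,484,987
print(t3)
# 3↑↑4 = 3^(3↑↑3) cannot be printed, but its digit count can:
digits_t4 = math.floor(t3 * math.log10(3)) + 1
print(digits_t4)   # about 3.6 trillion digits
# 3↑↑↑3 is a tower of 3↑↑3 threes, and g1 = 3↑↑↑↑3 is beyond even that.
```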
[edit] question on notation
wouldn't g_2 be the same as 3^^^^^3 (that is, 3^^^^3 with one more arrow)? —The preceding unsigned comment was added by 199.90.15.3 (talk) 17:42, 11 December 2006 (UTC).
- No. Remember, the number of up-arrows in each term g_n is equal to the value of the previous term, g_(n-1). So we have g_1 = 3^^^^3, which is very big. Then g_2 = 3^^...^^3 with g_1 up-arrows; that is, there are g_1 up-arrows between the threes in g_2. That makes g_2 quite colossally big. Then g_3 has that many up-arrows between its 3s, and so on.
- With Knuth up-arrows, each new up-arrow blows you way out of the ball park. 3^^^3 is huge, but 3^^^^3 makes it look tiny by comparison. And in the sequence g_n by which Graham's number is defined, you're not adding one up-arrow each time, you're adding inconceivably huge numbers of up-arrows each time. And you do that 64 times. Graham's number is big. Maelin (Talk | Contribs) 22:06, 11 December 2006 (UTC)
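Maelin's point, that each extra arrow blows the previous value away, can be seen even with base 2, the only base small enough to compute more than a couple of steps. The function here is my own sketch of the standard recursion:

```python
def up(a, n, b):
    """a ↑^n b, Knuth's up-arrow operator, defined recursively."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return up(a, n - 1, up(a, n, b - 1))

# One extra arrow at a time, with the smallest interesting base:
print(up(2, 1, 3))   # 2↑3   = 8
print(up(2, 2, 3))   # 2↑↑3  = 16
print(up(2, 3, 3))   # 2↑↑↑3 = 65536
# 2↑↑↑↑3 = 2↑↑(a tower of 65536 twos), already far beyond any computer,
# and g2 adds not one extra arrow but g1 = 3↑↑↑↑3 of them.
```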
[edit] The current definition of g_1 is wrong.
Actually, g_1 is much, much larger than currently explained in the article. Please, someone fix it.
3^^^3 = 3^3^3^...^3 (there are 3^27 3's in the exponent)
3^^(3^^^3) = 3^3^3^...^3 (there are 3^3^...^3 3's here (there are 3^27 3's in the exponent))
g_1 = 3^^^^3 = 3^3^3^...^3 (there are 3^3^...^3 3's in the exponent (there are 3^3^...^3 3's in the exponent (...(there are 3 3's in the exponent)...))).
The last line above contains 3^3^...^3 parentheses (there are 3^27 3's in the exponent in this line.) --Acepectif 18:10, 17 January 2007 (UTC)
I found out why the current article defines g_1 wrong. The current definition uses only three arrows (3^^^3), while g_1 requires four. I'll try to fix it then. --Acepectif 20:19, 17 January 2007 (UTC)
Although my final result above is good in the sense that it doesn't contain any arrows, it looks too clumsy, using a more-than-astronomical number of parentheses. As it seems impractical (if not impossible) to use only exponentiation to express g_1, I decided to put some arrows there. --Acepectif 20:34, 17 January 2007 (UTC)
- I've been trying to represent that properly but I can't get the formula to display correctly. My best attempt is here. The brace at the side is way too tall but I can't figure out how to fix it. If anybody can see what the problem is please fix is and feel free to put it into the article. Maelin (Talk | Contribs) 01:02, 18 January 2007 (UTC)
- I've given up on getting it to render properly in TeX, so instead I doctored it up from the original output. I've put up an image of what I was originally trying to accomplish here. If anybody knows how to get it to display like this in TeX then please show us how. Maelin (Talk | Contribs) 06:46, 19 January 2007 (UTC)
- I like what Maelin is trying to do. In the meantime I've opted to include a second recursive step to explain things. It's not as succinct as I believe it could be, but it at least makes mathematical sense and (I think) conveys enough "bigness". -- SamSim 13:07, 18 January 2007 (UTC)
SamSim, your version is too abstract for me to put g_1 into some semi-conceivable form (I know that's the point, but I thought I had caught on to something). I may have fouled up a bit with my version, but is it true that 3^^^^3 = a power tower of 3^27 threes, 3^27 times = a power tower of 3205891132094649 threes, or is the actual number of threes too large to write? Ashibaka (tock) 18:59, 18 January 2007 (UTC)
- I'm afraid it's not true. 3^^^^3 is actually equal to a power tower of (a power tower of (a power tower of (... ... (a power tower of 3^27 threes) ... threes) threes) threes, where there are actually (a power tower of 3^27 threes) "power towers". As you can see this is pretty unwieldy. My crazy thing with f's is one way of expressing this; Maelin's LaTeX'd version is also correct, minor formatting difficulties aside - if that can be fixed, I am in favour of using it as a replacement. -- SamSim 13:51, 19 January 2007 (UTC)
I think this is all correct and rendered correctly now! Excellent work, folks. Hooray for maths. -- SamSim 14:23, 20 January 2007 (UTC)
What's the point in having such a verbose definition? Why isn't the plain Knuth-arrow form sufficient? If someone wants to find out what that actually means, they can go and look at the article on Knuth arrow notation. The only thing that obscenity serves to do is confuse people and clutter up the page. Inquisitus 09:20, 6 February 2007 (UTC)
- The point is to give people an idea of how large it is. It's not about defining Knuth arrow notation, it's about giving an impression of how large the first number in the g sequence is. Maelin (Talk | Contribs) 23:22, 6 February 2007 (UTC)
- In that case would it not be a better idea to keep the actual definition concise and readable, and have the current rendering as an aside, perhaps in a separate section? -- Inquisitus 10:23, 7 February 2007 (UTC)
- I don't think it's confusing, or that it clutters up the page. The equation does give the "concise and readable definition" with Knuth arrows. It's just an alternate and interesting way of looking at the number without arrows. Maybe the underbraces could be changed to overbraces so you read the expression more naturally (from the top down). I think it can be extended to represent not only G_1 but Graham's number itself, but I don't have the time to do it myself. Mytg8 15:46, 7 February 2007 (UTC)
I like the extra detail given in the definition. As an encyclopedia, wikipedia needs more to be explanatory rather than concise, as one might have in an advanced math text. However, I do think the word "threes" is weird and ambiguous. I think it should be removed. Other than that, I think expressing g1 with nested exponentiation is useful. Kaimiddleton 06:15, 9 February 2007 (UTC)
I've moved the large rendering to its own subsection to keep things tidier; it could probably do with a bit of a clean-up still, though. Also, I agree that the 'threes' and 'layers' should be removed from the rendering, as they make it rather confusing. Inquisitus 09:17, 13 February 2007 (UTC)
[edit] Incorrect
Note that 3^3^3 = 7,625,597,484,987.
This first term is already inconceivably greater than the number of atoms in the observable universe, and grows at an enormous rate as it is iterated through the sequence.
This is very wrong. Someone should fix this. The number of atoms in the observable universe is closer to 10^86.
May the wind be always at your back.
- The sentence you quoted wasn't referring to the number 3^3^3, but rather to g1. I've made this clearer in the article to prevent similar misunderstandings. Maelin (Talk | Contribs) 04:47, 12 March 2007 (UTC)
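For anyone tripped up by the notation in this section: exponentiation in these expressions associates to the right, which a two-line check confirms (this is just an illustration, not part of the original exchange):

```python
# 3^3^3 means 3^(3^3), not (3^3)^3:
right = 3 ** (3 ** 3)   # 3^27, the value the article quotes
left = (3 ** 3) ** 3    # 27^3, a common misreading
print(right)            # 7625597484987
print(left)             # 19683
```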
[edit] Why can't this be...
((6628186054241871761051728642144797485889866738756864194627932674204612481132879281240720140750840325559008576910490612741357798194746021808214 8510938844709284883675387902470250878557607543139203723695055306418868995491259871239807975904046447471772644936318562205668469072142054280062341134 6656785162817900551337542270334990205437212700131838846883)^7625597484987)^64
Easily said IMO (I'm not sure if I'm right, but it must be close :b). --hello, i'm a member | talk to me! 00:42, 11 July 2007 (UTC)
- I'm not sure where you got that number from, but it looks like you have misunderstood either the Knuth up-arrows or the recursive definition. If you take a number x written with up-arrows, and then add just one more up-arrow to make a new number y, this new number y is, in comparison to x, astonishingly enormous. Unless x is something very small (say, less than 50), the second number will be phenomenally huge. Up arrows make things get very big, very fast. Then the recursive definition means that in the first iteration, g1, we already get an inconceivably huge number. We then put that inconceivably huge number of up arrows into the second iteration, g2. One arrow makes things blow up rapidly. We're putting in an inconceivably huge number of up arrows. g2 is totally beyond the possibility of human comprehension. And then we repeat the process 63 more times. Graham's number is big.
- If you've got some programming experience, try writing a program to calculate g1 based on the recursive definition of up arrows. This should give you an idea of how big it is. Maelin (Talk | Contribs) 03:52, 11 July 2007 (UTC)
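A minimal version of the program Maelin suggests, using the textbook recursion for the arrows; it is worth running to see exactly where it stops being feasible (the function name is mine):

```python
def arrow(a, n, b):
    """a ↑^n b via the recursive definition of Knuth's up-arrows."""
    if n == 1:
        return a ** b
    if b == 1:
        return a
    return arrow(a, n - 1, arrow(a, n, b - 1))

print(arrow(3, 1, 3))   # 3↑3  = 27
print(arrow(3, 2, 3))   # 3↑↑3 = 7625597484987
# The very next call, arrow(3, 3, 3), asks for a power tower of
# 7,625,597,484,987 threes and will never return; g1 is arrow(3, 4, 3),
# and Graham's number stacks 63 further layers on top of that.
```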
[edit] Bailey Notation
The article states the Bailey Notation is 4?3&62, shouldn't this be 4?3&64 ? The page on Bailey Notation claims it is 4?3&63 but I think this is wrong too. That page seems confused about the number of iterations in the construction. Bailey Notation is AfD anyway so this may not matter. If everyone thinks I'm right and the page on Bailey Notation isn't deleted I'll correct it. Smithers888 23:13, 7 October 2007 (UTC)
[edit] Add "layers" to the Definition of Graham's number section
The first PNG in that section could be made clearer by adding "layers" after 64, similar to what's done under Magnitude of Graham's number. It took me ages to figure out what it meant, and how it was 'equivalent' to the second given definition. I tried to do it, but my attempts were futile. --63.246.162.38 21:25, 13 October 2007 (UTC)
- Done. --r.e.s. 22:23, 14 October 2007 (UTC)
- Thanks--63.246.162.38 11:13, 23 October 2007 (UTC)
[edit] Question
Very interesting article. While the number could not be expressed using exponentiation, due to there being too many digits in the exponent, I wonder if the number could be rendered by way of tetration? -- Anonymous DissidentTalk 11:12, 4 November 2007 (UTC)
- Tetration is expressed with two up-arrows; here we have many.--Patrick 11:21, 4 November 2007 (UTC)
- Too many to fit into the observable universe? -- Anonymous DissidentTalk 11:28, 4 November 2007 (UTC)
- Yes; i.e., to express the number of arrows we need 63 layers.--Patrick 07:38, 5 November 2007 (UTC)
[edit] Upper Most Calculation
While trying to work this out (I still don't believe it's not possible to write out. Maybe you just think it is; just remember, Fermat's Last Theorem does have a solution) I came to the conclusion that I need a bigger computer. Mine can only calculate up to 3^91023, which comes out to be 1.0185171539798541759013048468071 × 10^43429.
- That number has tens of thousands of digits, but it is still small enough that taking its base-10 logarithm just once yields a writeable number, which Graham's number is much too large for. Georgia guy (talk) 18:32, 7 February 2008 (UTC)
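The arithmetic behind both comments above can be checked quickly; this sketch assumes the number in question was 3^91023 (the figure quoted matches that reading):

```python
import math

# Number of decimal digits of 3^91023:
digits = math.floor(91023 * math.log10(3)) + 1
print(digits)                       # 43430 digits, matching ~1.0185e43429
print(int(91023 * math.log10(3)))   # one log10 brings it down to ~43429
# No fixed number of log10 applications does the same for Graham's number.
```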
- Agreeing with Georgia guy, a bigger computer will not help you with that. You have to understand that even if you manage to take all of the universe's approx. 10^80 atoms and manage to arrange them in the way that gives you the most powerful computer possible, it will still not be powerful enough to calculate Graham's number. Even if every single one of those atoms can display a million digits, you don't have enough atoms to display Graham's number. -- However, it's maybe not correct to state that "You cannot write down Graham's number", because at least in theory, every finite number can be written down using finite resources, and Graham's number is finite. The problem is that it isn't practically feasible, because the resources in time and matter exceed by very far anything that our universe offers. 85.0.177.231 (talk) 22:56, 9 February 2008 (UTC)
- I'd like to know where we came up with the number of atoms in an ever-expanding universe. It seems to be much too small a number, and in any case, it's just a guess since we've explored so little. Not only that, but one 70 kilogram human contains about 7e27 atoms[1], so 6.5 billion humans (not taking into account those who are more (or less) than 70 kg) contain quite a few atoms (4.55e37, by my count), not nearly 10^80, but then take into account the number of atoms in earth, then the solar system, etc... Take into account super dense masses, stars, not to mention atoms floating in space with no stellar masses associated with them. Then you can parse your statement: everyone says particles, not atoms, so wouldn't you have to count protons, neutrons, and electrons? Those numbers would make our hearts stop, if we could even wrap our tiny little brains around them. 75.44.29.49 (talk) 20:01, 12 February 2008 (UTC)
- An expanding universe does not imply that the number of atoms is increasing, but I'm not denying that possibility either. But since you wonder where this number came from, check Observable_universe. As you can read there, this number is only for the observable universe; I guess that makes quite a difference, but this all is just a very rough guess anyway. As for 'atom' vs 'particle', you might want to know that someone edited the article with the edit summary saying this: Changed "particles" to "atoms" in the leading paragraph to match the ending, as the number of particles (including virtual particles and Quantum Foam) could make a cup of tea contain infinite particles. jlh (talk) 00:32, 20 February 2008 (UTC)
[edit] It's not physically possible to display the 1st layer.
It's not possible to compare the first layer with anything physical (elementary particles, atoms, paper clips).
However, as a side note, there are ways to construct even bigger (but useless) numbers.
Like extending the iteration steps from 64 to 3^^^^4 times.
and taking the output from that operation (say, k)
and performing it on 3 ↑^(k-1) 3 (that is, 3 with k-1 up-arrows), and iterating it k times,
and performing the above sequence k times.
Anything's possible. —Preceding unsigned comment added by 203.116.59.13 (talk) 05:58, 15 February 2008 (UTC)
- But there's a limit as to what is useful, which is what makes Graham's Number special. 137.205.17.105 (talk) 17:10, 28 February 2008 (UTC)
[edit] Ridiculous understatement
The following sentence, from the introduction, is a ridiculous understatement:
"It is too large to be written in scientific notation because even the digits in the exponent would exceed the number of atoms in the observable universe so it needs its own special notation to write down."
While true, it is somewhat akin to saying that a googolplex is too large to write down in the form 10000...000 (that is, with every digit written out). Graham's number is incredibly much larger than this sentence implies. I'll try to come up with a suitable replacement, but if someone else has an idea it would be nice to put that in. 75.52.241.166 (talk) 02:29, 19 February 2008 (UTC)
- A very impressive statement (in the context of normal numbers) is not a ridiculous understatement.--Patrick (talk) 08:20, 19 February 2008 (UTC)
- I agree with you that it's a massive understatement, but then again, I don't think it's likely that one could ever come up with a phrase that is not a massive understatement. It's very very very difficult to grasp the magnitude of Graham's number. I, for one, am not able to fully grasp it and neither can the English language, in my opinion. This being said, I'm not against replacing that statement with something better, if you can find something; good luck! jlh (talk) 00:38, 20 February 2008 (UTC)
- I'll take a stab at it:
- "It is too large to be written in scientific notation, since the number of digits in the exponent would exceed the number of atoms in the universe. In fact, the number of times one would have to take the number of digits to reduce it to a reasonable size (it doesn't matter how you define 'reasonable size') is itself unimaginably large."
- This remains a dreadful understatement, but it's better. It will also hopefully get into some people's heads from this statement that 10^10^10^10... does not begin to describe Graham's number. --69.140.102.13 (talk) 21:34, 13 March 2008 (UTC)
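The "number of times one would have to take the number of digits" idea in the proposed wording can be tried on ordinary large numbers; this sketch (my own, with log10 standing in for digit-counting) shows how quickly familiar giants collapse:

```python
import math

def log_depth(x):
    """Count how many times log10 must be applied before x drops below 10."""
    count = 0
    while x >= 10:
        x = math.log10(x)
        count += 1
    return count

print(log_depth(10 ** 100))                     # a googol: just 2 steps
print(log_depth(3 ** 27))                       # 3^3^3: also 2 steps
print(1 + log_depth(3 ** 27 * math.log10(3)))   # 3↑↑4, via log10(3↑↑4): 3 steps
# For Graham's number, even this count of steps is itself unimaginably large.
```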
- I should perhaps clarify: if no one has any comment after a few days, I'll put it in. So if you have anything against putting it in, speak now (preferably). --69.140.102.13 (talk) 14:48, 14 March 2008 (UTC)
- I agree that it is a little vague. Do you have any suggestion for how to improve on it? Please, feel free to write your own version. I just thought I'd take the initiative, since no one else seemed to be willing to do so. --69.140.102.13 (talk) 17:08, 14 March 2008 (UTC)
- Alright, I'll take another crack at it. Placing this at the end of the paragraph instead of where it currently is:
- "There is no concise way to write Graham's number, or any reasonable approximation, using conventional mathematical operators. Even power towers (of the form a^b^c^...^z) are useless for this purpose. It can be most easily notated by recursive means using Knuth's up-arrow notation or the Hyper operator."
- Again, comments welcome. --70.124.85.24 (talk) 11:47, 25 March 2008 (UTC)
- In a couple days, if no one has any negative comments, I'll modify accordingly. So if you have any problems with it, please speak now to avoid editing wars. --70.124.85.24 (talk) 21:56, 28 March 2008 (UTC)
- Done. --70.124.85.24 (talk) 15:35, 30 March 2008 (UTC)
How about "number of cubic nanometers in the visible universe cubed"? I know it doesn't even get close but it's as large as can be intuitively understood. --222.101.9.201 (talk) 01:14, 17 May 2008 (UTC)
- As opposed to "the number of atoms in the universe"? This is like standing on a step ladder to get a closer view of the stars. One is 10^80, the other is 10^105. Even the number of different permutations to arrange all the universe's atoms in order, 10^(10^82), doesn't get us any closer. Owen× ☎ 14:04, 17 May 2008 (UTC)
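Owen's permutation figure can be reproduced with Stirling's approximation; a quick sketch (N here is the usual rough atom count, an assumed round figure, not a measured value):

```python
import math

N = 10 ** 80   # rough atom count of the observable universe (assumption)

# log10(N!) via Stirling: ln N! ≈ N·ln N - N
log10_fact = (N * math.log(N) - N) / math.log(10)
print(math.log10(log10_fact))   # ≈ 81.9, i.e. N! ≈ 10^(10^82) as stated
```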
[edit] Confusing sentence
"More recently Geoff Exoo of Indiana State University has shown (in 2003) that it must be at least 11 and provided proof that it is larger."
Is the "provided proof that it is larger" necessary? Isn't that implicit with "at least 11"?-Wafulz (talk) 03:34, 20 March 2008 (UTC)
- No, because it could be exactly 11. But if he provided proof that it was larger, couldn't you condense it to: "...it must be greater than 11"? -- trlkly 18:28, 21 March 2008 (UTC)
- They probably meant that he only has specific examples up to 11. Black Carrot (talk) 00:11, 12 April 2008 (UTC)
- Does anyone have a reference on that? That is, a reference other than his webpage? Black Carrot (talk) 00:19, 12 April 2008 (UTC)
[edit] 3^^^^3 is already "inconceivably large"
someone has to put the size of this number in words. I'm just starting to comprehend how big this number is, and it's driving me loopy. It's 3^3^3 how many times? aaaahhhh!! Please help me someone! --LeakeyJee (talk) 14:41, 2 June 2008 (UTC)
That's the problem, really. Its size is totally incomprehensible. If you try to break it down into a building-up process where you have a comprehensibly large increase at each step, you need to repeat it an incomprehensible number of times. Maelin (Talk | Contribs) 04:53, 3 June 2008 (UTC)
- ok, what if we raised the number of particles in the universe to the power of itself, then that number to the power of itself, then that number to the power of itself, etc. until you had done this as many times as there are particles in the universe? Are we getting close-ish here? --LeakeyJee (talk) 07:38, 8 June 2008 (UTC)
- Nope. The number you describe is less than 3^^^3, so you're not even really approaching g1, much less g64. If you can define it without multiple steps of recursion, chances are you're nowhere near G. Owen× ☎ 13:34, 8 June 2008 (UTC)
- Ummm... I hate to disagree, but LeakeyJee's number, denoting P as the number of particles in the universe, is larger than P^^P, which is much larger than 3^^^3. Still not very large compared to Graham's number, though. Here's LeakeyJee's number to just two levels (he/she specifies P levels): (P^P)^(P^P). Unfortunately, I don't know how to analyze the size of such a number.--70.124.85.24 (talk) 16:22, 8 June 2008 (UTC)
- Here's a more rigorous way to define LeakeyJee's number, N, in terms of P, the number of particles in the universe:
- F(X) = X^X
- N = F^P(P), i.e. F iterated P times starting from P
- Hope that helps.--70.124.85.24 (talk) 16:29, 8 June 2008 (UTC)
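At toy scale the F-iteration above can be run directly (using P = 2 instead of a cosmological count; this is just my own illustration of the recursion, not part of the thread):

```python
def F(x):
    return x ** x

# LeakeyJee's construction with P = 2: iterate F, starting from P.
a = 2
for _ in range(3):
    a = F(a)
    print(a if a < 10 ** 6 else "a %d-digit number" % len(str(a)))
# Prints 4, then 256, then notes that 256^256 has 617 digits.
```

Even at this scale the growth is steep, though the analysis further down the thread shows it stays far below towers built with up-arrows.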
- You are incorrect. Leakeyjee's number uses a left-associative power tower: ((...(P^P)^P)...)^P which is much smaller than P^^P. In fact, LeakeyJee's number is smaller than P^(P^P). If we correct LeakeyJee's description to be right-associative, like this:
- if we raised the number of particles in the universe to the power of itself, then itself to that power...
- Then we get P^^P, which is still tiny compared to 3^^^3. Owen× ☎ 22:07, 8 June 2008 (UTC)
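The associativity point is easy to see numerically with a tower of four 3s; the left-associative reading collapses to something tiny, while the right-associative one can only be handled through its logarithm (my own check, not from the thread):

```python
import math

# Left-associative vs right-associative towers of four 3s:
left = ((3 ** 3) ** 3) ** 3               # collapses to 3^(3·3·3) = 3^27
right_log10 = (3 ** 27) * math.log10(3)   # log10 of 3^(3^(3^3)) = 3^(3^27)
print(left)          # 7625597484987, a 13-digit number
print(right_log10)   # ≈ 3.6e12: the right tower has trillions of digits
```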
- The answer is NO! -- your (LeakeyJee's) number is much less than even 3^^3^^4, let alone 3^^^4 or the stupendously larger starting term g_1 (3^^^^3) in the recursion for Graham's number.
- Here's a quick analysis for the number in question, which is a_P in the sequence defined by
- a_0 = P
- a_(n+1) = (a_n)^(a_n).
- To get a very conservative bound, just notice that
- (b^^a)^(b^^a) < b^^(a+2) for any integers a > 0, b > 1.
- so
- a_0 = P
- a_1 = a_0 ^ a_0 = P^P
- a_2 = a_1 ^ a_1 = (P^P)^(P^P) < P^^4
- a_3 = a_2 ^ a_2 < (P^^4)^(P^^4) < P^^6
- a_4 = a_3 ^ a_3 < (P^^6)^(P^^6) < P^^8
- ...
- a_P < P^^(2P).
- Then, since P is presumed to be less than, oh, say 10^200 (< 3^^4), we have
- a_P < 3^^(P-1)
- because
- P < 3^^4
- P^P < (3^^4)^(3^^4) < 3^^6
- P^P^P < (3^^4)^(3^^6) < 3^^8
- ...
- P^P^P^...P^P (x P's) < 3^^(2(x+1))
- ...
- P^^(2P) = P^P^P^...P^P (2P P's) < 3^^(P-1).
- Thus we have the very conservative bounds
- a_P < 3^^(P-1) < 3^^P < 3^^3^^4 < 3^^^4.
- (BTW, P < 10^186 < 3^^4 even if we imagine P to be the number of little cubes that subdivide a big cube, where the big cube has edge-length equal to something like the "diameter of the universe" and the little cubes have edge-length equal to something like the Planck length.)
- --r.e.s. (talk) 20:08, 8 June 2008 (UTC)
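r.e.s.'s key inequality can be sanity-checked on the few cases small enough to evaluate exactly (`tet` is my name for b↑↑a; b=3 with a=2 would already need 3↑↑4, which is out of reach):

```python
def tet(b, a):
    """b↑↑a: a power tower of a copies of b."""
    r = 1
    for _ in range(a):
        r = b ** r
    return r

# Check (b↑↑a)^(b↑↑a) < b↑↑(a+2) on the exactly computable cases:
for b, a in [(2, 1), (2, 2), (2, 3), (3, 1)]:
    t = tet(b, a)
    assert t ** t < tet(b, a + 2), (b, a)
print("bound holds in all testable cases")
```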
- As for the original poster's idea that "someone has to put the size of this number in words" ... As others have mentioned, there probably is no way to convey a sense of the enormous size of numbers like g_1 (3^^^^3), let alone g_64. The mathematician H. Friedman has (somewhat arbitrarily, I suppose) suggested 2^^^^5 as a benchmark "incomprehensibly large" number, 2^^^^5 being the 5th Ackermann number in a particular streamlined version of the Ackermann hierarchy. I think an effectively equivalent benchmarking can be given in terms of 3^...^3: The size of 3^^3 is considered comprehensible, the size of 3^^^3 is in a kind of gray area of comprehensibility, and 3^^^^3 is incomprehensibly large. Although 3^^^^3 may be less than 2^^^^5 (I'm not sure), imo it's effectively in the same ballpark of literally being "incomprehensibly large".
- --r.e.s. (talk) 20:56, 8 June 2008 (UTC)