Talk:Ackermann function
From Wikipedia, the free encyclopedia
[edit] Error? Not.
This article has a mistake somewhere in the definitions. I wrote a C program based on what is here, and it simply goes in an endless loop for m = 4 (A(4,n)). It does not terminate, as the definition states. --Gesslein 16:58, 4 October 2006 (UTC)
- Nevermind. My computer is just really, really slow. It works now and I verified all the numbers below. --Gesslein 17:18, 4 October 2006 (UTC)
Here are a few values of the Ackermann function. Not so surprisingly, my program didn't get any further.
A(0,0) = 1 A(0,1) = 2 A(0,2) = 3 A(0,3) = 4 A(0,4) = 5 A(0,5) = 6 A(0,6) = 7 A(0,7) = 8 A(0,8) = 9 A(0,9) = 10
A(1,0) = 2 A(1,1) = 3 A(1,2) = 4 A(1,3) = 5 A(1,4) = 6 A(1,5) = 7 A(1,6) = 8 A(1,7) = 9 A(1,8) = 10 A(1,9) = 11
A(2,0) = 3 A(2,1) = 5 A(2,2) = 7 A(2,3) = 9 A(2,4) = 11 A(2,5) = 13 A(2,6) = 15 A(2,7) = 17 A(2,8) = 19 A(2,9) = 21
A(3,0) = 5 A(3,1) = 13 A(3,2) = 29 A(3,3) = 61 A(3,4) = 125 A(3,5) = 253 A(3,6) = 509 A(3,7) = 1021 A(3,8) = 2045 A(3,9) = 4093
A(4,0) = 13 A(4,1) = 65533 (Didn't use program for this one, but should be right.)
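For anyone who wants to reproduce these numbers, here is a direct transcription of the two-argument recursion in Python (a sketch: the memoisation and raised recursion limit are only there to make the small cases above fast; A(4,n) for n ≥ 2 remains hopeless no matter what):

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100000)  # the recursion gets deep even for small arguments

@lru_cache(maxsize=None)
def ackermann(m, n):
    """Ackermann-Peter recursion: A(0,n)=n+1, A(m,0)=A(m-1,1), A(m,n)=A(m-1,A(m,n-1))."""
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(3, 3))  # 61
print(ackermann(4, 0))  # 13
```

ackermann(4, 1) = 65533 is computable in principle with this code, but the recursion depth involved already strains a typical interpreter stack.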
Κσυπ Cyp 17:07, 18 Oct 2003 (UTC)
Hi, NIST gives a different version [1] of this function, which seems incompatible with the version given here. Are they just plain wrong? -- Pakaran 03:34, 25 Nov 2003 (UTC)
- Perhaps they are equivalent? Dysprosia 03:40, 25 Nov 2003 (UTC)
- I don't think so, not even if you decrease the arguments in our version. In particular, their definition gives perfect powers of two, where ours gives 3 less than the powers (e.g. wiki_ack(3,3) = 61 = 2^6 - 3, while theirs never takes on that value). Their version does snip off the uninteresting bottom few rows of the table, which may not be a bad idea, but it's not the "true" Ackermann function. It may or may not be asymptotically equal to that function with some change of arguments, I'm not sure on that point. -- Pakaran 03:58, 25 Nov 2003 (UTC)
- I've seen quite a few different functions called Ackermann functions, with anything from one through three arguments; about all they have in common was that they are easy-to-define recursive functions which grow too fast to be primitive recursive. I suggest we add a paragraph on these sort of 'variants' of the Ackermann function. 4pq1injbok 23:07, 24 Sep 2004 (UTC)
Please see section 14 below.
One of the table entries looks messed up under IE. Can someone with more knowledge of the pampering of IE fix this? Thanks. Pakaran. 20:55, 27 Feb 2004 (UTC)
It may be noted that A(4, 2), which appears as a decimal expansion in several web pages, cannot possibly be computed by recursive application of the Ackermann function.
How, then? --Fibonacci 21:06, 31 Jul 2004 (UTC)
- By using formulas for the A(0,n) through A(3,n), one shortcuts the recursion. If a program had to actually thread through the recursion, it would require way way too many steps.
- -- Rpresser 21:07, 2004 Sep 1 (UTC)
[edit] Pseudocode to Python...
Did anybody notice that, except for the absence of a few ':' characters, that pseudocode is valid Python? --83.146.34.206 06:23, 24 Sep 2004 (UTC)
- Well... I wrote it and designed the language and I don't even know Python. So go figure. I'm sure there are some other differences, if only the emboldened keywords and the ≠ symbol. Derrick Coetzee 23:52, 1 Oct 2004 (UTC)
[edit] Inverse
The inverse of the function f is less than 4 for any conceivable input size, so for practical algorithm analysis, it can be regarded as a constant.
- I take issue with the first half of the sentence. It's kind of imprecise. It should at least give the order of the number at which it changes to 5. Also there needs to be some mention of a practical use - the union-find data structure. Deepak 14:35, 24 Sep 2004 (UTC)
- That's not something I fully understand myself - it was based on a single paragraph in one of my college textbooks. I don't think we have an article on union-find in any event. F becomes 4 at A(4,4) which has a number of digits whose own number of digits cannot be written in the physical universe, correct me if I'm wrong. It becomes 5 at a number which is far bigger than that. The point is a real computer running union-find on an input with only one element per proton in the universe will have no problem, nor if it first expands each proton to a universe the size of our own a fair number of times. This would probably require an environmental impact statement or something :). Pakaran. 16:24, 24 Sep 2004 (UTC)
- Indeed. I remember what Aho-Ullman had to say: "\alpha [the inverse] eventually reaches 4, then 5 and so on in its unimaginably slow crawl to infinity." Let's at least write your description - "number of digits whose own digits cannot be written in the physical universe" (to be pedantic, we should say in base 10). That should be sufficiently impressive. -- Deepak
- A(4, 4) is A(3, A(4, 3)), or A(3, A(3, A(4, 2))) - which is to say 2 to the power of a number which is itself 2 to the power of a number which would require a good fraction of a day to read aloud. A(5, 5) is a horse of another color. The practicalities of storing an input the size of either in a PC's address space are left as an exercise. Feel free to add something along the lines we've discussed to the article. Pakaran. 17:50, 24 Sep 2004 (UTC)
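To make the discussion above concrete: definitions of the inverse vary, but one common choice is α(n) = the least k with A(k, k) ≥ n. A Python sketch (only usable up to n = A(3,3) = 61, since the next diagonal value A(4,4) is far beyond computation, which is exactly the point of the thread):

```python
def ackermann(m, n):
    # standard Ackermann-Peter recursion
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

def alpha(n):
    """Least k with A(k, k) >= n, one common form of the inverse Ackermann function."""
    k = 0
    while ackermann(k, k) < n:
        k += 1
    return k

print(alpha(7))   # 2, since A(2,2) = 7
print(alpha(61))  # 3, since A(3,3) = 61
# alpha(62) would need A(4,4) and will never finish on real hardware
```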
[edit] Introduction
Just a note: in the introduction it states, "In mathematics and computer science, the Ackermann function (also known as Ackermann-Peter function) is an important example in the theory of computation.", but doesn't say what the Ackermann function is an important example of. -Seth Mahoney 22:06, Sep 24, 2004 (UTC)
I think it is supposed to be an example of "A function of two parameters whose value grows very fast" according to [2]. Kirbytime 02:17, 24 March 2006 (UTC)
[edit] Intuitive meaning
If I understood the Aaronson article (under "References") correctly, the Ackermann function extends the line of operations "addition, multiplication, exponential" to "tetration, pentation" and so forth. So you have:
A(1) = 1 + 1
A(2) = 2 * 2
A(3) = 3 ^ 3 = (3 * 3) * 3
A(4) = 4 tetrated 4 = ((4 ^ 4) ^ 4) ^ 4
A(5) = 5 pentated 5 = (((5 tetrated 5) tetrated 5) tetrated 5) tetrated 5
...
The definition of the function in the Wikipedia article seems to be different, but the "Explicit description" section says something about "[...] iterated exponentiation, iteration of this operation and so on", so it sounds like the essence is the same in the two definitions.
I just stumbled across this article because it was featured, so I'm not a mathematician and thus don't really know what I'm talking about here! However, if the Ackermann function really has as simple an intuitive meaning as this, maybe there should be a table showing this, like the one I just wrote.
[edit] Alternative definition
At the end of the section about Inverse we can read:
- In particular, some modified functions simplify the expression by eliminating the −3 and similar terms.
Here's an example of a modified Ackermann function which simplifies the explicit formulas for each level in the hierarchy. This function is defined for positive integers m,n both starting at 1 instead of 0:
This function meets:
A(1,n) = 1 + 1 + ... (n 1's) ... + 1 + 2 = 2 + n
A(2,n) = 2 + 2 + ... (n 2's) ... + 2 = 2 * n
A(3,n) = 2 * 2 * ... (n 2's) ... * 2 = 2^n
A(4,n) = 2 ^ 2 ^ ... (n 2's) ... ^ 2 = 2 tetrated n
(powers are evaluated right-to-left as usual, and so should the rest of the operators be; I borrowed the terms "tetrated", "pentated", etc. from "Intuitive meaning" above)
A(5,n) = 2 tetrated 2 tetrated 2 ... (n 2's) ... tetrated 2 = 2 pentated n
A(6,n) = 2 pentated 2 pentated 2 ... (n 2's) ... pentated 2 = 2 hexated n
etc.
In particular, A(m,2) = 4 for all m, which implies that A(m, 3) = A(m-1, 4) for m > 1.
I think that the hierarchy is more easily understood this way.
-PGimeno 18:37, 26 Sep 2004 (UTC)
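The recursive definition referred to above appeared as an image and did not survive here. A reconstruction that is consistent with every value listed (the base cases are my assumption) is A(1,n) = n + 2, A(m,1) = 2 for m > 1, and A(m,n) = A(m-1, A(m, n-1)) otherwise. In Python:

```python
def mod_ack(m, n):
    """Modified Ackermann variant with arguments starting at 1 (reconstructed, see above)."""
    if m == 1:
        return n + 2                        # A(1,n) = 2 + n
    if n == 1:
        return 2                            # A(m,1) = 2
    return mod_ack(m - 1, mod_ack(m, n - 1))

print(mod_ack(2, 5))  # 10 = 2 * 5
print(mod_ack(3, 5))  # 32 = 2 ^ 5
print(mod_ack(4, 3))  # 16 = 2 tetrated 3
print(mod_ack(5, 2))  # 4, illustrating A(m,2) = 4 for all m
```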
- PGimeno, do you have a reference to the modified Ackermann functions? Why do you use the same character A? dima (talk) 13:27, 3 March 2008 (UTC)
[edit] m^n
The explanation of m^n as "m multiplied by itself n times" is not quite true. For example, it implies that m^1 means "m multiplied by itself once". I can't see an obvious way to fix this. Rewriting it as "1 multiplied by m n times" would be correct but maybe a bit obscure to the layman. Snottygobble 01:15, 20 Dec 2004 (UTC)
- I consider this to be one of those cases where being a little wrong in a few edge cases for the purpose of simplicity is okay. But I'll try and stick in a word or two as a reminder. Deco 03:29, 5 Feb 2005 (UTC)
- Why would this be incorrect? The empty product is 1, so I think the original explanation is still correct. 63.96.95.133 23:43, 15 May 2007 (UTC)
- A better wording would be "the product of n copies of m". JRSpriggs 07:49, 16 May 2007 (UTC)
[edit] Possible vandalism - please check
User 213.94.207.61 made a change months ago: here.
His other edits were subtle vandalisms - he changed dates a bit and these changes didn't get caught for a very long time. (all his edits)
Can someone check whether the change in Ackermann function above is or isn't valid? Pavel Vozenilek 12:58, 26 Mar 2005 (UTC)
It's redundant, m is always greater than 0 there. I'll remove it. Thank you for pointing it out. RJFJR 13:45, Mar 26, 2005 (UTC)
[edit] The Ackermann Recursive Function Antinomy
The Ackermann’s ‘function’ hardly counts as a function --- its array or table of values is merely a perverse pathological maze. Indeed, it blatantly demonstrates that the clearly ill-defined Ackermann’s function is an antinomy or self-contradiction paradox just like:
(1) Cantor’s diagonal argument [the sequence of diagonal terms is actually an inherent list-inclusion-and-imposition-of-order condition for the row-listing so that the anti-diagonal-terms expression is indubitably excluded from the row-list]; or
(2) Cantor’s bone-of-contention “set of all the elements of an infinite set which are not contained in their respective images for a presumed one-to-one correspondence between the elements of the given infinite set and the members of its power set [see Wikipedia article and discussion on Cantor’s Theorem]; or
(3) the ‘liar’ paradox [“This statement is false” — which is both true and false at the same time].
The following is a very simple counter-argument to the generally accepted claim that the Ackermann’s function is recursive but not primitive recursive:
(1) Clearly, the first row of the Ackermann’s function values table or array already enumerates the natural numbers — A(0,n) = n + 1 — or, recursively: A(0,0) = 1, A(0,n) = A(0,n-1) + 1.
(2) The Ackermann’s function has N × N as domain and N+ as range, where N is the set of all natural numbers and N+ is the set of all positive natural numbers. Therefore, for all natural numbers m > 0 and n ≥ 0, A(m,n) = z + 1 = A(0,z) where z is some natural number (which can be readily obtained from a non-recursive definition of the Ackermann’s function — say, using the hyper operator). That is:
A(m,n) = A(0,z) = A(0,A(0,z-1)) = A(0,A(0,A(0,z-2))) = A(0,A(0,A(0,A(0,z-3)))) . . . = A(0,A(0,A(0,A(0, . . . ,A(0,0)))))
— which is the standard primitive recursive successor function on the natural numbers. In other words, every Ackermann’s function value is primitive recursively and not primitive recursively defined at the same time in the same respect — hence, its very definition indeed violates the principle of non-contradiction so it is untenable ab initio.
For example, the following is a primitive recursive definition of the Ackermann’s function:
A(m,n) = 1 if m = 0, n = 0
= A(0,n-1) + 1 if m = 0, n > 0
= A(0,hyper(2,m,n+3)-4) otherwise
where the hyper operator [see Wikipedia article on “Hyper operator”] for m > 0 is defined as:
hyper(a,1,b) = a + b
hyper(a,2,b) = a x b
hyper(a,3,b) = a ^ b
hyper(a,4,b) = a ^^ b [see Wikipedia article on “Tetration”]
. . .
hyper(a,m,b) = a ^^...^ b (with m - 2 arrows)
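Whatever one makes of the argument, the closed form it leans on is a genuine identity for the standard Ackermann-Péter function: A(m,n) = hyper(2, m, n+3) - 3 for m ≥ 1. A Python sketch that checks it on small values (the base cases of hyper here are the usual ones: hyper(a,2,0) = 0 and hyper(a,m,0) = 1 for m ≥ 3):

```python
def hyper(a, m, b):
    # hyper(a,1,b) = a+b; each higher level iterates the level below it
    if m == 1:
        return a + b
    if b == 0:
        return 0 if m == 2 else 1
    return hyper(a, m - 1, hyper(a, m, b - 1))

def ackermann(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

# A(m,n) = hyper(2, m, n+3) - 3, checked over a small grid
for m in range(1, 4):
    for n in range(5):
        assert ackermann(m, n) == hyper(2, m, n + 3) - 3
assert ackermann(4, 0) == hyper(2, 4, 3) - 3  # 13 = 16 - 3
print("identity holds on the tested range")
```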
(3) Therefore, to avoid the self-contradiction, one should define the first row Ackermann’s function values from an already known recursive but not primitive recursive function — say, A(0,0) = a > 1 and, for n > 0, A(0,n) = f(A(0,n-1)) — for example, a = 10^100 and f(A(0,n-1)) = A(0,n-1)^A(0,n-1) — but this undercuts Ackermann’s function’s posturing as the simplest recursive but not primitive recursive function.
It must be noted that all the Ackermann function (indeed, a many-to-one relation) values do not follow a recursive single sequence — that is, they cannot be enumerated such that the later values are obtained only from earlier values in the sequence.
Moreover, the abstract algebraic commutative ring of integers or field of rational or real numbers has only addition and multiplication (as well as their respective inverse) operations initially defined. Exponentiation (that is, iterated multiplication) is also well-defined over these number systems but tetration (see Wikipedia article) or non-standard iterated exponentiation (the exponentiation is done at the deepest level first — that is, a right-associative operation) is not well-defined (it violates the laws of exponents for these standard number systems — that is, its operation is not properly derived from addition and multiplication). The fast growth of the Ackermann’s function values beginning with the A(4,n) row is attributable to the fact that their definition involves non-standard iterated exponentiation which belongs to a different number system than where primitive recursive functions are defined.
BenCawaling@Yahoo.com [25 April 2005]
- Hooray, a crank! Here's why you're wrong:
- The Ackermann function is a function, if you define a function (like every other mathematician) to be a set of pairs. It has a procedure for computing it which provably always halts, which is enough to prove that the function exists and is unique.
- It's not primitive recursive, and that's not "generally believed", that's a proven theorem - therefore any "counterargument" you offer is fallacious before I read it. The fact that you can expand it in terms of its own values in ways other than the recursive definition is true but irrelevant.
- Cantor's diagonal argument is also not a paradox.
- Deco 08:01, 25 May 2005 (UTC)
[edit] Definition and properties - "more concise"
The text says:
I'm not convinced these add anything: the Haskell example is identical to the function pseudocode, except that all the verbose keywords get swallowed up by the ease of defining functions in Haskell, and the use of multimethods to shift the burden of argument testing from the code to the compiler. The Scheme example seems a direct copy of the pseudocode except that it swaps the names of the two arguments (and again minimizes the need for keywords).
This is not a Perl golf competition - do these examples really contribute to the article at all? Hv 15:47, 13 August 2005 (UTC)
- I agree. The Haskell one is elegant, but it's cryptic to people who don't know Haskell (read: 99% of readers), and they really disrupt the flow of the introduction, which has enough junk already. I removed them. Deco 21:30, 14 August 2005 (UTC)
- I'm familiar with both the languages, but I think that the Haskell version should be left in. It's so concise and elegant. If you can understand the wiki pseudocode, you would be able to understand the Haskell notation of the function. I would say it is even easier to understand than the wiki one. I think you should keep the Scheme version out, though; while it is easy to follow, it is more cryptic than the original. - Hahnchen 03:58, 22 August 2005 (UTC)
- Okay, that's fine. One can at least say that the Haskell version doesn't cost much in terms of space. But let's not expand it beyond that. If necessary, the partially iterative pseudocode can be removed also. Deco 05:22, 22 August 2005 (UTC)
- OK, I have restored the Haskell version and removed the partially iterative pseudocode version. —Ashley Y 23:06, 3 January 2006 (UTC)
"One surprising aspect of the Ackermann function is that the only arithmetic operations it ever uses are addition and subtraction of 1" If you define each term m+1 in terms of m and n, then the function is defined entirely in terms of addition... 137.205.8.2 10:25, 15 March 2007 (UTC)
[edit] First non-PR
The article says "...simple example of a recursive function that is not primitively recursive." It was the first such example known, wasn't it? (IIRC from class 20 years ago) Bubba73 (talk), 03:41, 8 December 2005 (UTC)
- Nope. As the article explains, the little-known, independently-discovered Sudan function slightly predates it (at least if you trust my source). Other such functions were published before that but never proven to be non-primitive-recursive until later. Deco 03:45, 8 December 2005 (UTC)
- There's no article on the Sudan function. RJFJR 21:34, 8 December 2005 (UTC)
- This is a true fact, probably because it's so little-known. Anyone with access to the original paper by Sudan can write it though. Deco 22:56, 8 December 2005 (UTC)
[edit] Gabriel Sudan invented first this function
I read in the following web site: http://mathforum.org/epigone/math-history-list/smixsmeeblu
Subject: Ackermann vs. Sudan function: priority of discovery [was:Ackermann - or Budan or ?] Author: Bill Dubuque <wgd@martigny.ai.mit.edu> Organization: MIT Date: Fri, 12 Sep 1997 08:09:48 -0400
The following message is a courtesy copy of an article that has been posted as well.
mac@abacus.concordia.ca (JOHN MCKAY) writes:
| Can someone assess who originated the so-called "Ackermann fn"?
| It appears it may not be Ackermann.
Cristian Calude has written a number of papers on the history of the Ackermann and Sudan functions, e.g. see
Calude, Cristian; Marcus, Solomon; Țevy, Ionel. The first example of a recursive function which is not primitive recursive. Historia Math. 6 (1979), no. 4, 380-384. MR 80i:03053 03D20 01A60
Chronologically, Sudan's function is the first example of a recursive but not primitive recursive function (Bull. Math. Soc. Roumaine Sci. 30 (1927), 11 - 30; Jbuch 53, 171). Ackermann's work was published slightly later (Math. Ann. 99 (1928), 118 - 133; Jbuch 54, 56). Both were students of Hilbert, and were working on a problem posed by Hilbert, and were acquainted with each other's work. Sudan's function extends to ordinals and majorizes Ackermann's function (except at a single point). As Smorynski says in his book Logical Number Theory
independently, to two of Hilbert's students, Wilhelm Ackermann and Gabriel Sudan. Although they gave essentially the same recursion, Sudan worked with functions of transfinite ordinals and Ackermann with functions of natural numbers, whence Hilbert cited Ackermann and Ackermann's is the name associated with the resulting functions.
The paper cited above also has speculations as to why Hilbert and Bernays did not mention Sudan's construction.
According to MR 82k:03061, Sudan's function is as follows
F_0 (x, y) = x + y
F_{n+1} (x, 0) = x
F_{n+1} (x, y+1) = F_n ( F_{n+1} (x, y), F_{n+1} (x, y) + y + 1 )
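Reading the recurrence as F_0(x,y) = x + y, F_{n+1}(x,0) = x, and F_{n+1}(x,y+1) = F_n(F_{n+1}(x,y), F_{n+1}(x,y) + y + 1), a direct Python transcription (feasible only for tiny arguments, since the function is not primitive recursive and grows explosively):

```python
def sudan(n, x, y):
    """Sudan's function F_n(x, y): recursive but not primitive recursive."""
    if n == 0:
        return x + y
    if y == 0:
        return x
    z = sudan(n, x, y - 1)         # F_n(x, y-1)
    return sudan(n - 1, z, z + y)  # F_{n-1}(z, z + (y-1) + 1)

print(sudan(0, 3, 4))  # 7
print(sudan(1, 3, 3))  # 35
print(sudan(2, 1, 1))  # 8; values blow up very quickly from here
```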
[edit] Sub expression within table
In the table, each line has an entry of the form f(2,(n+3)) - 3 in the n column. Whilst it is a "fudge" to get the right answer, would it be inappropriate to write the first line as 2^0 + (n+3) - 3, which evaluates to the present n+1? -- SGBailey 12:29, 29 January 2006 (UTC)
[edit] physical universe
"In fact, α(n) is less than 5 for any conceivable input size n, since A(4, 4) has a number of digits that cannot itself be written in binary in the physical universe."
- This sentence is one of at least seven comparisons to the size, in number of elementary particles, of the physical universe. I realize the size of the universe can be used as a bound on what is computable, based on memory requirements, by any "real", physical computer, or as a bound on what is expressible in some particular formal language. But beyond that, is the size of the universe just for analogy, to express how impressive the rate of growth of the function is, or does it actually have some bearing in mathematics?
- On a related note, what is meant by "conceivable" in the above sentence? I don't think I've ever heard of an entity that is definitely not conceivable, although I'd imagine there is some paradox out there formulated specifically for this purpose. Formally, is the range of α(n) actually bounded above by 5? How can that be, if f(n) = A(n,n) is defined for all natural numbers?
- I would suggest changing "for any conceivable input size" to "for any input size that is scribable in decimal notation." Anyway, I enjoyed the article very much. Thanks a lot! --Rwehr 20:06, 15 April 2006 (UTC)
- Please go ahead and edit the page. Kaimiddleton 21:02, 15 April 2006 (UTC)
- Also, has anyone done any work on the memory requirements of Ack? It would seem that if you have a smart compiler, that does save the intermediate values, those memory requirements also get pretty large. Looks like a space-time tradeoff to me. It would be interesting to see any papers on this, if anyone is aware of them. That info should be included in the article. Sword 05:04, 15 July 2006 (UTC)
[edit] Definitions and "versions"
The Wikipedia article on Ackermann's function does not have Ackermann's function as he originally defined it. The NIST's DADS website also has an article on Ackermann's function. Unfortunately, it too does not show Ackermann's original definition, but it does have two links to the "versions" of Ackermann's function under the "more info" section. The link to Cowles and Bailey's (hereafter C&B) webpage seems to be the more informative of the two (Munafo's is just the math definitions without the history). C&B's research looks like it's been directly taken from a BBS article, so you'll have to ignore the headers. C&B's webpage shows that the current definitions on the NIST website, MathWorld, and Wikipedia are all R. Peter's 1935 version of Ackermann's function.
Ackermann's original function seems to be a lot more general than the other two variable functions that are presented on C&B's website. It is a three variable function which maps the common addition, multiplication, exponentiation, tetration, ... to a whole (including 0) number, which is the first parameter. It, of course, has the following two whole number input parameters that R. Peter's and the other two variable functions take in as its second and third parameters. This would make it use Polish notation, if the parameters were written and used from the left to right. Also, Herbert's version seems to be the most refined, and I would suggest using it to introduce the reader to Ackermann's function.
[edit] Reference links in History section
Several links in the History section take the form '#vonHeijenoort|something', taking the user to the list of external references. Is it better to link to the wikipedia article if we have one? I changed one link (Hilbert's program) before realising there were several, so thought I'd ask! --- Richard CHSTalk 12:48, 26 June 2006 (UTC)
[edit] Extending to transfinite ordinals
One can extend the Ackermann function to the transfinite (for ordinals which have a distinguished fundamental sequence) as follows:
- A (0, n) = n+1.
- A (α+1, 0) = A (α, 1).
- A (α+1, n+1) = A (α, A (α+1, n)).
If λ is a limit ordinal with the fundamental sequence which takes k to λ_k, we let:
- A (λ, n) = A (λ_{n+1}, n+1).
See Large countable ordinals#Fundamental sequences for the Veblen hierarchy. JRSpriggs 09:22, 20 August 2006 (UTC)
- Let me clarify that it is only the first argument of the Ackermann function which is being extended to infinite ordinals. The second argument and the value are still limited to natural numbers. However, I think (but have not yet proved) that by using this extension to ordinals the Ackermann function can surpass the Conway chained arrow notation as a means of naming large natural numbers. JRSpriggs 04:46, 21 August 2006 (UTC)
- Specifically, A (ω^ω+1, n) should rise faster as a function of n than any sequence of Conway chained arrow notations which has a relatively simple definition, such as n→n→...→n with n copies of n. JRSpriggs 04:55, 21 August 2006 (UTC)
[edit] Please do not compare large numbers to the physical world
People who do not have something useful to say about this article seem obsessed to insert some comparison between large numbers and the "number of atoms in the universe" with perhaps some silly function applied. Please stop. It might be entertaining, but it is not appropriate encyclopedic information. --Farever 23:08, 2 February 2007 (UTC)
- Er, I think this is actually a useful way of demonstrating that the values taken on by the function are very large, and more importantly, that computations taking resources proportional to these numbers become rapidly impractical. I don't think it's silly. Dcoetzee 22:24, 20 July 2007 (UTC)
[edit] A(5, 1) in the table
I note that Susan Stepney's table chooses to always list A(5,1) as A(4,65533), thus the term appears four times in her table. Our current table instead uses A(5,1) three times, which I think is distracting because it breaks the pattern that is otherwise clear in row 6. Another possibility is using A(5, A(6, 0)) as the result for A(6, 1). Do you guys have a preference?--199.33.32.40 01:35, 3 February 2007 (UTC)
[edit] GA?
This is feeling like it might be a GA once again. I will poll some of the Math WikiProject guys and get some feedback. Then maybe some PR and on to GAN!--70.231.149.0 02:15, 4 February 2007 (UTC)
[edit] Corrected A(5,n) and A(6,n)
I hope it's correct now. It was false before.
[edit] Inverse Ackermann function
I added two external links on the inverse Ackermann function (one to a page of mine, one to a PDF presentation by R. Seidel). The inverse Ackermann function should get its own wikipedia page. Gabn1 09:32, 22 February 2007 (UTC)
[edit] necessary comments?
"(this presentation is due to Rózsa Péter):"
Is this really important? No offense to anyone, but I think Rózsa Péter should be credited only as a scientist, not a Wikipedia editor (maybe on her own page, but this one seems a bit excessive...though not quite vanity, but perhaps self-promotion...) —The preceding unsigned comment was added by 128.32.42.26 (talk) 23:10, 27 February 2007 (UTC).
- The reference is not to a WikiPedia user, but to the mathematician that invented that form of the function - hence the alternative name "Ackermann-Péter function" in the first paragraph. I'm sure the wording could be improved to make that clearer, but I'm not sure exactly how. Hv 02:21, 1 March 2007 (UTC)
[edit] Termination of function
The explanation of why this function always terminates is confusing in my opinion. At least, it does not succinctly convince the reader that the function terminates. It states:
"The recursion is bounded because in each recursive application either m decreases, or m remains the same and n decreases. Each time that n reaches zero, m decreases, so m eventually reaches zero as well. (Expressed more technically, in each case the pair (m, n) decreases in the lexicographic order, which preserves the well-ordering of the non-negative integers.) However, when m decreases there is no upper bound on how much n can increase — and it will often increase greatly."
I don't think that the "more technical" explanation needs to be there as it interrupts the explanation. Furthermore, the "m decreases, so m eventually reaches zero" part is followed by the part stating that if m decreases there is no upper bound on how much n will increase, so it wouldn't follow that m eventually reaches zero without further inspection. I can't offer a better explanation, but I think it could be done. —The preceding unsigned comment was added by Sam Brightman (talk • contribs) 15:58, 2 April 2007 (UTC).
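One way to make the termination argument tangible without the technical aside is to instrument the recursion and assert, at every recursive call, that the argument pair strictly decreases in lexicographic order (Python tuples already compare lexicographically, so the check is one line; a sketch):

```python
def ackermann(m, n, caller=None):
    # 'caller' is the (m, n) pair of the invoking call; every recursive step
    # must produce a lexicographically smaller pair, which is why the
    # recursion terminates even though n can grow enormously when m drops
    if caller is not None:
        assert (m, n) < caller, "argument pair failed to decrease"
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1, (m, n))
    return ackermann(m - 1, ackermann(m, n - 1, (m, n)), (m, n))

print(ackermann(3, 3))  # 61, with the decrease checked at every call
```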
[edit] Importance (or priority) of Ackermann function
I disagree with the down-grading of the Ackermann function from "mid" importance to "low" importance. It is quite important in understanding the distinction between primitive recursive functions and total recursive functions generally. See Primitive recursive function#Relationship to recursive functions, especially the last sentence which says "... a function is primitive recursive if and only if there is a natural number m such that the function can be computed by a Turing machine that always halts within A(m,n) or fewer steps, where n is the sum of the arguments of the primitive recursive function.". JRSpriggs 10:13, 10 June 2007 (UTC)
- It was definitely a borderline decision, and can of course be changed: there is always a bit of subjectivity in this kind of thing. I did appreciate the fact that the Ackermann function is important in mathematical logic as an example of a recursive function which is not primitive recursive, although I wasn't aware of the more subtle point which you quote. It would be nice to mention this fact in this article, which at the moment doesn't seem to bring out the mathematical logic aspects as well as it might. Note though, that "Low-priority" is not intended to be at all dismissive: for comparison, Area of a disk is also Low-priority, and this appears to be accepted by editors. See Wikipedia:WikiProject Mathematics/Wikipedia 1.0/Assessment and Wikipedia:WikiProject Mathematics/Wikipedia 1.0/Importance for more information. Geometry guy 10:46, 10 June 2007 (UTC)
[edit] relation to primitive recursive functions
We need some citations and more information on the relationship between Ackermann's function and primitive recursive functions. David.Monniaux 03:55, 17 June 2007 (UTC)
[edit] 4th Ackermann number
I fixed the entry for the fourth Ackermann number some time ago, and it got reverted by someone (probably carelessly). I'm placing the incorrect text here:
An attempt to express the fourth Ackermann number, 44, using iterated exponentiation is not practical but it can be expressed using tetration in three nested layers as shown below. Explanation: in the middle layer, there is a tower of tetration whose full length is (already a large number) and the final result is the top layer of tetrated 4's whose length equals the calculation of the middle layer.
The following contains the correct expressions: notice that it actually uses tetration as claimed:
An attempt to express the fourth Ackermann number, 44, using iterated exponentiation as above would become extremely complicated. However, it can be expressed using tetration in three nested layers as shown below. Explanation: in the middle layer, there is a tower of tetration whose full length is and the final result is the top layer of tetrated 4's whose full length equals the calculation of the middle layer. Note that by way of size comparison, the simple expression already exceeds a googolplex, so the fourth Ackermann number is quite large.
Sorry I can't remember my password to log in and sign this. I'll sign it later. Ok: signed by my logged-in ID now: Kaimiddleton 21:37, 20 July 2007 (UTC)
[edit] Metaprogramming in C++ example bad
Hello. I just removed a C++ metaprogramming example. It was designed in vain - the templates are highly ambiguous. Here is a transcript of the contents:
 template<int x, int y> struct ackermann { enum { v = ackermann<x-1, ackermann<x, y-1>::v>::v }; };
 template<int y>        struct ackermann<0, y> { enum { v = y+1 }; };
 template<int x>        struct ackermann<x, 0> { enum { v = ackermann<x-1, 1>::v }; };
 template<>             struct ackermann<0, 0> { enum { v = 1 }; };
The second and third are equivalent. In a compiler that ignores the name, you'd wind up with a conflict. It doesn't work with GCC (at least in my test) - PGSONIC 00:59, 17 August 2007 (UTC)
[edit] Remove most of the implementations?
I don't see a reason to show the function implemented in a dozen computer languages. It doesn't do anything to add to the understanding of the function and is only of cursory relevance to the article. In particular "Non-Recursive Ackermann in C++" seems more like a geek-badge than encyclopedic content.
There is no point in showing that a recursive function can be implemented with a stack, as that is how recursion is implemented in computer languages anyway.
Interestingly, although "The first implementation in a programming language was in Fortran in 1964, see H. Gordon Rice[13], Recursion and iteration, Commun. ACM, 8(2), 1965, pp. 114--115.", no Fortran implementation is even given (let alone the first). If any computer implementation of the function were relevant to the article, it would be that one, and it isn't even provided!
My recommendation is to remove the section or to have only the C definition. 69.121.109.152 19:58, 24 August 2007 (UTC)
- I agree that having several implementations doesn't add much content, and it may also encourage people to add other implementations. IMHO a good choice would be to quote Rice's original Fortran implementation or a pseudocode version and remove all other implementations. I don't know if there's a rule that says articles should not be contests? Ismaelbej 17:57, 18 September 2007 (UTC)
-
- Agreed. The various code examples clutter up the page, and are largely irrelevant anyway -- the whole point is that this function _can_ be computed but it isn't feasible to do so except for a tiny subset of inputs. --Ott2 18:32, 21 September 2007 (UTC)
[edit] Graham's number
Hello - I just came by this article and noticed a statement that I don't believe is correct (underlined):
- Despite the large values occurring in this early section of the table, some even larger numbers have been defined, such as Graham's number, which cannot be written with any small number of Knuth arrows. This number is constructed with a technique similar to applying the Ackermann function to itself recursively.
To my knowledge, iteratively repeated power towers as used in the definition of Graham's number are expressible with Ackermann functions having 5 as the first argument: a first argument of 3 is on the order of powers, 4 on the order of power towers, and 5 on the order of repeated power towers. So I would expect Graham's number to be something like A(5, n) with n something larger than 64 (but not much larger; the Ackermann function is in the order of powers of 2, whereas Graham's number is defined through powers of 3). Therefore, Graham's number could not be considered an "even larger number" as stated in the article, but upper bounds can be expressed through the Ackermann function with reasonably small arguments. Comments? Thanks, Jens Koeplinger 13:28, 27 August 2007 (UTC)
- A(5, n) can be expressed with 3 upward arrows, while Graham's number needs very many.--Patrick 14:08, 27 August 2007 (UTC)
-
- Since Ackermann functions are to base 2 and Graham's number is base 3, it is very difficult to express one form in the other, obviously. However, nested Ackermann operations are roughly equal to the usual Graham definition. For example, A(A(A(...A(1,1)...))) with 68 A's is larger than Graham's number. Mytg8 15:01, 27 August 2007 (UTC)
-
-
- Got it, the first argument of the Ackermann function is nested, so we can't write it down with two reasonably small arguments. Thanks both, Jens Koeplinger 16:56, 27 August 2007 (UTC)
-
-
-
- My bad-I should have used the Ackermann numbers, i.e., A(k)=A(k,k). So A(A(A(...A(1))...)) with 68 A's is the correct form. And to think Friedman's n(4) uses A(187196,187196) A's! Urr, A(187196) A's. One or the other.:) A tad bigger than Graham's, y'think? Mytg8 17:40, 27 August 2007 (UTC)
-
[edit] Obscureness
It was already said that there are too many implementations. Thus I deleted those implementations that I considered obscure. So far so good. The Matlab implementation was promptly readded. But: Matlab is just some CAS, right? I don't see its outstanding importance, let alone that the Ackermann function needs to be explained in terms of this "language". (I would also be happy to see even fewer implementations but couldn't decide on a language.) --53.122.197.194 (talk) 13:57, 26 November 2007 (UTC)
[edit] C Program
Correct me if I'm wrong, but couldn't that program be

 int ackermann(int m, int n)
 {
     if (m == 0) return n+1;
     if (n == 0) return ackermann(m-1, 1);
     return ackermann(m-1, ackermann(m, n-1));
 }

?
Considering that it's 'written' in C and that after a return statement, it wouldn't evaluate the rest? At the very least, I would remove the "m>0" parts, because presumably if m is not zero, it is greater than zero. Sorry, can't log in account right now. Gremlinman. —Preceding unsigned comment added by 24.218.46.235 (talk) 00:05, 5 December 2007 (UTC)
- unsigned int limits the input range to zero or positive integers, which is logical. But this C program is a joke nonetheless, totally useless even for the VAST majority of "conceivable" numbers. It seems to me like the example program is there to make the function "conceivable" also for programmers, which is a good thing. But pseudocode would be more accurate, as the limits of the language would not be so obvious. Hmm. There exist bignum libraries for C, which can tackle numbers of arbitrary size, thus letting the program be limited only by external factors (including memory, time, stack size and so on), but I realize that a program using such libraries would not be as short and simple as this one, and could possibly even waste more people's time because they are tempted to test a program which looks more serious... —Preceding unsigned comment added by 129.240.72.42 (talk) 15:28, 11 April 2008 (UTC)
[edit] Michie memoization reference
There is a citation needed note on the claim that Donald Michie invented memoization. There are citations on both his page Donald Michie and the Memoization pages. I'm not sure whether to copy that reference here, or if it would be better to remove the claim here since there isn't anything Ackermann-specific about it, and if someone wants to go look at the memoization page they'll see the claim right at the top (and the citation).
Or could the call for citation be about the claim that compilers can do memoization automatically? Certainly they can (Haskell for example [3], among plenty of others). But again, it seems to me that such details belong on the Memoization page.
--JamesPaulWhite (talk) 06:31, 13 April 2008 (UTC)