Talk:Hyper operator
This article is very muddy to me. It needs some love. -- imbaczek
- It may be helpful if you can define "love" mathematically. OmegaMan
The definition previously given was incomplete. It defines hyper(a,n,b) = hyper(a,n-1,hyper(a,n,b-1)) without ever defining a base case to which hyper(a,n,b-1) can reduce. This C++ code demonstrates why the definition is invalid without the stipulation that hyper(a,n,0) = 1 for n > 1. Deleting the line containing "if (b == 0) return 1;" results in infinite recursion.
template<class T> T Hyper(T a, T n, T b)
{
    switch (n)
    {
        case 1:
            return a + b;                // hyper1 is addition
        case 2:
            return a * b;                // hyper2 is multiplication
        case 3:
            return IntegerPow(a, b);     // hyper3 is exponentiation (IntegerPow assumed defined elsewhere)
        default:
            if (b == 0) return 1;        // base case: without this, the recursion below never terminates
            return Hyper(a, n - 1, Hyper(a, n, b - 1));
    }
}
// Example: Hyper(2, 4, 3) returns 16, i.e. 2^^3 = 2^(2^2).
130.39.153.46 04:17, 25 Sep 2004 (UTC)
All of the relevant content of this page should be dumped into or merged with the relevant content of a page by the better-known name of "tetration". In any case, the continuing existence of two separate pages to describe the fourth binary operation is redundant and unnecessary. -OmegaMan
Why is there an if statement in the default? Why not just add a case 0 line? He Who Is 20:35, 2 June 2006 (UTC)
- Notice the switch statement is checking n, but the if statement is checking b. --72.140.146.246 23:43, 6 June 2006 (UTC)
Cool Functions
We should investigate functions f(x) such that f(hyper(a,b,n)) = hyper(f(a),b,n-1). Such functions would allow a definition of hyper over the real numbers via f⁻¹(hyper(f(a),b,n-1)). --SurrealWarrior
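A concrete instance of the kind of f being asked for (reading the last argument of hyper(a,b,n) as the rank n, since that is the index being decremented): f(x) = ln(x) steps exponentiation down to multiplication, because
f(hyper(a,b,3)) = ln(a^b) = ln(a)·b = hyper(f(a),b,2), so hyper(a,b,3) = f⁻¹(hyper(f(a),b,2)) = exp(ln(a)·b).
On the other hand, no single f seems able to do the same for multiplication: requiring f(a·b) = f(a) + b for all positive a and b forces f(x) = x + f(1) (take a = 1), which then fails for general a, so the step from multiplication down to addition is where a genuinely new idea is needed.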
That sounds good, and I've recently been working on the same thing. So far I've defined it over all integers n. In the article on Knuth arrow notation, it is mentioned that a (n) b = a (n−1) (a (n) (b−1)). From that I've found that for all even negative n it is simply b+1, and all odd negative n give simply b. That gave me, so far:
a (n) b = b+1 if −n ∈ E, and a (n) b = b if −n ∈ O,
where O and E are the sets of all positive odd and even numbers, respectively. That aside, I've made no progress yet on generalizing over all real, or complex, n.
He Who Is 01:00, 23 April 2006 (UTC)
B.T.W., OmegaMan, I think it would be better merged with Knuth Up-Arrow notation or Conway Chained-Arrow Notation, as those notations extend beyond addition, multiplication, exponentiation, and tetration, to pentation, sextation, and infinitely iterated iterations thereof.
Why not associative?
I would question the statement that addition and multiplication are "defined to be associative". They are defined in terms of disjoint unions of sets and Cartesian products of sets (provided the axiom of choice holds). We are just lucky that they turn out to be associative.
However, one can define an infinite sequence of binary operations on (subsets of) the real numbers which are commutative, associative, distributive over the previous operation, and have identity elements. Just let X <n-th operation> Y = exp^n((ln^n(X)) + (ln^n(Y))). The identity will be exp^n(0). This will do the trick. JRSpriggs 09:07, 13 May 2006 (UTC)
I believe it can be seen as more than a coincidence that addition and multiplication are commutative but exponentiation is not. Consider this: addition can be defined and visualized as a simple combination of ranges on a number line (and it can be flipped to represent its commutativity), and multiplication can be defined and visualized as the creation of a plane figure with different side lengths (and it can be rotated 90 degrees to represent its commutativity), but exponentiation cannot be represented in three dimensions; it transcends dimensions. x^n can be represented as an n-dimensional shape bordered by n-unit-long lines, not in keeping with the pattern created by addition and multiplication. And by the way, exp^n(ln^n(x)) always equals x. He Who Is 01:55, 20 May 2006 (UTC)
- You misread the parentheses. That's an interesting operation that JRSpriggs posted. rspeer / ɹəədsɹ 03:48, 20 May 2006 (UTC)
- I think the parentheses were mistyped, not misread. The operation reads exp^n((ln^n(X)) + (ln^n(Y))), which would (as presented) simplify to X + ln^n(Y). Perhaps it should be exp^n(((ln^n(X)) + (ln^n(Y))))? (NB: sin a + b = (sin a) + b, yet sin a * b = sin (a * b), and presumably that also holds for exp. The problem seems to be distributivity rules.)
- And what do exp^n and ln^n mean anyway?
No... I think they were written correctly. Look at it more closely. At first I thought an extra parenthesis was added to the end, because I'm more used to seeing it written ln x and exp x, w/o parentheses. He Who Is 20:08, 2 June 2006 (UTC)
- So is it e^(ln x + ln y)? Or is that not quite right? And what's the superscript n for?
Superscripts denote iteration: f^n(x) is f acting on x n times. So that function cannot be written any other way, since we don't know how many times to write exp or ln.
- Then e^(ln x + ln y) would be correct for the first operation only.
- To remove any confusion (I hope), X <n-th operation> Y = exp^n{[ln^n(X)] + [ln^n(Y)]}. And for example, exp^3(W) = exp{exp[exp(W)]}. JRSpriggs 20:45, 3 June 2006 (UTC)
That depends on what you mean. Instead of plugging x and y into the formula while ignoring the exponents and then repeatedly plugging the outputs back in (n times), you would first find the nth iterated natural logarithm of x, add it to the nth iterated natural logarithm of y, then take the nth iterated exponential of the result. If you mean that x and y represent the (n−1)th ln of x and y, then yes, you are correct. But I think you thought the function was (exp ∘ (ln x + ln y))^n(x, y) (see Function composition). Each separate function is iterated, not the entire thing. He Who Is 21:15, 3 June 2006 (UTC)
- I am perfectly clear about what I mean. I said what I meant and I meant what I said. For anyone jumping in the middle of this, I was defining something different from the operations described in the article. Let me give another description of the first three:
- X <0> Y = X+Y
- X <1> Y = exp (ln (X) + ln (Y)) = X·Y
- X <2> Y = exp (exp (ln (ln (X)) + ln (ln (Y)))) = exp (ln (X) · ln (Y)) = X^ln(Y) = Y^ln(X)
- more generally, X <n+1> Y = exp (ln (X) <n> ln (Y))
- I hope that this finally makes it as clear to others as it is to me. JRSpriggs 00:57, 5 June 2006 (UTC)
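For readers who want to play with these operations numerically, here is a minimal C++ sketch of the construction above (the helper names iterLn, iterExp, and opN are mine, and the code is only meaningful where the iterated logarithms stay defined, i.e. for sufficiently large positive X and Y):

#include <cmath>

// Apply the natural logarithm n times: ln^n(x).
double iterLn(double x, int n) {
    for (int i = 0; i < n; ++i) x = std::log(x);
    return x;
}

// Apply the exponential n times: exp^n(x).
double iterExp(double x, int n) {
    for (int i = 0; i < n; ++i) x = std::exp(x);
    return x;
}

// X <n> Y = exp^n( ln^n(X) + ln^n(Y) ): n = 0 gives X+Y, n = 1 gives X*Y,
// n = 2 gives X^ln(Y), and so on; each is commutative and associative by construction.
double opN(double X, double Y, int n) {
    return iterExp(iterLn(X, n) + iterLn(Y, n), n);
}

For instance, opN(3, 5, 0) returns 8, opN(3, 5, 1) returns 15, and opN(std::exp(3.0), std::exp(5.0), 2) returns exp(15), matching X <2> Y = X^ln(Y).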
Sorry, I wasn't talking to you. I was talking to the same person you were. (Edit conflict after you posted. I just added mine after yours.) I meant that what he had said depended on what he meant. I should have clarified. He Who Is 01:23, 5 June 2006 (UTC)
- OK. I am sorry that I misunderstood you. I think my examples may help anyway. JRSpriggs 02:27, 5 June 2006 (UTC)
Case for n=0
Where does the definition that hyper(a,0,b) = b + 1 come from? Also, this doesn't seem to fit with the hyper1 operator being addition, hyper2 being multiplication, etc. This case for n = 0 is fine as far as I can see, but I think a case has to be added defining the operation as a + b for n = 1. --72.140.146.246 23:49, 6 June 2006 (UTC)
- To get a+b = hyper(a,1,b) = hyper(a,0,hyper(a,1,b-1)) = hyper(a,0,a+b-1), we need hyper(a,0,c) = a+b = c+1 where c = a+b-1. JRSpriggs 08:45, 7 June 2006 (UTC)
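- A worked expansion on a small case, assuming the usual base case hyper(a,1,0) = a: hyper(3,1,2) = hyper(3,0,hyper(3,1,1)) = hyper(3,1,1) + 1 = (hyper(3,0,hyper(3,1,0))) + 1 = (hyper(3,1,0) + 1) + 1 = (3 + 1) + 1 = 5 = 3 + 2, exactly as addition requires.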
- Yes, that makes sense. I was using a slightly different formula for the recursive branch of the function, where association is from right to left, as in exponentiation. This is the definition I remembered from looking at the article some time ago. Using this definition along with the case b + 1 for n = 0, I got a different answer instead of 7. When I tried it again with the definition in the article, it worked. I'm guessing I simply misremembered the formula - perhaps it should have been b copies of n? Anyway, your explanation helped. --72.140.146.246 19:53, 7 June 2006 (UTC)
Please check the following link concerning the given problem as well:
[A_thought_about_addition[1]]
Beloturkin 09:54, 8 September 2006 (UTC)
Hypothetical Hyper Operator Extension - Real-Order Hyper Operator?
This is an idea I have been trying to investigate for years. Unfortunately, no reasonable solution has been found yet. Maybe someone has already moved a bit further in this field and can share their elaboration with others?
Let's adhere to a short and convenient hyper-operator notation form: hyper (a,n,b) = a (n) b.
In these terms, a (1) b = a+b (addition); a (2) b = ab (multiplication); a (3) b = a^b (exponentiation); a (4) b = a^^b (tetration) etc.
It is quite clear how we can obtain any natural-order hyperoperator this way. The first difficulty appears when we try to extrapolate this sequence below 1; it is not so easy to define what a (0) b means. The popular version is that the a (0) b operator signifies the trivial increment function; however, that definition results in some contradictions described in a separate note: [[2]]
Nevertheless, it would be logical to interpolate the a (n) b operator for fractional orders, or, more generally, to define a (r) b for real values of r. There are some intuitive speculations concerning this extension:
1) a (int(r)) b < a (r) b < a (int(r)+1) b, when a>2, b>2, and r>0, examples:
3+5 < 3 (1.5) 5 < 3*5 , i.e. 8 < 3(1.5)5 < 15;
4*3 < 4 (2.2) 3 < 4^3 , i.e. 12 < 4(2.2)3 < 64 etc.;
2) a (r1) b < a (r2) b for any a>2, b>2, and 0<r1<r2, for example:
12 < 5(1.2)7 < 5(1.5)7 < 5(1.9)7 < 35;
3) f(x) = n (x) m should be continuously differentiable for n>2, m>2, and 0<x<infinity.
One more important condition is that a (r) b must be able to be calculated via iterations of a (r-1) b for any real r, e.g.:
5 (1.32) 4 = 5 (0.32) 5 (0.32) 5 (0.32) 5,
where 7 < 5 (0.32) 5 < 10, and 9 < 5 (1.32) 4 < 20;
8 (2.7) 3 = 8 (1.7) 8 (1.7) 8,
where 16 < 8 (1.7) 8 < 64, and 24 < 8 (2.7) 3 < 512
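As a rough sketch of what that requirement looks like in code for whole-number b (the names hyperReal and hyperRealIterated are hypothetical, and the snippet cannot actually run, since hyperReal, the fractional-order operator itself, is exactly what has not been defined yet):

// Hypothetical real-order operator a (r) b; defining it is the open question here,
// so only a declaration is given.
double hyperReal(double a, double r, double b);

// The iteration condition for whole-number b >= 1: a (r) b is b copies of a combined
// with the order-(r-1) operator, associated from the right,
// e.g. 8 (2.7) 3 = 8 (1.7) (8 (1.7) 8).
double hyperRealIterated(double a, double r, int b) {
    double result = a;                              // the rightmost copy of a
    for (int i = 1; i < b; ++i)
        result = hyperReal(a, r - 1.0, result);     // fold in another copy from the left
    return result;
}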
Finally, we could define an "anti-order" reverse operator
x = S anti-order (a, b), for which a (x) b = S, e.g.:
17 anti-order (10, 7) = 1 (because 10(1)7 = 17);
70 anti-order (10, 7) = 2 (because 10(2)7 = 70);
10,000,000 anti-order (10, 7) = 3 (because 10(3)7 = 10,000,000) etc.
My hypothesis is that the anti-order operation is defined at least for any a>2, b>2 and S>max(a,b), i.e.:
0 < 14 anti-order (10, 7) < 1;
1 < 42 anti-order (10, 7) < 2;
2 < 2,000 anti-order (10, 7) < 3 etc.
The anti-order function should be continuously differentiable for a=const>2, b=const>2, and 2<S<infinity.
That is still an open problem... Beloturkin 15:17, 8 September 2006 (UTC)
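If a monotone real-order operator of the kind conjectured above existed, the anti-order function could at least be computed numerically from property 2) by bisection on the order. A sketch under that assumption (hyperReal and antiOrder are hypothetical names, and the starting bracket [0, 10] is an arbitrary choice for this sketch):

// Hypothetical real-order operator a (r) b, assumed continuous and strictly increasing
// in r for a > 2, b > 2 (property 2 above); its existence is the open problem.
double hyperReal(double a, double r, double b);

// Recover x = S anti-order (a, b), i.e. the r with a (r) b = S, by bisection on the order.
double antiOrder(double S, double a, double b) {
    double lo = 0.0, hi = 10.0;
    for (int i = 0; i < 100; ++i) {                 // 100 halvings: more precision than double needs
        double mid = 0.5 * (lo + hi);
        if (hyperReal(a, mid, b) < S)
            lo = mid;                               // value too small, so the order must be larger
        else
            hi = mid;                               // value at least S, so the order is at most mid
    }
    return 0.5 * (lo + hi);
}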
- The extension of hyper-operators to non-integer ranks is a simple matter of rearranging the definition to push values into a specified interval. Once the values are within this interval, you can interpolate them however you want, provided that they coincide with the integer values already defined. A great example of the extension of a single hyper-operator to non-integer values is that of tetration and the super-logarithm (specifically the linear approximation sections). For this kind of extension, to extend the hyper(n) operator to non-integer inputs, one needs the hyper(n-1) operator and the hyper(n-1) logarithm, and with these you can push the values into a specified interval, so you only need to worry about interpolating between two values instead of between an infinite number of values. Since we are going to need hyper-logarithms to extend all hyper-operators to non-integer values, we are going to need extensions of all hyper-logarithms as well. We can start rearranging the formula now. The definition of hyper-operators is hy_n(x,y) = hy_{n-1}(x, hy_n(x, y-1)). Using this notation, the opposite of this formula is hy_n(x,y) = hylog_{n-1}(x, hy_n(x, y+1)), since it pushes the values in the opposite direction. Combining both of these, we can form a piecewise-defined extension of the hyper(n) operator: use the first formula to push y down when it is above the critical interval, interpolate within the critical interval, and use the second formula to push y up when it is below it.
- The corresponding definition of hyper-logarithms requires more thought. A hyper-logarithm is a function that satisfies hylog_n(x,z) = y iff hy_n(x,y) = z, so each of the pushers above can be translated into hylog_n(x,z) = hylog_n(x, hy_{n-1}(x,z)) - 1 for the first formula, and hylog_n(x,z) = hylog_n(x, hylog_{n-1}(x,z)) + 1 for the second formula. Putting both of these together, we can form a piecewise-defined extension of the hyper(n) logarithm in the same way: push z toward the critical interval with whichever formula applies, and interpolate inside it.
- One thing to note is that these only work above a certain rank. For lower ranks, a different "critical piece" is required (and a different interval for the hyper-logarithm). Now for the fun part. Everything I have just been talking about has been in the general arena of defining hyper(n) in terms of hyper(n-1) for real x, y. Since both the extension of hyper(n) operators and the extension of hyper(n) logarithms given above are in some sense pushing values of n one unit downwards, we can use this as a pusher for interpolating hyper(n) operators to non-integer n as follows. Use some kind of interpolation between addition and multiplication, for example a linear interpolation (since both addition and multiplication are linear, it makes sense that everything between them is too), and use this as the definition of hyper-operators for n between 1 and 2, then use our n-pushers to bring all higher n to fall within this interval. There is a big problem with this. The problem is that we have a definition for some ranges of inputs but not yet for all of them. I have yet to solve this part, but the sub-pushers in the extension of hyper-operators and hyper-logarithms can be used to push values into a square interval, which makes interpolation easier. Once you have some kind of interpolation in this interval, then you have a hyper-operator that takes values for all real x, y, and n, and is almost continuous, if you have done the interpolation correctly. For my original post on this, click here for the forum. AJRobbins (talk) 07:32, 10 December 2007 (UTC)
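To make the pusher idea concrete for a single rank, here is roughly what the linear approximation of tetration that this post points to looks like in code (tetLinear is my name for it; the restriction x > 1 and the critical piece y + 1 on the interval (-1, 0] follow the linear approximation described in the tetration article, not anything established in this discussion):

#include <cmath>

// Linear approximation of tetration x^^y for x > 1 and real y, built from the two
// "pushers": exponentiate to move y down toward the critical interval, take the
// base-x logarithm to move y up toward it, and interpolate linearly on (-1, 0].
double tetLinear(double x, double y) {
    if (y <= -1.0)
        return std::log(tetLinear(x, y + 1.0)) / std::log(x);   // push y upward (hyper-logarithm direction)
    if (y <= 0.0)
        return y + 1.0;                                          // critical piece on (-1, 0]
    return std::pow(x, tetLinear(x, y - 1.0));                   // push y downward (hyper-operator direction)
}

For example, tetLinear(2, 3) returns 16 and tetLinear(2, 0.5) returns about 1.414 (the square root of 2); higher ranks would be handled the same way once a rank-(n-1) operator and its logarithm are available.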
Hyper 0 and other hyper integers
I'm not sure if someone said this already, but what about a hypothetical hyper 0? I think I have a definition: for any a, the hyper 0 operation of a would be a# = a + 1. This fits in with the idea that addition is hyper one - it's chained counting - and hyper 0 would be a "count" function.
Also, what about hyper negative numbers? They could be the inverse functions, hyper negative one would be subtraction, hyper -2 division, then roots, super logs, etc.
Sorry if I'm doing something wrong here, the whole wiki thing is a bit new to me. —Preceding unsigned comment added by 68.198.122.160 (talk) 01:04, 12 December 2007 (UTC)
- There are many problems with what you are talking about. One is that there are multiple definitions of hyper0, depending on which axioms you hold to carry over. If you think that the property hy_n(2,2) = 4 is more important than anything else, then you get hy_0(y,y) = y + 2. If you think that the inverse relation hy_{n-1}(x,y) = hy_n(x, hylog_n(x,y) + 1) is more important than anything else, then you will get hy_0(x,y) = y + 1, and this will also give the same answer for all negative hyper-operators, so hy_{-1}(x,y) = y + 1 and hy_{-2}(x,y) = y + 1 as well, and so on. The problem with this is that the iteration of these operators gives addition prematurely, and so it is debatable whether these can be considered hyper-operators as well. So with these included we actually have 3 different definitions for negative hyper-operators: (1) all the successor function, (2) the hyper-roots, and (3) the hyper-logarithms as you suggest (although you mention roots along with super-logs, which are not of the same type), since all hyper-operators from addition upward have two inverses. Either way is arbitrary, and debatable. AJRobbins (talk) 19:30, 12 December 2007 (UTC)
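As a quick check of that second convention: for n = 1, hy_1(x,y) = x + y has hylog_1(x,y) = y - x, so the inverse relation gives hy_0(x,y) = hy_1(x, hylog_1(x,y) + 1) = x + (y - x + 1) = y + 1, and repeating the same step at each lower rank keeps returning y + 1, which is why every negative rank collapses to the successor function under that choice.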