In his paper [1], Andrew Robbins mentions three properties, two of which are applicable to tetration:
Property 1. Iterated exponential property
\( ^{y}x=x^{\left({ ^{y-1} x }\right)} \) for all real y.
Property 3. Infinite differentiability property
\( \large f(x)\ \text{is}\ C^\infty\ \equiv\ D^k f(x) \) exists for all integers k.
At the bottom of page three, Andrew goes on to say:
Quote: It is the goal of this paper, however, to show that these properties are sufficient to find such an extension, and that the extension found will be unique.
So, I set out to work on a hunch I had (see my posts on sci.math.research for the line of study I was on when I had this hunch), and came up with the following formula:
\( T(x,\ y,\ n) = \begin{cases} \alpha_0\ +\ y\ +\ x^{T(x,\ y-1,\ n-1)}, & n\ >\ 0 \\ \alpha_0\ +\ y, & n\ =\ 0 \end{cases} \)

\( {\Large ^y x}\ =\ \lim_{m,n\to\infty}{\ln^{\small (m)}T(x,\ m+y,\ m+n)} \)
Here, \( \alpha_0 \) is a constant that shifts the function left or right so that it's in the "correct" place, and \( \ln^{\small (m)}(z) \) means the logarithm iterated m times.
Of course, due to the nature of the formula, even if you can find a suitable \( \alpha_0 \) for a particular reference value (e.g., \( ^{0}x = 1 \)), the formula won't precisely satisfy property 1 unless you take the limits all the way to infinity. But with a relatively small m it can get close enough to exceed the precision of any physical computer.
But the point is, the function is infinitely differentiable, satisfying property 3. And, as m and n are increased to the point where machine precision is exceeded, property 1 is satisfied as well.
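The formula is easy to sketch in code. Below is a minimal sketch, not the exact program I used: it fixes \( \alpha_0 = 0 \) purely for illustration (in practice \( \alpha_0 \) would be tuned so that, e.g., \( ^{0}x = 1 \)), and it keeps m and n small, since T is a growing power tower and overflows quickly.

```python
import math

def T(x, y, n, alpha0=0.0):
    """The recursive approximant: alpha0 + y + x**T(x, y-1, n-1) for n > 0,
    and the base case alpha0 + y at n = 0."""
    if n <= 0:
        return alpha0 + y
    return alpha0 + y + x ** T(x, y - 1, n - 1, alpha0)

def tetration_approx(x, y, m, n, alpha0=0.0):
    """Approximate ^y x by applying ln m times to T(x, m+y, m+n)."""
    val = T(x, m + y, m + n, alpha0)
    for _ in range(m):
        val = math.log(val)
    return val

print(tetration_approx(1.5, 1.0, 2, 2))  # finite, but alpha0 still needs tuning
```

With \( \alpha_0 = 0 \) the value at y = 1 is nowhere near x itself, which is exactly why the constant has to be solved for against a reference point before comparing against anyone else's numbers.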
However, a quick check shows that the values calculated by my formula aren't remotely close to Andrew's.
I started to panic at this point. Andrew's method aims to satisfy both properties 1 and 3, and I was fairly certain (without writing a very long and esoteric proof) that my formula does as well, and yet we get very different answers. How could this be?
And then it hit me. Think of the iterated multiplication formula (you know, "exponentiation"). It satisfies the following three criteria (the first to give us a known reference point):
\( E(a,\ 0)\ =\ 1
\\ E(a,\ z+1)\ =\ a\,E(a,\ z)
\\ E\ \text{is}\ C^{\infty} \)
Okay, so that's sufficient to define the exponentiation function uniquely, right?
Well, what about this formula:
\( F(a,\ z)\ =\ E\left({a,\ z+0.1\sin{\left({2\pi z}\right)}}\right) \)
Is Property 1b (iterated multiplication property) still valid?
\( F(a,\ z+1)\ \overset{\tiny \text{?}}{=}\ a\,F(a,\ z)
\\ E\left(a,\ (z+1)+0.1\sin\left(2\pi (z+1)\right)\right)\ \overset{\tiny \text{?}}{=}\ a\,E\left(a,\ z+0.1\sin\left(2\pi z\right)\right)
\\ E\left(a,\ (z+1)+0.1\sin\left(2\pi z\right)\right)\ \overset{\tiny \text{?}}{=}\ a\,E\left(a,\ z+0.1\sin\left(2\pi z\right)\right)
\\ E\left(a,\ \left[z+0.1\sin\left(2\pi z\right)\right]+1\right)\ \overset{\tiny \text{?}}{=}\ a\,E\left(a,\ \left[z+0.1\sin\left(2\pi z\right)\right]\right)
\\ E\left(a,\ Z+1\right)\ \overset{\tiny \surd}{=}\ a\,E\left(a,\ Z\right) \)
where \( Z = z+0.1\sin\left(2\pi z\right) \), and the last line holds because E itself satisfies property 1b.
The perturbation \( 0.1\sin\left(2\pi z\right) \) has period 1, so this function still has property 1. And this trivial change obviously hasn't altered the infinite differentiability property either. So as you can see, properties 1 and 3, unfortunately, are not sufficient to define a unique solution.
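The algebra above is easy to confirm numerically. A minimal sketch, using ordinary exponentiation `a ** z` as the reference solution E:

```python
import math

def E(a, z):
    # ordinary exponentiation: the "known" solution of the functional equation
    return a ** z

def F(a, z):
    # perturbed solution: shift the argument by a 1-periodic wobble
    return E(a, z + 0.1 * math.sin(2 * math.pi * z))

# F still satisfies F(a, z+1) = a * F(a, z), even though F != E
a = 2.0
for z in [0.0, 0.37, 1.8, -2.5]:
    assert math.isclose(F(a, z + 1), a * F(a, z), rel_tol=1e-9)

# ...yet the two functions genuinely disagree between the integers
assert not math.isclose(F(a, 0.25), E(a, 0.25))
```

Both functions agree at every integer z and satisfy the same recurrence and smoothness conditions; only their values between the integers differ.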
Note: I made a similar claim about the Gamma function in a personal correspondence with Andrew Robbins, but after thinking it through, I realize I was mistaken in that particular case. Because the Gamma function is fixed relative to its input values, its inputs cannot be shifted around.
Iterated multiplication (i.e., exponentiation) does not suffer this drawback. Of course, this makes defining the "correct" formula for iterated multiplication a little tricky. Pretend you know nothing about exponentiation, and try to figure out the "correct" formula. One can interpolate with square roots, cube roots, etc., which only gives you values for rational inputs. Using Cauchy sequences, one could prove that this extends to real answers, but until one comes up with a series expansion or defines a limit for repeated multiplication by \( \small {1+\epsilon} \), for example, how can one be sure their solution is correct?
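To make the "pretend you know nothing" exercise concrete, here is a sketch of the rational-interpolation approach: integer powers by repeated multiplication, q-th roots by bisection, and \( a^{p/q} \) defined only at rational points. The helper names are mine, purely for illustration:

```python
def int_pow(a, n):
    # repeated multiplication: the only "exponentiation" we allow ourselves
    result = 1.0
    for _ in range(n):
        result *= a
    return result

def qth_root(y, q, tol=1e-12):
    # positive q-th root of y >= 0 via bisection (no ** operator needed)
    lo, hi = 0.0, max(1.0, y)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if int_pow(mid, q) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def rational_pow(a, p, q):
    # exponentiation interpolated at the rational point p/q (p, q positive ints)
    return qth_root(int_pow(a, p), q)

print(rational_pow(2.0, 3, 2))  # ~2.8284..., i.e. 2^(3/2)
```

This pins down the function on the rationals, and Cauchy sequences extend it to the reals, but nothing in the construction itself certifies that this is the "correct" extension rather than one of the wobbled alternatives above.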
So, never fear. My formula can satisfy both of these properties and still be "wrong". I'm not upset about it, because now I have a new pursuit: trying to define a third uniqueness criterion (fourth if you count a reference point as a criterion).
I'd be interested if anyone knows of any resources that give a good such criterion, including a decent explanation of why that criterion is a good one.
[1] Robbins, Andrew (2005). Solving for the Analytic Piecewise Extension of Tetration and the Super-logarithm. http://tetration.itgo.com/paper.html
PS: I've never used TeX before, so apologies if I went overboard.
Edit: Moved postscript and updated LaTeX formats (I'm still learning how to use this).