Hey folks,

The conjecture that the three methods of tetration yield equal functions is shattered.

I received an e-mail from Dan Asimov in which he mentions that the continuous iterations of \( b^x \), \( b=\sqrt{2} \), at the lower and upper real fixed points differ! He, Dean Hickerson and Richard Schroeppel found this around 1991; however, there is no paper about it.

The numerical computations obscured this fact because the differences are of the order of \( 10^{-24} \). I verified it again by setting the working precision to 100 decimal digits and using the recurrence formula described here:

\( f^{\circ t}(x)=\lim_{n\to\infty} f^{\circ n}\bigl(a(1-r^t) + r^t f^{\circ -n}(x)\bigr) \), where \( a \) is a fixed point of \( f \), \( r=f'(a) \), and \( f(x)=\sqrt{2}^x \).

Currently there is a computation in progress (running for a day now) in which I compute the differences over the interval \( 2..4 \) with a precision of 150 (internally 450) decimal digits (let's see whether it finishes in my lifetime). I will post the graph here when it is finished.
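For those who want to reproduce the effect without a computer algebra system, here is a rough sketch of the recurrence in plain Python, using the standard `decimal` module for the high working precision. The precision (120 digits) and iteration depth (250) are my own choices, not those of the running computation; note that at the attracting fixed point 2 the formula is applied with the roles of \( f \) and \( f^{-1} \) exchanged, since there it is \( f^{\circ n}(x) \), not \( f^{\circ -n}(x) \), that converges to the fixed point.

```python
from decimal import Decimal, getcontext

getcontext().prec = 120          # working precision in decimal digits

LN_B = Decimal(2).sqrt().ln()    # ln(sqrt(2))
A_LO, A_UP = Decimal(2), Decimal(4)   # fixed points of sqrt(2)^x
R_LO, R_UP = 2 * LN_B, 4 * LN_B       # slopes f'(2) = ln 2, f'(4) = 2 ln 2

def f(x):        # f(x) = sqrt(2)^x
    return (x * LN_B).exp()

def f_inv(y):    # f^(-1)(y) = ln(y)/ln(sqrt(2))
    return y.ln() / LN_B

def half_iter_upper(x, n=250):
    """Regular half-iterate at the repelling fixed point 4:
       f^n( a + r^(1/2) * (f^(-n)(x) - a) )."""
    z = x
    for _ in range(n):
        z = f_inv(z)             # f^(-n)(x) -> 4 for x in (2, 4)
    z = A_UP + R_UP.sqrt() * (z - A_UP)
    for _ in range(n):
        z = f(z)
    return z

def half_iter_lower(x, n=250):
    """Regular half-iterate at the attracting fixed point 2,
       with f and f^(-1) exchanged: f^(-n)( a + r^(1/2) * (f^n(x) - a) )."""
    z = x
    for _ in range(n):
        z = f(z)                 # f^n(x) -> 2 for x in (2, 4)
    z = A_LO + R_LO.sqrt() * (z - A_LO)
    for _ in range(n):
        z = f_inv(z)
    return z

x = Decimal(3)
print(half_iter_upper(x) - half_iter_lower(x))   # tiny, but not zero
```

Both functions satisfy \( f^{\circ 1/2}(f^{\circ 1/2}(x)) \approx f(x) \) to well beyond the size of their mutual difference, so the discrepancy is not a numerical artifact of the truncated limit.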

Generally we should expect it to be the exception, rather than the rule, that the regular iterations at two different fixed points give the same function. In fact, apart from the identity function I don't know of any analytic function where this would be the case!

So the first lesson is: don't trust naive numerical verifications. We have to reconsider the equality of our 3 methods, and I guess differences will show up there too.

But apart from that, there are also hard non-numerical consequences:

There cannot be *any* function \( f \) with \( f(f(x))=\sqrt{2}^x \) that is analytic on \( (2-\epsilon, 4+\epsilon) \). At least one of the fixed points must be a singularity in the sense that \( \lim_{x\to a} f'(x) \) does not exist (though of course \( \lim_{x\to a} f(x)=a \)). This applies in particular to Andrew's \( f(x)=\text{sexp}_{\sqrt{2}}(\text{slog}_{\sqrt{2}}(x)+1/2) \) and to Gottfried's solution. And the same is probably true for any base \( 1<b<\eta \).

I can demonstrate the effect with the continuous iteration of the function

\( f(x)=x^2+x-1/16 \). The effect is similar to that for \( f(x)=b^x \); however, the differences already occur at a magnitude of \( 10^{-15} \), so the computations are not that expensive. Let's have a look at the graph of the function:

It has two fixed points, one at \( -1/4 \) and one at \( 1/4 \), with slopes \( 1/2 \) and \( 3/2 \) respectively. So the condition for regular iteration, a positive slope \( \neq 1 \), is satisfied. With the formula given above we compute the iterative square root at both fixed points and their difference:
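For concreteness, a small Python sketch of that computation (my own reimplementation with the stdlib `decimal` module; the precision and iteration depth are simply what I found sufficient). Solving \( y = x^2+x-1/16 \) gives the inverse branch \( f^{-1}(y) = \bigl(-1+\sqrt{4y+5/4}\bigr)/2 \), which passes through both fixed points; as before, at the attracting fixed point \( -1/4 \) the roles of \( f \) and \( f^{-1} \) in the limit formula are exchanged.

```python
from decimal import Decimal, getcontext

getcontext().prec = 100                          # plenty of head room

A_LO, A_UP = Decimal("-0.25"), Decimal("0.25")   # fixed points of f
R_LO, R_UP = Decimal("0.5"), Decimal("1.5")      # slopes f'(-1/4), f'(1/4)

def f(x):                        # f(x) = x^2 + x - 1/16
    return x * x + x - Decimal(1) / 16

def f_inv(y):                    # inverse branch through both fixed points
    return (-1 + (4 * y + Decimal(5) / 4).sqrt()) / 2

def sqrt_iter_upper(x, n=150):
    """Iterative square root via regular iteration at the repelling
       fixed point 1/4: f^n( a + sqrt(r) * (f^(-n)(x) - a) )."""
    z = x
    for _ in range(n):
        z = f_inv(z)             # f^(-n)(x) -> 1/4 on (-1/4, 1/4)
    z = A_UP + R_UP.sqrt() * (z - A_UP)
    for _ in range(n):
        z = f(z)
    return z

def sqrt_iter_lower(x, n=150):
    """The same at the attracting fixed point -1/4, with f and f^(-1)
       exchanged: f^(-n)( a + sqrt(r) * (f^n(x) - a) )."""
    z = x
    for _ in range(n):
        z = f(z)                 # f^n(x) -> -1/4 on (-1/4, 1/4)
    z = A_LO + R_LO.sqrt() * (z - A_LO)
    for _ in range(n):
        z = f_inv(z)
    return z

x = Decimal("0.1")
print(sqrt_iter_upper(x) - sqrt_iter_lower(x))   # tiny, but nonzero
```

Both candidates square (under composition) to \( f \) far more accurately than they agree with each other, so the difference is genuine rather than truncation error of the limit.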

We see this oscillating behaviour, which implies that if the first derivative of one of the two functions exists at a fixed point, then it does not exist for the other.
