Extending tetration to base e
#1
See my exact solution for base e^(1/e) for background. If the solution I provided there is accepted as unique (i.e., correct), that lends strength to the claim that the following solution is correct as well.

Let's start with the basics. Here, for brevity, assume that eta is e^(1/e). (Eta looks like an "n", but it's effectively the Greek letter "e", and given its relationship to the constant e, I thought it would make a good symbol.)

\( \begin{eqnarray}
\eta^y
& = & \left(e^{\frac{1}{e}}\right)^y \\
& = & e^{\frac{1}{e} y}
\end{eqnarray} \)

Okay, so far, so good. Now, try this one on for size:

\( \begin{eqnarray}
\eta^{(e^y)}
& = & \left(e^{\frac{1}{e}}\right)^{(e^y)} \\
& = & e^{\left(\frac{1}{e}\ \times\ e^y\right)} \\
& = & e^{\left(e^{(y-1)}\right)}
\end{eqnarray} \)
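
Here's a quick numerical sanity check of that identity. It's a throwaway Python sketch, nothing more; the names in it are just my own choices.

[code]
import math

ETA = math.exp(1 / math.e)  # eta = e^(1/e)

# Check eta^(e^y) against e^(e^(y-1)) for a few sample heights.
for y in (0.0, 0.5, 1.0, 2.0):
    lhs = ETA ** math.exp(y)
    rhs = math.exp(math.exp(y - 1))
    print(y, lhs, rhs, abs(lhs - rhs))
[/code]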

Interesting... but is this useful? Well, let's go one more:

\( \begin{eqnarray}
\eta^{\left(e^{(e^y)}\right)}
& = & \left(e^{\frac{1}{e}}\right)^{\left(e^{(e^y)}\right)} \\
& = & e^{\left(\frac{1}{e}\ \times\ e^{(e^y)}\right)} \\
& = & e^{e^{\left((e^y) - 1\right)}} \\
& = & e^{e^{\left((e^y)\left(1 - \delta\right)\right)}},\text{ where } \delta\ =\ \frac{1}{e^y}
\end{eqnarray} \)
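
The same kind of sanity check works for the (1-delta) form; again, just a rough Python sketch of mine. Note how fast delta = 1/e^y shrinks as y grows.

[code]
import math

ETA = math.exp(1 / math.e)

# Check eta^(e^(e^y)) against e^(e^(e^y * (1 - delta))), with delta = 1/e^y.
for y in (0.5, 1.0, 1.5, 2.0):
    delta = math.exp(-y)
    lhs = ETA ** math.exp(math.exp(y))
    rhs = math.exp(math.exp(math.exp(y) * (1 - delta)))
    print(y, delta, lhs, rhs)
[/code]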

Hopefully it's apparent where I'm going with this. Let y be the mth tetration of e, with m=5 sufficient for delta to fall below machine precision in all practical circumstances (delta being evaluated at m-1=4).

\( \begin{eqnarray}
\eta^{(^m e)}
& = & \eta^{\left(e^{(e^{(^{(m-2)} e)})}\right)} \\
& = & \left(e^{\frac{1}{e}}\right)^{\left(e^{(e^{(^{(m-2)} e)})}\right)} \\
& = & e^{\left(\frac{1}{e}\ \times\ e^{\left(e^{(^{(m-2)} e)}\right)}\right)} \\
& = & e^{e^{\left((e^{(^{(m-2)} e)}) - 1\right)}} \\
& = & e^{e^{\left((^{(m-1)} e)\left(1 - \delta\right)\right)}},\text{ where } \delta\ =\ \frac{1}{(^{(m-1)} e)}
\end{eqnarray} \)
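
To see why m=5 is already overkill, here's a rough Python sketch that tracks log10 of the tetrations of e (the log10 bookkeeping is just my way of dodging overflow):

[code]
import math

# delta = 1/(^(m-1) e).  Since ^k e = e^(^(k-1) e), we have
# log10(^k e) = (^(k-1) e) / ln(10) = 10^(log10(^(k-1) e)) / ln(10).
L = math.log10(math.e)              # log10(^1 e)
for k in range(2, 5):               # k = m-1 for m = 3, 4, 5
    L = 10.0 ** L / math.log(10.0)  # log10(^k e)
    print(f"m = {k + 1}:  log10(delta) = {-L:.6g}")
[/code]

At m=4, delta is only about 2.6 x 10^-7, but at m=5 it is on the order of 10^(-1.66 million), far below anything double precision, or even extended precision, can resolve.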

Fascinating. But still, is this useful? Well, for this, we need a new function, based on tetration of base eta, but which equals e at negative infinity, and which equals something greater than e at y = 0. Think of it as taking eta^^y just below its asymptote, all the way to infinity and beyond, wrapping around at negative infinity, but just above the asymptote instead of below. I call this new function eta_b-check, where b is a particular base we're interested in. Here's how it looks, omitting the b for the assumed base e.

\( \begin{eqnarray}
{}^{(y+1)} \check \eta
& = & \eta^{\left(^{y} \check \eta\right)} \\
\text{or, alternatively:} \\
\log_{\eta}\left(^{y} \check \eta \right)
& = & {}^{(y-1)} \check \eta \\
{}^{(-\infty)} \check \eta & = & e \\
{}^{y} \check \eta & > & e,\text{ for all } y\ >\ -\infty
\end{eqnarray} \)
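
To get a feel for the "wrap around the asymptote" picture, here's a rough Python sketch that just iterates x -> eta^x from a point slightly below e and a point slightly above it. The cutoff at 700 is only there to keep exp() from overflowing a double; nothing else about it is meaningful.

[code]
import math

ETA_LOG = 1 / math.e                 # ln(eta), so eta^x = exp(x/e)

def iterate_eta_exp(x0, n):
    """Apply x -> eta^x, n times; return inf once the orbit blows up."""
    x = x0
    for _ in range(n):
        if x * ETA_LOG > 700:        # exp() would overflow a double
            return math.inf
        x = math.exp(x * ETA_LOG)
    return x

# Just below e: the orbit creeps up toward e and never crosses it (the usual
# asymptote of eta^^y).  Just above e: the orbit creeps away from e and
# eventually escapes to infinity -- that upper branch is eta-check.
for x0 in (math.e - 0.001, math.e + 0.001):
    print(x0, [iterate_eta_exp(x0, n) for n in (1000, 5000, 10000)])
[/code]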

(The proof that there is a unique solution for eta-check is very similar to the one I gave for the tetration of eta. I can provide it in a separate post if required.)

As it turns out, we can even find the value of eta_b-check(0) to arbitrary precision, relative to any particular base. And I know that I'm not the first to find this function. At the very least, Peter Walker described its inverse, or something very similar, if I'm reading his paper correctly. So this function is not new. However, I haven't seen the following proof (Peter had something very close, so again, I can't take all the credit, if I can even take any at all).

It turns out, there exists an exponent mu for eta-check such that:

\( {}^{\mu} \check \eta\ =\ {}^{y} e,\text{ for sufficiently large } y \)

I say sufficiently large, but really we're talking about a limit as we go to infinity. Beyond a certain point, though, mu increases pretty much linearly with y, to extreme precision (this has a very significant interpretation, which I'll get to if someone doesn't beat me to the punch). For y=4 or 5 or so, computer precision is not sufficient to tell the limiting value of mu from the "approximate" value.

Anyway, here comes the fun part:

\( \begin{eqnarray}
\ln^{(2)}\left(^{\mu} \check \eta\right)
& = & \ln^{(2)}\left(^{y} e\right) \\
& = & {}^{(y-2)} e
\end{eqnarray} \)

and

\( \begin{eqnarray}
\ln^{(2)}\left(^{(\mu+1)} \check \eta \right)
& = & \ln^{(2)}\left(\eta^{\left(^{\mu} \check \eta \right)} \right) \\
& = & \ln^{(2)}\left(\eta^{\left(^{y} e\right)} \right) \\
& = & \ln^{(2)}\left(e^{e^{\left((^{(y-1)} e)\left(1 - \delta\right)\right)}} \right),\text{ where } \delta\ =\ \frac{1}{(^{(y-1)} e)} \\
& = & (^{(y-1)} e)\left(1 - \delta\right)
\end{eqnarray} \)

For sufficiently large y, this function is effectively exact (it is exact when you take the limit to infinity). Furthermore, notice that I swapped out the integer m for the real exponent y. Finally, notice that if eta-check is unique as I claim it is, then the tetration of base e that I just defined has a very strong claim on uniqueness. In other words, if some other method does not agree, I claim that this function is "correct", and the other function is displacing its exponents by some cyclic function.
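
As a sanity check on that last chain at integer heights, where everything collapses to ordinary power towers, here is a throwaway mpmath script of mine (mpmath only so the tower at y=3 doesn't overflow; none of this is the real machinery):

[code]
from mpmath import mp, mpf, exp, ln, e

mp.dps = 30                          # 30 decimal digits of working precision

def tetrate_e(k):
    """^k e for small integer k."""
    x = mpf(1)
    for _ in range(k):
        x = exp(x)
    return x

eta = exp(1 / e)
for y in (2, 3):
    lhs = ln(ln(eta ** tetrate_e(y)))
    rhs = tetrate_e(y - 1) * (1 - 1 / tetrate_e(y - 1))
    print(y, lhs, rhs)
[/code]
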
#2
By the way, the fact that tetration was so easily extended to base e should be a very strong indicator that all bases greater than eta can be solved by this method. The exact solution is very slow to converge, requiring ridiculous iteration counts, and double-precision math is out of the question if you want more than a handful of digits.

However, I've already found some "helper" functions that reduce the number of iterations necessary, and for values of eta-check just above e, i.e., e(1+delta), we can program iterated exponential functions (base eta) with power-of-two iteration counts:

\( \eta^{(2^m)}(e(1+\delta))\ =\ e\ \times\ \sum_{n=0}^{k} \frac{a_{m,n}\, \delta^n}{n!} \)

Notice that when you're iterating, you pull the factor of e out, and you don't put it back in, because each time you iterate, you're going to have to pull it out again anyway.
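
In fact, pulling the e out means each iteration simply replaces delta with e^delta - 1, so one way the a_(m,n) could be generated is by composing that series with itself 2^m times. Here's a rough sympy sketch of that idea; the symbol names, truncation order, and iteration cap are arbitrary choices of mine, not a finished implementation.

[code]
import sympy as sp

d = sp.symbols('delta')
K = 8                    # truncation order k for the series in delta
M = 3                    # go up to 2^M iterations

# One base-eta exponentiation of e*(1+delta) gives e*exp(delta), i.e. with the
# factor of e pulled out, delta is replaced by exp(delta) - 1.  So 2^m
# iterations amount to composing delta -> exp(delta) - 1 with itself 2^m times.
expr, step = d, 0
for m in range(M + 1):
    while step < 2 ** m:
        expr = sp.series(sp.exp(expr) - 1, d, 0, K + 1).removeO()
        step += 1
    # a_(m,n) = n! * [coefficient of delta^n in 1 + composed series]
    a = [sp.factorial(n) * sp.expand(1 + expr).coeff(d, n) for n in range(K + 1)]
    print(f"2^{m} iterations:", a)
[/code]

For a single iteration (2^0) this just gives a_(0,n) = 1 for every n, i.e., eta^(e(1+delta)) = e*e^delta, which is a quick sanity check.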

The cutoff value k could be determined algorithmically, and the constants a_m,n could be calculated once on startup. Similar helper functions could be written for iterated logarithms. And depending on the accuracy you require, it's not out of the question to have helper functions up to 16 or 32 iterations, more if you've got a good math library. I'm currently trying to write 1, 2, 4, 8, and 16-iteration helper functions for eta^y and log_eta(y), using GMP. My time will be limited this weekend, so I suspect I won't finish that project for a few weeks.

An interpolation function for eta-check(-n), for large n, could be used to overcome the need to iterate a ridiculous number of times to get to the point where linear interpolation is accurate. Peter Walker's paper has a function which might be the correct one, or a good second-order approximation anyway. It needs to be transformed, because he was solving for the superlog.

A second-order approximation can take 5 digits of accuracy up to 15, reducing the iteration count from 10^15 to just 10^5. (I know, "just" 10^5 iterations, is that all?) I get about 11-12 digits of accuracy using an 8000-iteration variant, which works out to about 1 part in 8000^3. I get about an extra digit compared to a 4000-iteration variant. I have the constants calculated in a table, sufficient to go up to 32000 iterations, but I don't have the patience to wait for it. In theory, though, that one should give me 13-14 digits of accuracy, almost sufficient for subsequent manipulation with double-precision math. Once I have my 16-count iteration functions written, I should be able to push my work out to 20 digits of accuracy.
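
The one-digit-per-factor-of-10 pain is easy to see directly: iterate log_eta(x) = e*ln(x) down toward e and count how many steps it takes to get within a given tolerance. A rough Python sketch (the starting point 5.0 is arbitrary):

[code]
import math

def iterations_to_reach(tol, x0=5.0):
    """Count log_eta iterations (log_eta(x) = e*ln(x)) until x is within tol of e."""
    x, n = x0, 0
    while x - math.e > tol:
        x = math.e * math.log(x)
        n += 1
    return n

for tol in (1e-3, 1e-4, 1e-5, 1e-6):
    print(f"tol = {tol:g}: {iterations_to_reach(tol)} iterations")
[/code]

The count grows by roughly a factor of 10 per extra digit, which is exactly why a good seed far down the tail is worth so much.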

For the interpolation function, I'll most likely end up using polynomial interpolation, with the degree of the polynomial limited by the iteration count. The good news is, with sufficient iterations, you should be able to get good third-order precision, which could extend 6 digits to 24 ("only" a million iterations, but with helper functions, this can be reduced to a few tens of thousands). I'm not sure who needs more than 24 digits of accuracy, but if you do, well, it's going to take a few minutes to crunch the numbers.

Luckily, we can probably find very precise answers over a very short interval, and use those to figure out the first few derivatives. Even though accuracy might be limited to 24 digits, for example, precision over a short interval of length 0.001 should easily be 27-30 digits. For a large one-time cost, a distributed effort could even get 50-100 digits or more, with each computer calculating just one point. A large collection of points could be used to build a table of, say, all 999 points in the interval -1 to 0 with a spacing of 0.001, plus all 21 points in the interval [-0.00001, 0.00001] with spacing 0.000001.

And I'm leaving out the possibility that better helper functions are available to speed up convergence by another few factors. Anyway, lots to think about, but I need to get back to my mad scientist "lab".