I thought I'd describe the way I've been programming lately, and why my method doesn't work perfectly. Because pari-gp doesn't allow exponentials larger than \( \exp(1\text{E}6) \), I've had to shortcut my code, as the recursive process hits values much higher than that before becoming perfectly accurate. For that reason I'm beginning to alter my approach.
My first way of doing this was by creating a matrix add-on, where we calculate the Taylor series of \( \varphi \). This means we create a small INIT.dat file which stores all the \( a_k \),
\(
\beta_\lambda(s) = \sum_{k=1}^\infty a_k e^{k\lambda s} \,\,\text{for}\,\,\Re(s) < 1\\
\)
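For concreteness, here is a minimal gp sketch of how such a stored file could be used. The name INIT.dat is from the post, but the file format (a plain gp vector of coefficients) and the helper name beta_series are my assumptions:

    \\ Hypothetical format: INIT.dat contains a gp vector [a1, a2, ..., aN];
    \\ read() parses and evaluates the file's contents, returning that vector.
    a = read("INIT.dat");

    \\ Truncated exponential series for beta_lambda(s); only valid for Re(s) < 1.
    beta_series(s, lam) = sum(k = 1, #a, a[k] * exp(k*lam*s));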
The second breakthrough came when looking at Tommy's Gaussian method. Tommy's method admits no exponential series like this, but what starts to happen for large values is something magical. I had never noticed it before, and it's exactly what I was missing.
Recall that,
\(
\log\beta_\lambda(s+1) = \beta_\lambda(s)-\log(1+e^{-\lambda s})\\
\)
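Since this functional equation determines \( \beta_\lambda \) from values far to the left, where \( \beta_\lambda \approx 0 \), one way to evaluate it numerically is to iterate the equation rightward from a seed of 0. A minimal gp sketch (the seed and the fixed depth are my choices, not the INIT.dat method above):

    \\ Evaluate beta_lambda(s) by iterating beta(t+1) = exp(beta(t))/(1 + exp(-lam*t))
    \\ rightward from the seed beta(s - depth) ~ 0.
    beta(s, lam, depth = 100) = {
      my(z = 0.);
      forstep(j = depth, 1, -1,
        z = exp(z) / (1 + exp(-lam*(s - j))));
      z;
    }

Once \( \Re(s) \) is moderately large the internal exp() overflows; that is exactly the pari-gp limitation described above, and the motivation for what follows.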
And that,
\(
\tau_\lambda^{n}(s) = \log^{\circ n} \beta_\lambda(s+n) - \beta_\lambda(s)\\
\)
which satisfy the recursion,
\(
\tau_\lambda^{n+1}(s) = \log(1 + \frac{\tau_\lambda^n(s+1)}{\beta_\lambda(s+1)}) - \log(1+e^{-\lambda s})\\
\)
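A direct gp transcription, using the beta sketch above (the base case \( \tau_\lambda^1(s) = \log\beta_\lambda(s+1) - \beta_\lambda(s) = -\log(1+e^{-\lambda s}) \) is immediate from the functional equation):

    \\ tau_n(s) = log^(n) beta(s+n) - beta(s), computed via the recursion above.
    tau(s, lam, n) =
      if(n == 1,
        -log(1 + exp(-lam*s)),
        log(1 + tau(s + 1, lam, n - 1)/beta(s + 1, lam)) - log(1 + exp(-lam*s)));

Only small n are feasible this way: the recursion needs \( \beta_\lambda \) out to \( s+n-1 \), which overflows almost immediately.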
Now, for large \( \Re(s) \) we have,
\(
\log\left(1 + \frac{\tau_\lambda^n(s+1)}{\beta_\lambda(s+1)}\right) = \frac{\tau_\lambda^n(s+1)}{\beta_\lambda(s+1)} + \mathcal{O}\left(\left(\frac{\tau_\lambda^n(s+1)}{\beta_\lambda(s+1)}\right)^2\right)\\
\)
Even better, this expansion works so long as \( \beta_\lambda(s+1) \) is large. And \( \beta_\lambda \) is an asymptotic solution to tetration: \( \beta_\lambda(4) \) is already astronomical. So let's just scrap the \( \log(1+x) \) altogether! This gives us the chance to make an asymptotic approximation that always works.
\(
\tau_\lambda^1(s) = -\log(1+e^{-\lambda s})\\
\tau_\lambda^{n+1}(s) \sim \frac{\tau_\lambda^{n}(s+1)}{\beta_\lambda(s+1)} -\log(1+e^{-\lambda s})\\
\)
BUT THIS JUST PRODUCES AN ASYMPTOTIC SERIES!
So effectively this constructs an asymptotic solution to the \( \beta \)-method, and pulling back with logs is even easier now. We can choose the depth of the asymptotic series. This is very reminiscent of Kouznetsov; but again, this isn't Kneser's solution.
I'll derive the asymptotic series here. This is mostly important for bypassing errors with iterated logarithms; furthermore, it is meant for high-speed calculation of the \( \beta \)-method far out in the complex plane.
Starting with the recursion,
\(
\rho_\lambda^1(s) = -\log(1+e^{-\lambda s})\\
\rho_\lambda^{n+1}(s) = \frac{\rho_\lambda^{n}(s+1)}{\beta_\lambda(s+1)} -\log(1+e^{-\lambda s})\\
\)
We get that,
\(
\rho_\lambda(s) = \lim_{n\to\infty} \rho_\lambda^n(s) = -\sum_{k=0}^\infty \dfrac{\log(1+e^{-\lambda(s+k)})}{\prod_{j=1}^k \beta_\lambda(s+j)}\\
\)
which satisfies the functional equation,
\(
\rho_\lambda(s) = \frac{\rho_\lambda(s+1)}{\beta_\lambda(s+1)} -\log(1+e^{-\lambda s})\\
\)
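The truncated sum is cheap to evaluate in gp. A sketch (the overflow guard, the truncation threshold, and the reuse of the functional equation to step \( \beta_\lambda \) forward are my choices):

    \\ rho_lambda(s) via the sum above, truncated after n terms.
    \\ Successive beta values come from the functional equation rather than being
    \\ recomputed; once beta is huge, remaining terms are far below precision.
    rho(s, lam, n = 20) = {
      my(acc = -log(1 + exp(-lam*s)));             \\ k = 0 term (empty product = 1)
      my(P = 1., b = iferr(beta(s + 1, lam), E, return(acc)));
      for(k = 1, n,
        P *= b;                                     \\ P = prod_{j=1}^k beta_lambda(s+j)
        acc -= log(1 + exp(-lam*(s + k))) / P;
        if(real(b) > 1e5, break);                   \\ further terms are negligible
        b = exp(b) / (1 + exp(-lam*(s + k))));      \\ beta_lambda(s+k+1)
      acc;
    }

As a rough check, tau(4., 1, 2) and rho(4., 1, 2) should agree to dozens of digits (raise \p first), with the agreement improving tetrationally as \( \Re(s) \) grows, while rho keeps working where tau has long since overflowed.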
Since \( \tau_\lambda \) satisfies the same functional equation up to the quadratic error in \( \log(1+x) \), and that error dies off tetrationally fast, this means that,
\(
\tau_\lambda \sim \rho_\lambda\,\,\text{as}\,\,\Re(s) \to \infty\\
\)
Which further means that,
\(
\text{tet}_\beta(s+x_0) = \beta_\lambda(s) + \tau_\lambda(s) \sim \beta_\lambda(s) + \rho_\lambda(s)\,\,\text{as}\,\,\Re(s) \to \infty\\
\)
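Concretely, the high-speed evaluation goes: move a few steps to the right, where \( \beta_\lambda + \rho_\lambda \) is accurate, then pull back with logarithms. A sketch for real \( s \) (branch choices in the complex plane need more care, and the step count m is my knob):

    \\ tet_beta(s + x0) ~ log^(m) applied to beta(s+m) + rho(s+m); choose m so
    \\ that beta(s+m) is accurate but still representable in pari-gp.
    tet_approx(s, lam, m = 3) = {
      my(z = beta(s + m, lam) + rho(s + m, lam));
      for(i = 1, m, z = log(z));
      z;
    }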
This is very important because \( \rho_\lambda = \tau_\lambda \) up to a hundred digits at about \( \Re(s) > 5 \); but \( \rho_\lambda \) is so much easier to calculate. Furthermore, I believe this method will work with Tommy's Gaussian choice, where we swap \( -\log(1+e^{-\lambda s}) \) for \( \log(A(s+1)) \), where,
\(
A(s) = \frac{1}{\sqrt{\pi}}\int_{-\infty}^s e^{-x^2}\,dx\\
\)
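\( A(s) \) is easy to evaluate in gp by direct quadrature (a sketch; equivalently \( A(s) = (1+\operatorname{erf}(s))/2 \)):

    \\ Tommy's Gaussian factor A(s), by numerical integration; the [-oo, 1]
    \\ endpoint tells intnum the integrand decays at least like exp(-|x|).
    A(s) = intnum(x = [-oo, 1], s, exp(-x^2)) / sqrt(Pi);

The swap itself would then be mechanical: in the rho-sum, replace each -log(1 + exp(-lam*(s+k))) by log(A(s + k + 1)), with beta replaced by the Gaussian analogue satisfying its own functional equation.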
And furthermore, since all of these choices are asymptotic to each other: Tommy's Gaussian method equals the \( \beta \)-method!
I'm planning to set this in stone in my next paper: construct an arbitrarily accurate asymptotic to tetration (à la \( \beta \)).
Regards, James
tl;dr
\(
\tau_\lambda(s) = -\sum_{k=0}^n \frac{\log(1+e^{-\lambda(s+k)})}{\prod_{j=1}^k\beta_\lambda(s+j)} + \mathcal{O}(1/\beta_\lambda(s+n+1))\,\,\text{as}\,\,\Re(s) \to \infty\\
\)