(10/28/2021, 07:18 PM)Ember Edison Wrote: @sheldonison So we can rebuild Kneser's tetration using the beta method? In any case, the numerical approximation algorithm of the beta method seems to be much better.
@JmsNxn You missed the super-root. The robustness of the beta method seems well suited to generating super-roots.
Absolutely no idea how to generate the super root. I'd have to think on that...
And, Sheldon and I disagree at a certain point. He claims that \(\text{tet}_{\lambda}(z)\) is nowhere analytic on \(\mathbb{R}\), and I agree. What I do not agree with is that it's nowhere holomorphic, which he believes is probably the case. And secondly, the Kneser tetration would only come about as \(\lambda \to 0\), and that's hell to calculate, pretty much impossible. So I have to create a better, more rigorous, well-thought-out argument, because numerical calculations are out of reach here. But, as I said, as \(\lambda \to 0\) we get faster and faster convergence towards the fixed points \(L, L^*\) of \(\exp\), and I think it has a chance at being holomorphic. If it is, it's Kneser's tetration, per Paulsen & Cowgill's uniqueness result.
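As a quick sanity check on the \(L, L^*\) behaviour (nothing beta-method specific, just the underlying fixed-point dynamics): \(L\) is the fixed point of \(e^z\) in the upper half-plane, and it is attracting for the principal branch of \(\log\), so naively iterating \(\log\) from a generic complex seed already lands on it. A minimal sketch:

```python
import cmath

# Iterate the principal branch of log; its attracting fixed point in the
# upper half-plane is L, the fixed point of exp with exp(L) = L.
# The multiplier is 1/|L| ~ 0.73 < 1, so convergence is geometric.
z = 1 + 1j
for _ in range(200):
    z = cmath.log(z)

print(z)                      # approximately 0.3181315 + 1.3372357j
print(abs(cmath.exp(z) - z))  # residual of the fixed-point equation
```

The conjugate seed \(1 - 1j\) converges to \(L^*\) the same way, since the principal log commutes with conjugation off the branch cut.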
We see this in the Shell-Thron region too, where as \(\lambda \to 0\) we go faster and faster to the attracting fixed point of \(\log\). For base \(\sqrt{2}\), each iteration shoots us toward 4 incredibly fast, and the smaller \(\lambda\) is, the faster; we then have to push the orbit forward by renormalizing at each step.
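The base-\(\sqrt{2}\) claim is easy to check numerically: 4 is the attracting fixed point of \(\log_{\sqrt{2}}\) (the multiplier there is \(1/(2\ln 2) \approx 0.72 < 1\)), so the iterated logarithm converges to it geometrically. A minimal sketch:

```python
import math

def log_sqrt2(x):
    # log base sqrt(2): the inverse of x -> sqrt(2)**x
    return 2.0 * math.log(x) / math.log(2.0)

x = 10.0
for _ in range(100):
    x = log_sqrt2(x)

print(x)  # converges to the fixed point 4.0
```

The other fixed point, 2, has multiplier \(1/\ln 2 \approx 1.44 > 1\) for \(\log_{\sqrt{2}}\), so the log iteration runs away from it; the roles flip for the exponential direction.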
I think something similar will happen for base \(e\) as \(\lambda \to 0\).
In such a sense, we're looking at:
\[
\begin{align}
F_n(s) &= \log^{\circ n} \beta_{\lambda_n}(s+n + p_n)\\
F_n(0) &= 1\\
\lambda_n &= \mathcal{O}(n^{-\delta})\\
\end{align}
\]
And we're looking to show that \(F_n \to \text{tet}\), where at worst \(p_n = \mathcal{O}(n^{1-\delta})\) for \(1 \ge \delta > 0\). But it's still up in the air at the moment, and Sheldon has some great numerical counter-evidence!
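For anyone who wants to follow along numerically, here is a rough real-line sketch of the construction above. It assumes the beta function satisfies the functional equation \(\beta_\lambda(s+1) = e^{\beta_\lambda(s)} / (1+e^{-\lambda s})\) with \(\beta_\lambda(s) \to 0\) as \(s \to -\infty\), as set up earlier in this thread; the seeding depth and helper names are mine, and the \(p_n\) normalization (a one-dimensional root-find enforcing \(F_n(0) = 1\)) is omitted.

```python
import math

def beta(s, lam, depth=200):
    # Approximate beta_lambda(s): seed b ~ 0 far to the left, where the
    # denominator 1 + exp(-lam*t) is huge, then roll the (assumed)
    # functional equation  beta(t+1) = exp(beta(t)) / (1 + exp(-lam*t))
    # forward `depth` steps. Each step contracts, so the seed washes out.
    t = s - depth
    b = 0.0
    for _ in range(depth):
        b = math.exp(b) / (1.0 + math.exp(-lam * t))
        t += 1.0
    return b

lam = 0.5

# Sanity check: the functional equation holds to numerical precision.
s = -2.0
lhs = beta(s + 1.0, lam)
rhs = math.exp(beta(s, lam)) / (1.0 + math.exp(-lam * s))
print(abs(lhs - rhs))

# Unnormalized F_2(0) with p_2 = 0: pull two logs off beta(0 + 2).
# The thread's p_n would shift the argument so that F_n(0) = 1.
val = beta(2.0, lam)
for _ in range(2):
    val = math.log(val)
print(val)
```

This only shows the skeleton on the real line; the actual argument needs \(\log^{\circ n}\) taken along the right branches in the complex plane, with \(\lambda_n \to 0\) coupled to \(n\) as in the display above.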

