Alright, so the Taylor series seems to be non-divergent, but a little slow.
This is the function:
\[
\beta_{1,y-e}(0) + \tau_{1,y-e}(0) = \lim_{n\to\infty}\log_{e^{y-e}}^{\circ n}\beta(n)\\
\]
Note that this is a super function \(F(y,s) = \beta_{1,y-e}(s) + \tau_{1,y-e}(s)\) of \(e^{(y-e)z}\), but it is not normalized, so \(F(y,0) \neq 1\). This is pretty far to the right, and I think the normalization constant is about \(c(y) \approx -100\), so take that as you will. Expanding this Taylor series, you get:
Code:
Sexp_N(0)
%110 = 0.36698491117363877672721663889036427140716634489185 + 0.067038156317795688414145642928712067863356011434057*y + 0.015788949723154391584628254028311187882014825079397*y^2 + 0.0041732500488636647473346723301861502832007979678191*y^3 + 0.0011459347921234473333598611339121170532863168129544*y^4 + 0.00031476867339550127064948243443866001837616928057723*y^5 + 8.5072318247145072055698609932161009739260916044604 E-5*y^6 + 2.2374432213772923751098253741244116682509075394788 E-5*y^7 + 5.6494749928987432505628072706157652482517526794943 E-6*y^8 + 1.3380162282210934932008768596833019954652054585800 E-6*y^9 + 2.8271647067221620349026752317592141733673371218931 E-7*y^10 + 4.5782369326730956198232693226135991428483582581450 E-8*y^11 + 1.0706283639846929456872517935334284429449163012380 E-9*y^12 - 3.7209922862647638466976072026018899711570878207273 E-9*y^13 - 2.3804108668865987255850412556911991176226702318575 E-9*y^14 - 1.0990613870367885885965122338055420048430927405073 E-9*y^15 - 4.4270717917421407005060176887999302486594809719354 E-10*y^16 - 1.6420540556980167862099599963815006461282570913371 E-10*y^17 - 5.7212973138058437545888802200908122898647252789916 E-11*y^18 - 1.8822902438305898279093919114254997857358304678610 E-11*y^19 - 5.8136654099698723637835151706994919977513209208185 E-12*y^20 - 1.6508073027300082024988150585139420629910610553383 E-12*y^21 - 4.0724145093088129822078012053607880946283617238331 E-13*y^22 - 7.1251929668546838418265100442155001392788688358523 E-14*y^23 + 3.7347972919919176307193175336700342767991716271126 E-15*y^24 + 1.2540388686288691464010160495653549342804217767942 E-14*y^25 + 8.7061527463918324888969041436431183116465950496535 E-15*y^26 + 4.6821044082449354479682640122729203615785293984399 E-15*y^27 + 2.2508198893754965115440519857348572846578092049803 E-15*y^28 + 1.0147935514919394409424198459406014252451646347882 E-15*y^29 + 4.3870970047275860700939572503402763114688266758039 E-16*y^30 + 
1.8407778563643099459799220008265639288728692[+++]

This looks very promising, much better than \(b = e\), which starts to have sporadic jumps in the coefficients. It looks very regular and well behaved, such that we can expect:
\[
F(y,s)\,\,\text{is holomorphic in}\,\,y\,\,\text{and}\,\,s\\
\]
But this would be restricted to about \(y \approx 0\); it would definitely start to diverge as we grow \(y\) too much. It doesn't look too bad though, in the sense that the domain in which a super function exists should be non-trivial.
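As a quick sanity check on the Taylor data above, the truncated series can be evaluated directly with Horner's rule. A minimal Python sketch, with the first six coefficients copied from the Sexp_N(0) dump (the name `F0` is mine, not from beta.gp):

```python
# First six Taylor coefficients of F(y, 0) in y, copied from the
# Sexp_N(0) output above (truncated precision).
coeffs = [
    0.36698491117363878,
    0.067038156317795688,
    0.015788949723154392,
    0.0041732500488636647,
    0.0011459347921234473,
    0.00031476867339550127,
]

def F0(y):
    """Evaluate the truncated Taylor polynomial by Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * y + c
    return acc

print(F0(0.0))    # the constant term, ~0.3669849...
print(F0(-0.01))  # ~0.3663161..., nothing chaotic near y = 0
```

This only says the polynomial is tame for small \(y\), of course; it is not evidence about the radius of convergence.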
We can also see further evidence of this when we graph the weak Fatou set for \(y=-0.01\). The weak Fatou set is precisely where the iterates of \(\beta\) are normal, which is precisely where a holomorphic superfunction can be constructed. From the look of it, it doesn't diverge as much as I thought it would. I thought it'd be much more chaotic, but it seems relatively calm.
This is done over \(0 \le \Re(s) \le 2 \pi\) and \(0 \le \Im(s) \le \pi\), where the white is spawned from the singularities.
The white areas are the weak Julia set, the black areas are the weak Fatou set. Black is good, white is bad, lol. There can still be discontinuities/misbehaviour in the black area, but they will always be measure zero, or trivial under an area measure, so that is very good. It's mostly black. The really surprising part is in a graph I'm not posting: though you can't see it here, when \(y = -0.1\) everything is black. Which means we are getting less chaos as we get further from Shell-Thron (at least on the real line). If you do the same process from \(\eta\) and grow past \(\eta\), everything INSTANTLY becomes entirely white, because the weak Julia set is all of \(\mathbb{C}\), just like how the Julia set of \(b^z\) is all of \(\mathbb{C}\) for \(b > \eta\). Again, I think there are many more calming features to \(\eta_-\) that \(\eta\) doesn't have, and I think primarily it has to do with having four petals as opposed to two (a second order neutral fixed point, with multiplier \(-1\)). This goes hand in hand with the periodic points which appear nearby--\(b=e\) has no periodic points near \(\mathbb{R}\), and I think \(b < e^{-e}\) having said periodic points saves our ass.
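For intuition, a crude stand-in for that plot can be sketched in Python: classify points of \(0 \le \Re(s) \le 2\pi\), \(0 \le \Im(s) \le \pi\) by whether the orbit under \(z \mapsto b^z\), \(b = e^{y-e}\), stays bounded. Orbit boundedness is my simplification here--a rough proxy for normality of the \(\beta\) iterates, not what beta.gp actually computes:

```python
import cmath
import math

y = -0.01
log_b = y - math.e  # log of the base b = e^(y - e)

def escapes(s, max_iter=100, bailout=1e8):
    """Return True if the orbit of s under z -> b^z leaves |z| <= bailout.
    Boundedness is used as a crude proxy for being in the weak Fatou set."""
    z = s
    for _ in range(max_iter):
        try:
            z = cmath.exp(log_b * z)
        except OverflowError:
            return True
        if abs(z) > bailout:
            return True
    return False

# '#' ~ weak Fatou proxy (bounded orbit), '.' ~ weak Julia proxy (escape),
# over the same window 0 <= Re(s) <= 2*pi, 0 <= Im(s) <= pi as the plot.
rows = []
for i in range(10):
    im = math.pi * i / 9
    row = "".join(
        "#" if not escapes(2 * math.pi * j / 23 + 1j * im) else "."
        for j in range(24)
    )
    rows.append(row)
print("\n".join(rows))
```

This coarse ASCII map is only meant to show the mechanics of the black/white classification; the real plot tests normality of the \(\beta\) iterates, which is a strictly finer criterion.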
I'll make a large graph of this tetration, but it's very slow going to make complex graphs of these things, so I'm putting it off till later. You have to better understand the domains, and the precision needed, especially because beta.gp isn't the best program (it's definitely no fatou.gp). I'm making more plots for now.
Here are the functions:
\[
F(0,x),\,F(-0.01,x),\,F(-0.1,x)\\
\]
There is no obvious break in the sinusoid. Actually it's pretty damn calm. Now this is just the super function, again, not normalized, and fairly far out; the normalization constant is like \(c(y) \approx -100\)--where \(F(y,c(y)) = 1\). All of these graphs are made from the Taylor approximation I posted before. Nothing chaotic is happening. The highest one is \(y = 0\), then \(y = -0.01\), and then \(y = -0.1\).
Analytically continuing between each should not be a problem at all. In fact, here is a graph of \(F(y,0)\) for \(y \in (-0.5,0.5)\)--which is again done through the same Taylor approximation as above (it's definitely an analytic expansion).
This is essentially a graph over \(b \in (\eta_- - 0.5,\eta_- + 0.5)\) of a super function \(F_b(x_0)\). Everything here is analytic and works in the complex plane. But I know you said you don't like complex graphs, Bo.
I'm starting to be overwhelmed by evidence that we can construct a holomorphic Abel function:
\[
\alpha(b,z)\\
\]
for \(|b-\eta_-| < \delta\) and \(\Re(z) > K\) (up to a branch cut) such that:
\[
\begin{align}
\alpha(b,b^z) &= \alpha(b,z) + 1\\
\alpha^{-1}(b,z + 2\pi i) &= \alpha^{-1}(b,z)\\
\exp_{b}^{\circ t}(z) &= \alpha^{-1}(b,\alpha(b,z) + t)\\
\end{align}
\]
So this wouldn't allow us to pass the regular iteration through the Shell-Thron boundary. But it would allow us to pass the \(\beta\) method through the Shell-Thron boundary, though only at \(b = \eta_-\). Even the \(\beta\) method has a clear branching problem at \(\eta\), which I thought was the end of the beta method.
Now, Bo, I know you love the regular iteration method... You can always recover the regular iteration method through the beta method though. For example, with \(\sqrt{2}^z\), you can make the regular iteration about the fixed point \(2\), which is \(-2\pi i/\log\log 2\) periodic. The \(\beta\) method can make tetrations which are \(2\pi i/\lambda\) periodic, so long as \(\Re(\lambda) > - \log \log 2\). But!!!!!!! as Sheldon himself pointed out, in the limit \(\lambda \to - \log \log 2\), we converge towards the regular iteration... So in a sense, the regular iteration becomes a boundary value of the beta iteration.
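For concreteness, the regular iteration at the attracting fixed point \(2\) of \(\sqrt{2}^z\) can be checked numerically with the standard Koenigs limit (the textbook construction, not beta.gp's code; the depth `n=40` is an arbitrary choice of mine):

```python
import math

b = math.sqrt(2)
L = 2.0                  # attracting fixed point: b^2 = 2
lam = L * math.log(b)    # multiplier at the fixed point, log(2) ~ 0.693

def f(z):
    return b ** z

def alpha(z, n=40):
    """Regular-iteration Abel function via Koenigs' limit:
    alpha(z) ~ log(f^n(z) - L)/log(lam) - n  for large n."""
    for _ in range(n):
        z = f(z)
    return math.log(z - L) / math.log(lam) - n

# Abel equation: alpha(b^z) = alpha(z) + 1
z = 2.5
print(alpha(f(z)) - alpha(z))  # ~ 1.0
```

The inverse of this \(\alpha\) is \(2\pi i/\log\lambda = 2\pi i/\log\log 2\) periodic, which matches the \(-2\pi i/\log\log 2\) period quoted above (a function with period \(p\) also has period \(-p\)).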
THIS MEANS THAT WE CAN THINK OF ETA MINOR AS A SUEZ CANAL OF SORTS!!!
To construct the regular iteration (which is Kneser's iteration) about \(b = \eta_- - 0.5\), we can think of taking the beta iteration at this value but letting \(\lambda \to \infty\); this will absolutely converge. When we think of the regular iteration (which is just the regular iteration this time) about \(b = \eta_- + 0.5\), we can think of \(\lambda \to c\), which produces the regular iteration of period \(2 \pi i/c\).
I don't mean to self-aggrandize, Bo, and I hate to be that kind of person. But I think we can talk about making a passage through the Shell-Thron region about eta minor--and I think the beta method is telling us this. In making sure the regular iteration can break that boundary, maybe the beta method can help.
Also, I think this would work as a tetration which has its branch cut PRECISELY along \(b \in (-\infty,0) \cup (\eta,\infty)\). So this method will not be reconcilable with the Kneser method for \(b > \eta\); along this line we'll see our branch cut in \(b\). This might be a different kind of tetration. But it would take the regular iteration along \(b = \sqrt{2}\), and not the Kneser iteration with nonzero imaginary part (which Sheldon always hated in his program, at least, as we talked about it).
I truly believe that eta minor is going to be something like the \(\sqrt{2}\) iteration taken about the fixed point \(4\) instead of about \(2\): it'll produce something very different--while agreeing to 10 digits.
THIS IS FURTHER SUPPORTED BY RIEMANN
If I take a regular iteration at \(b > \eta\), then as I let \(b\to\eta\) we have a second order branching problem, whereupon as we continue to \(b < \eta\) we have nontrivial imaginary part. Thing is, an exact complement creates another solution for \(b < \eta\). Therefore \(0 < b < \eta\) is a branch cut in this solution--which is precisely what Paulsen describes.
If we take the regular iteration for \( \eta_- < b < \eta\), then we have a holomorphic solution for all \(b\) in the Shell-Thron region. I conjecture that, to extend this further in the complex plane, the branch cut at \(\eta\) happens exactly along \((\eta,\infty)\), in the same manner as above. And since \(b = \eta\) is a second order branch point, there are exactly two branches.
Either
\[
\begin{align}
\text{tet}_b\,\,\text{is holomorphic for}\,\, \infty > b &> \eta\\
\text{Or,}\,\,\text{tet}_b\,\,\text{is holomorphic for}\,\,0 < b&<\eta\\
\end{align}
\]
I really think it's that simple... So any attempt at analytically continuing \(\text{tet}_b\) must be one of these functions.
I think the beta method is perfect for \(b < \eta\), absolutely horrible for \(b > \eta\). And I think Paulsen has effectively described the branch for \(b > \eta\).
EDIT!
We'd be mapping the boundary of the beta method, which is holomorphic in a neighborhood of \(b = \eta_-\). And the boundary of the beta method is just the regular iteration...

