Iterating at eta minor - Printable Version

+- Tetration Forum (https://tetrationforum.org)
+-- Forum: Tetration and Related Topics (https://tetrationforum.org/forumdisplay.php?fid=1)
+--- Forum: Mathematical and General Discussion (https://tetrationforum.org/forumdisplay.php?fid=3)
+--- Thread: Iterating at eta minor (/showthread.php?tid=1601)
RE: Iterating at eta minor - bo198214 - 07/25/2022

And regarding the photo bombing in N=3 - the fixed point structure of \(f^{\circ 2}\) already seems to be much more intricate than that of \(f(z)=b^z\). And I thought this was complicated already!

RE: Iterating at eta minor - JmsNxn - 07/25/2022

Well, the periodic points can actually be seen in my graph! If you look at the sinusoidal curve, you're basically just finding the points \(\text{tet}_{\eta_-}(z_0)\) such that \(\text{tet}_{\eta_-}(z_0 + 2)\) takes the same value. And again, since this is very sinusoidal, you're bound to get many of these points, of arbitrary period \(n\). Additionally, since the sinusoid is small and restricted about \(1/e\), we can expect the periodic points to sort of "coalesce" about the fixed point \(1/e\).

Iterating about periodic points is pretty standard for the repelling case (a bit more complicated for neutral). The thing is, the local iterations can never contain the periodic points. If you define an Abel function \(\alpha\), then \(\alpha(f^{\circ 2}(z_0)) = \alpha(z_0) + 2\); but \(f^{\circ 2}(z_0) = z_0\), so \(\alpha(z_0) = \alpha(z_0) + 2\), and therefore \(z_0\) must be a pole of \(\alpha\). You can see this in my graph too: the function is not injective, so no inverse function exists on the real line, and it blows up precisely at these points.

If you take the Schröder route, then we get the same problem as with root two about the attracting or repelling fixed point. You can do an iteration about \(z_0\), but then it disagrees with an iteration about any other periodic/repelling point. So for example, if I take:

\[
f^{\circ 2}(z) = z_0 + \lambda^2(z-z_0) + O(z-z_0)^2\\
\]

for \(f(z) = \eta_-^z\), then we get a Schröder function (we know that \(|\lambda| \neq 0,1\)),

\[
\Psi(f^{\circ 2}(z)) = \lambda^2 \Psi(z)\\
\]

Then there exist exactly TWO square root functions:

\[
g(z) = \Psi^{-1}(\pm\lambda\Psi(z))\\
\]

in a neighborhood of \(z_0\). NEITHER OF THESE SQUARE ROOT FUNCTIONS IS \(\eta_-^z\).
That's simply because \(\eta_-^z\) doesn't have a fixed point at \(z_0\). This goes for any local iteration about periodic points: you cannot, for lack of a better word, have both \(z_0\) and \(f(z_0)\) in one iteration. To over-explain, if \(f^{\circ s}(z) : \mathcal{A} \to \mathcal{A}\) for some domain \(\mathcal{A}\) such that \(z_0,z_1 \in \mathcal{A}\), then:

\[
f^{\circ 2k}(z_0) = z_0\,\,\,\text{and}\,\,f^{\circ 2k+1}(z_0) = f(z_0)\\
\]

This implies two damning things, which is pretty much exactly what happens in the square root two case. We have an iteration about \(2\), and we have an iteration about \(4\); we can't have an iteration about both at the same time. If we did, we'd be crossing from an attracting fixed point's immediate basin into the Julia set... And no such iteration can exist. In this case either \(\log_{\eta_-}\) is attracting at both \(z_0,z_1\) or \(f\) is, and therefore there's a Julia set between them. So in this case, it would imply that the Julia set \(\mathcal{J}\) of \(f\) intersects \(\mathcal{A}\). Now we may get lucky and \(\mathcal{A}\) is all Julia set, but that still poses a very big problem--the Julia set of \(\log_{\eta_-}\) separates the basin about both, and will therefore produce a discontinuity as we move \(z\) from \(z_0 \to z_1\).

\(f^{\circ s}\) can only be holomorphic on \(\mathcal{A}\) excluding the periodic points. This is, again, because the Abel function is undefined there in \(z\). We'd end up with \(f^{\circ s + \alpha(z)}(1)\), which has a singularity at these periodic points. Essentially, LOCAL iteration is precisely that: local to a fixed point/periodic point. It cannot include other fixed points/periodic points without producing a discontinuity. Exactly like with \(\sqrt{2}\) about \(2\) and \(4\).

AFAIK there is no way to avoid this, even through Kneser. Kneser (for example base \(e\)) does not produce an iteration about ANY PERIODIC OR FIXED POINT. It is singular at every one.
(I mean this as: \(\exp^{\circ s}(z) = \text{tet}_K(s+\text{slog}_K(z))\) isn't holomorphic at any fixed/periodic point of \(\exp\).) Though it does have a regular-looking structure at \(L^{\pm}\), but on different domains depending on \(s\), so the point still stands. An iteration can contain 1 fixed point or 0 fixed points, never more.

So, in doing a Kneser iteration as you're proposing, we'd be iterating \(f^{\circ 2}(z)\) about the conjugate fixed points. This could absolutely produce an iteration \(F(s)\) which would be holomorphic in the upper/lower half plane. The trouble...? \(F(1/2 + F^{-1}(z)) \neq f(z)\). You'd be iterating \(f^{\circ 2}\) and you'd essentially lose all information about \(f\). Additionally, it wouldn't make sense because \(F\) would send to the periodic points in the upper/lower half planes; these are not fixed points of \(\log_{\eta_-}(z)\), so asymptotically it's a bit meaningless. You would have to do a very, very weird crescent iteration, that I can't even imagine; it would be very non-standard. And it would ultimately not be subject to the \(\lim_{\Im(z) \to \pm \infty}\) condition we're used to seeing, as this would mean we're iterating \(f^{\circ 2}\).

RE: Iterating at eta minor - JmsNxn - 07/25/2022

So I tried something a little wacky, I'll see if some interesting graphs spawn. I'm trying to observe how:

\[
\beta_{y-e,1} = \Omega_{j=1}^\infty \frac{e^{(y-e)z}}{e^{j-s} + 1}\,\bullet z\\
\]

behaves under iterated logarithms. This is the \(2 \pi i\) periodic function such that:

\[
e^{(y-e)\beta_{y-e,1}(s)}/(1+e^{-s}) = \beta_{y-e,1}(s+1)\\
\]

This object is holomorphic everywhere in \(s\) excepting singularities at \(j \pm (2k+1) \pi i\) for \(j \ge 1\) and \(k \in \mathbb{Z}\), and it is entire in \(y\). I know this object converges under iterated logarithms for \(y \ge 0\), but I haven't really looked at what happens if we move \(y\) more to the left and leave Shell-Thron.
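The functional equation above can be checked numerically by starting the recursion far to the left, where \(\beta \approx 0\), and stepping forward. A minimal Python sketch (Python rather than pari-gp, purely for a self-contained check; here \(y = 0\), so \(\mu = y - e = -e\)):

```python
import math

def beta(s, mu=-math.e, n=100):
    """Approximate the beta function satisfying
       exp(mu*beta(s)) / (1 + exp(-s)) = beta(s+1),
    by running the recursion forward from beta(s-n) ~ 0."""
    x = 0.0
    t = s - n
    for _ in range(n):
        # one step of the recursion: x becomes beta(t+1)
        x = math.exp(mu * x) / (1.0 + math.exp(-t))
        t += 1.0
    return x

# Check the functional equation at s = 0 (y = 0, so mu = y - e = -e):
lhs = math.exp(-math.e * beta(0.0)) / (1.0 + math.exp(-0.0))
rhs = beta(1.0)
print(lhs, rhs)  # these should agree closely
```

The early factors \(1/(1+e^{-t})\) are astronomically small for \(t \ll 0\), so the dependence on the starting value washes out immediately and the limit is well approximated.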
If it has domains of normality near zero, it could mean that we may be able to massage a nontrivial solution in the neighborhood of \(b = \eta_-\). I'll see what happens.

RE: Iterating at eta minor - JmsNxn - 07/26/2022

Alright, so the Taylor series seem to be non-divergent, but a little slow. This is the function:

\[
\beta_{1,y-e}(0) + \tau_{1,y-e}(0) = \lim_{n\to\infty}\log_{e^{y-e}}^{\circ n}\beta(n)\\
\]

Note that this is a super function \(F(y,s) = \beta_{1,y-e}(s) + \tau_{1,y-e}(s)\) of \(e^{(y-e)z}\), but it is not normalized. So \(F(y,0) \neq 1\). This is pretty far to the right, and I think the normalization constant \(c(y)\) is about \(\approx - 100\), so take that as you will. Then expanding this Taylor series, you get:

Code:
Sexp_N(0)

Which looks very promising, much better than \(b = e\), which starts to have sporadic jumps in the coefficients. It looks very regular and well behaved. Such that we can expect:

\[
F(y,s)\,\,\text{is holomorphic in}\,\,y\,\,\text{and}\,\,s\\
\]

But this would be restrictive about \(y \approx 0\); it would definitely start to diverge as we grow \(y\) too much, but it doesn't look too bad, in the sense that the domain in which a super function exists should be non-trivial. We can also see further evidence of this when we graph the weak Fatou set of \(y=-0.01\). The weak Fatou set is precisely where the iterates of \(\beta\) are normal. This is precisely where a holomorphic superfunction can be constructed. It doesn't diverge as much as I thought it would. I thought it'd be much more chaotic, but it seems relatively calm. This is done over \(0 \le \Re(s) \le 2 \pi\) and \(0 \le \Im(s) \le \pi\), where the white is spawned from the singularities. The white areas are the weak Julia set, the black areas are the weak Fatou set. Black is good, white is bad, lol.
There can still be discontinuities/misbehaviour in the black area, but they will always be measure zero, or trivial under an area measure, so that is very good. It's mostly black. The really surprising part is in a graph I'm not posting. Though you can't see it here: when \(y = -0.1\), everything is black. Which means we are getting less chaos as we get further from Shell-Thron (at least on the real line). But not too bad. If you do the same process from \(\eta\) and grow past \(\eta\), everything INSTANTLY becomes entirely white, because the weak Julia set is all of \(\mathbb{C}\), just like how the Julia set of \(b^z\) is all of \(\mathbb{C}\) for \(b > \eta\).

Again, I think there are many more calming features to \(\eta_-\) that \(\eta\) doesn't have. And I think primarily it has to do with having four petals as opposed to two (second order neutral fixed point (multiplier = \(-1\))). This goes hand in hand with the periodic points which appear nearby--\(b=e\) has no periodic points near \(\mathbb{R}\), and I think \(b < e^{-e}\) having said periodic points saves our ass.

I'll make a large graph of this tetration, but it's very slow going to complex graph these things, so I'm putting it off til later. You have to better understand the domains, and the precision needed, especially because beta.gp isn't the best program (it's definitely no fatou.gp). I'm making more plots for now. Here are the functions:

\[
F(0,x),\,F(-0.01,x),\,F(-0.1,x)\\
\]

There is no obvious break in the sinusoid. Actually it's pretty damn calm. Now this is just the super function, again, not normalized, and fairly far out; the normalization constant is like \(c(y) \approx -100\)--where \(F(y,c(y)) = 1\). All of these graphs are made from the Taylor approximation I posted before. Nothing chaotic is happening. The highest one is \(y = 0\), then \(y = -0.01\), and then \(y = -0.1\). Analytically continuing between each should not be a problem at all.
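The "multiplier \(-1\)" claim is easy to verify: with \(\eta_- = e^{-e}\), the map \(f(z) = \eta_-^z\) fixes \(1/e\) with \(f'(1/e) = -1\), so \(f^{\circ 2}\) is parabolic there. A quick Python check:

```python
import math

# eta_minor = e^(-e): the base whose real fixed point 1/e has multiplier -1.
eta_minus = math.exp(-math.e)

def f(z):
    return eta_minus ** z

p = 1.0 / math.e

# p is a fixed point of f: eta_minus^(1/e) = e^(-e/e) = 1/e
print(abs(f(p) - p))   # ~0

# multiplier f'(p) = ln(eta_minus) * eta_minus^p = (-e) * (1/e) = -1
fprime = math.log(eta_minus) * eta_minus ** p
print(fprime)          # ~ -1.0
```

Hence \((f\circ f)'(p) = f'(p)^2 = 1\): the second iterate is parabolic at \(1/e\), which is where the four petals for \(f\) come from.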
In fact, here is a graph of \(F(y,0)\) for \(y \in (-0.5,0.5)\)--which is again done through the same Taylor approximation as above (it's definitely an analytic expansion). This is essentially a graph over \(b \in (\eta_- - 0.5,\,\eta_- + 0.5)\) of a super function \(F_b(x_0)\). Everything here is analytic and works in the complex plane. But I know you said you don't like complex graphs, Bo.

I'm starting to be overwhelmed by evidence that we can construct a holomorphic Abel function:

\[
\alpha(b,z)\\
\]

for \(|b-\eta_-| < \delta\) and \(\Re(z) > K \) (up to a branch cut) such that:

\[
\begin{align}
\alpha(b,b^z) &= \alpha(b,z) + 1\\
\alpha^{-1}(b,z + 2\pi i) &= \alpha^{-1}(b,z)\\
\exp_{b}^{\circ t}(z) &= \alpha^{-1}(b,\alpha(b,z) + t)\\
\end{align}
\]

So this wouldn't allow us to pass the regular iteration through the Shell-Thron boundary. But it would allow us to pass the \(\beta\) method through the Shell-Thron boundary--though only at \(b = \eta_-\). Even the \(\beta\) method has a clear branching problem at \(\eta\), which I thought was the end of the beta method.

Now, Bo, I know you love the regular iteration method... You can always recover the regular iteration method through the beta method though. For example: with \(\sqrt{2}^z\), you can make the regular iteration about \(2\), which is \(-2\pi i/\log\log 2\) periodic. The \(\beta\) method can make tetrations holomorphic and \(2\pi i/\lambda\) periodic, so long as \(\Re(\lambda) > - \log \log 2\). But!!!!!!! as Sheldon himself pointed out, limiting \(\lambda \to - \log \log 2\), we converge towards the regular iteration... So in a sense, the regular iteration becomes a boundary value of the beta iteration.

THIS MEANS THAT WE CAN THINK OF ETA MINOR AS A SUEZ CANAL OF SORTS!!!

To construct the regular iteration (which is Kneser's iteration) about \(b = \eta_- - 0.5\), we can think of taking the beta iteration at this value but letting \(\lambda \to \infty\); this will absolutely converge.
When we think of the regular iteration (which is just the regular iteration this time) about \(b = \eta_- + 0.5\), we can think of \(\lambda \to c\), which produces the regular iteration period \(2 \pi i/c\). I don't mean to self-aggrandize, Bo. And I hate to be that kind of person. But I think we can talk about making a passage in the Shell-Thron region about eta minor--and I think the beta method is telling us this. In that, if we want the regular iteration to break that boundary--maybe the beta method can help.

Also, I think this would work as a tetration which has its branch cut PRECISELY along \(b \in (-\infty,0) \cup (\eta,\infty)\). So this method will not be reconcilable with the Kneser method for \(b > \eta\); along this line we'll see our branch cut in \(b\). This might be a different kind of tetration. But it would take the regular iteration along \(b = \sqrt{2}\), and not take the Kneser iteration with non-zero imaginary part (which Sheldon always hated in his program, at least, as we talked about it). I truly believe that eta minor is going to be something like the root 2 iteration about \(4\), instead of about \(2\), and it'll produce something very different--while being 10 digits the same.

THIS IS FURTHER SUPPORTED BY RIEMANN

If I take a regular iteration at \(b > \eta\), then as I let \(b\to\eta\) we have a second order branching problem, whereupon as we continue to \(b < \eta\) we have nontrivial imaginary part. Thing is, an exact complement creates another solution for \(b < \eta\). Therefore \(0 < b < \eta\) is a branch cut in this solution--which is precisely what Paulsen describes. If we take the regular iteration for \( \eta_- < b < \eta\), then we have a holomorphic solution for all \(b\) in the Shell-Thron region. I conjecture that, to extend this further in the complex plane, the branch cut at \(\eta\) happens exactly along \((\eta,\infty)\) in the same manner as above. And! since \(b = \eta\) is a second order branch point, there are exactly two branches.
Either

\[
\begin{align}
\text{tet}_b\,\,\text{is holomorphic for}\,\, \infty > b &> \eta\\
\text{Or,}\,\,\text{tet}_b\,\,\text{is holomorphic for}\,\,0 < b&<\eta\\
\end{align}
\]

I really think it's that simple... So that any attempt at analytically continuing \(\text{tet}_b\) must be one of these functions. I think the beta method is perfect for \(b < \eta\), absolutely horrible for \(b > \eta\). And I think Paulsen has effectively described the branch for \(b > \eta\).

EDIT! We'd be mapping the boundary of the beta method, which is holomorphic in a neighborhood of \(b = \eta_-\). And the boundary of the beta method is just the regular iteration...

RE: Iterating at eta minor - bo198214 - 07/26/2022

(07/25/2022, 08:19 PM)JmsNxn Wrote: Iterating about periodic points is pretty standard for the repelling case, (a bit more complicated with neutral). The thing is, that the local iterations can never contain the periodic points.

OMG, this is really a very long explanation (and cumbersome to read, because it was not clear to me what "local iteration" means - you mean regular iteration, right? - also you spoke about the case \(b=\eta_-\) while I spoke about perturbed \(b\) and the resulting fixed points coming into existence - which in the end makes no difference though, just making it more confusing) for:

For a regular iteration at \(z_0\) we have \(g^{\circ t}(z_0)=z_0\), and for the crescent iteration we have at least \(\lim_{z\to z_i}g^{\circ t}(z)=z_i,\ i=1,2\); therefore iterating \(g=f^{\circ 2}\) at non-fixed/periodic point(s) of \(f\) would give \(z_0=g^{\frac{1}{2}}(z_0)\neq f(z_0)\).

I never considered periodic points, so I was not aware of this drawback. Btw, it's not *a* "crescent iteration", it's *the* "crescent iteration", as in perturbed Fatou coordinates. Not some method that works somehow on some fundamental region, but a clearly defined (not numerically though) and uniqueness-proven iteration.
If I understood correctly, your beta method is also approaching the crescent iteration (like Sheldon's, Kouznetsov's and Paulsen's)? It's just a different way of computing it?

Quote: Well the periodic points can actually be seen in my graph!

Several issues trouble me here. I thought there would be no real half-iterate of \(\eta_-^z\). So how can it be that the super function is real valued? And second: you mean not an exact periodic point and hence not an exact period, but just influences of periodic points on your graph? (Because there are no real periodic points of \(\eta_-^z\).)

RE: Iterating at eta minor - JmsNxn - 07/28/2022

Okay, a couple of things to clear up. I probably should've been clearer. By local iteration, yes, I mean the regular iteration, but I sort of mean, only to be considered locally. So when I say "local iteration" I'm trying to emphasize we're only considering \(|z-p| < \delta\) small, where \(p\) is the fixed point. But yes, the ONLY local iteration is the regular iteration, so we're on the same page. I'll stick to saying regular iteration locally--where I mean we're using the regular iteration, but only care about it locally about the fixed point. My apologies, I tend to do poorly at adopting vernacular.

Sorry, yes, THE crescent iteration! I'm well aware it's the only one. That's the conjecture! The beta iteration looks like it converges to the crescent iteration (or, for cases like \(\sqrt{2}\), it converges to the regular iteration; in this case \(\lambda \to -\log\log(2)\)). I can show this, but I can't show it for \(b = e\). But I can motivate it. It's largely based on the fact that as you let \(\lambda \to 0\) (which moves the period of the iteration from \(2 \pi i/\lambda \to \infty\)) the iteration looks more and more like \(L^+\) in the upper half plane, and \(L^-\) in the lower half plane, and the branching seems to calm down A LOT.
And due to this, the Taylor series coefficients start to calm down A LOT--and they start to no longer have a trivial area of convergence (which is what normally happens). But still, very conjectural. This is for the \(b = e\) case, but a similar analysis works for \(b > \eta\). It's also a horrible numerical method though, so I've largely abandoned proving it, or even further justifying it through numerical evidence. Trying to limit \(\lambda \to 0\) is a god-awful exercise in futility.

There can certainly be a real superfunction of \(\eta_-^z\); but it will not be the regular iteration, as it will not be local (you cannot find a function \(f^{\circ t}(z)\) which is real valued and holomorphic for \(z \approx 1/e\)). So there is no real valued half iterate here--actually, I wouldn't be surprised if you are correct that there is no real valued half-iterate at all. But this is largely because the function is not injective for the most part, and, as you pointed out, has no fundamental domain. And to make a half iterate, you would need an Abel function. It would be like trying to write \(\sin(1/2 + \arcsin(x))\), which is a very flawed-looking half iterate of \(\sin(1+\arcsin(x))\). I doubt you'd even be able to find a domain on which this could be called an iterate.

The beta method is good at one thing, and one thing only: making super functions that are real valued with arbitrary period \(2 \pi i/\lambda\), lol. But I, for the life of me, can't think of why, a priori, there can't be a super function that is real valued. Naturally, if you wanted it to be injective, you'd have to have it be complex valued, just like how \((-1)^x\) is complex valued. But also remember that \(\eta_-\) iterated using the beta method is less than desirable (unless we let \(\lambda \to 0\), which is hard to do numerically); it tends to have a good amount of extraneous branches (that look like little fractals), but still it is analytic on the real line.
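For contrast with \(\eta_-\): at \(b = \sqrt{2}\) the regular iteration about the attracting fixed point \(2\) really is real valued locally, and the half iterate \(\Psi^{-1}(\sqrt{\lambda}\,\Psi(z))\) exists on the real line. A Python sketch of that construction (Koenigs limits in double precision, so only modest accuracy is claimed):

```python
import math

# Regular (Schroder/Koenigs) half-iterate of f(z) = sqrt(2)^z at its
# attracting fixed point P = 2, with multiplier LAM = f'(2) = ln(2) ~ 0.693.
B = math.sqrt(2.0)
P = 2.0
LAM = math.log(2.0)   # f'(2) = ln(sqrt(2)) * sqrt(2)**2 = ln(2)

def f(z):
    return B ** z

def f_inv(w):
    return math.log(w) / math.log(B)

def koenigs(z, n=48):
    # Psi(z) = lim LAM^{-n} (f^n(z) - P)
    for _ in range(n):
        z = f(z)
    return (z - P) / LAM ** n

def koenigs_inv(w, n=48):
    # Psi^{-1}(w) = lim f^{-n}(P + LAM^n w)
    z = P + LAM ** n * w
    for _ in range(n):
        z = f_inv(z)
    return z

def half(z):
    # f^{1/2}(z) = Psi^{-1}(sqrt(LAM) * Psi(z)) -- real valued near P = 2
    return koenigs_inv(math.sqrt(LAM) * koenigs(z))

hx = half(1.0)
print(hx)                  # a real number between 1 and sqrt(2)
print(half(hx), f(1.0))    # half(half(1)) ~ f(1) = sqrt(2)
```

This is exactly the construction that fails to be real valued at \(\eta_-\), where the multiplier is \(-1\) rather than a real number in \((0,1)\).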
I've run Sheldon's test for analyticity on a large sample size, with a bunch of points in Shell-Thron, and it all works out the same. Sheldon concurred for the most part--and he helped carve out much of the limits to how we can move \(\lambda\), and for what \(\lambda\) the super functions converge. If you're curious, the superfunction converges for \(\Re(\lambda) > -\log |\log(b)p|\), where \(b^p = p\) is the primary fixed point and \(b\) is within Shell-Thron. Outside of Shell-Thron it becomes more nefarious and difficult--and I'm still not clear exactly how it works.

OH yes! Sorry, the periodic points aren't real valued; they are really close to the real line, but they have small imaginary part. I guess I screwed that explanation up. I meant that since this function is sinusoidal and almost looks like it has period \(\approx 2\), there's bound to be a periodic point somewhere in the neighborhood. My apologies. You're also bound to find \(N\)'th order ones.

EDIT: I guess I should also add that if you choose the standard Abel iteration, you should have no critical points near \(z \approx p\) in any of the petals. Thereby, the superfunction also has no critical points. You can note instantly: sure, we are real valued, but there are critical points everywhere in the above beta iteration of eta minor. So again, trying to invert and grab an Abel function is near impossible; we should have singularities like crazy in the Abel function. But again, this only says that there's no fractional iteration \(f^{\circ t}\) near the fixed point \(p\) that is real valued (no real valued regular iteration locally). So yes, no half iterate. This doesn't mean that there's no super function though.

Think of it like this. Regular iteration--done through the Schröder map--is the only holomorphic iteration \(f^{\circ t}(z)\) about the fixed point \(z \approx p\).
Crescent iteration--done through Kneser's Riemann mapping--is a holomorphic iteration \(f^{\circ t}(z)\) where \(z\) ranges over a domain with boundary value \(p\).

Beta iteration--done through infinite compositions--is only a holomorphic iteration \(f^{\circ t}(z_0)\), and there isn't necessarily an Abel function (sometimes there is, sometimes there isn't). Sure, you can locally invert this away from critical points, but no one ever said there aren't a hell of a lot of critical points.

RE: Iterating at eta minor - JmsNxn - 07/28/2022

Hmmmmmmmmmm. I think we can get a little tripped up if we think of regular iteration, so I thought I'd do a run-through of the non-neutral case to clarify things, Bo.

The regular iteration for \(b \approx \eta_- + 0.1\) is not real valued, I presume we can agree. It has a fixed point \(p\) and a multiplier \(-1 < c < 0\). Therefore the regular iteration will be

\[
f^{\circ t}(z) = p + c^t(z-p) + O(z-p)^2\\
\]

Now, the beta tetration can construct this. We do this by setting \(2 \pi i/\lambda = -2 \pi i/\log c\). So in a sense, if we match the period of the beta iteration to the period of the regular iteration, then the beta iteration is just the regular iteration. I like to phrase this as: tetration is unique up to its period. Any two tetrations \(F_1,F_2\) which have the same period must be the same. So since you can create a beta iteration with this period, it must be the regular iteration. This is a pretty deep theorem, which I had to brush paths with elliptic function theory to properly describe.

BUT! If I keep \(\lambda > 0\) and real valued--for example, if I set \(\lambda = 1\)--then we can make the beta super function which is now real valued. This will make a similar graph as with \(\eta_-\), but now it will be within the interior of Shell-Thron.
So, the beta iteration can make a real valued iteration with a purely imaginary period; as opposed to the complex period of the regular iteration--which makes the regular iteration complex valued.

EDIT: Also, if you want to think of this in terms of \(\theta\) mappings, we can think of it like this. The regular iteration looks like:

\[
f^{\circ t}(z) = \Psi^{-1}(c^{t}\Psi(z))\\
\]

Then what we want is a theta mapping with \(\theta(t+1) = \theta(t)\) but additionally:

\[
\theta(t + 2 \pi i/\lambda) = \theta(t)- 2\pi i/\lambda\\
\]

The theta I used is holomorphic almost everywhere on \(\mathbb{C}\), so there are singularities/branch cuts... so remember that. Then the beta iteration looks like:

\[
f_\beta^{\circ t}(z) = \Psi^{-1}(c^{t + \theta(t)}\Psi(z))\\
\]

So that now it is \(2 \pi i/\lambda\) periodic. Obviously this is only a super function; it is not a fractional iteration in any sense, because it doesn't satisfy \(f_\beta^{\circ t_0}(f_\beta^{\circ t_1}(z)) = f_\beta^{\circ (t_0+t_1)}(z)\)--it only satisfies the super function identity. This plays pretty close to elliptic functions, and you can actually convert \(\theta\) into something that looks nearly like an elliptic function. In this sense, we can also think of it as: instead of perturbing the base, or perturbing the multiplier--we are perturbing the period.

EDIT: For the neutral case, the theta mapping has to be looked at as a mapping on the inverse Abel function. So take \(b = \eta\), for example, which has a holomorphic super function for \(\mathbb{C}\setminus(-\infty,-2]\). Then we find an almost elliptic mapping \(g\) such that:

\[
\begin{align}
g(z+1) &= g(z)+1\\
g(z+2\pi i/\lambda) &= g(z)\\
\end{align}
\]

Then taking \(\eta \uparrow \uparrow g(z)\), we have the beta iteration at \(\eta\). This is valid so long as \(\Re \lambda >0\). TO ALSO CLARIFY! THERE IS NO MEROMORPHIC FUNCTION \(g\) WHICH SATISFIES THESE EQUATIONS.
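The two properties just stated--keeping the superfunction identity while losing the group law--can be seen in a toy where \(\Psi\) is trivial: take \(f(z) = cz\), so \(\Psi(z) = z\) and the regular iteration is exactly \(c^t z\). The multiplier \(c\) and the \(1\)-periodic \(\theta\) below are hypothetical stand-ins:

```python
import math

# Toy where the Schroder coordinate is trivial: f(z) = c*z, Psi(z) = z.
# Regular iteration: f^t(z) = c^t z.  Theta-deformed version from the post:
# f_beta^t(z) = c^(t + theta(t)) z, with theta 1-periodic.
c = 0.5 + 0.1j          # hypothetical multiplier, |c| not 0 or 1
eps = 0.05

def theta(t):
    return eps * math.sin(2.0 * math.pi * t)   # theta(t+1) = theta(t)

def f(z):
    return c * z

def f_beta(t, z):
    return (c ** (t + theta(t))) * z

z0 = 1.0
# Superfunction identity f_beta^{t+1} = f ∘ f_beta^{t} still holds,
# because theta(t+1) = theta(t):
print(abs(f_beta(1.3, z0) - f(f_beta(0.3, z0))))            # ~0

# ...but the group law f_beta^{t0} ∘ f_beta^{t1} = f_beta^{t0+t1} fails,
# since theta(t0) + theta(t1) != theta(t0 + t1) in general:
print(abs(f_beta(0.3, f_beta(0.4, z0)) - f_beta(0.7, z0)))  # clearly nonzero
```

So the deformation costs nothing at integer steps but destroys fractional composability, which is exactly the "only a super function" point above.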
BUT, big but, we can have a \(g\) which is holomorphic on \(\mathbb{C}\setminus B\) such that:

\[
\iint_{B}\, dx\,dy = \int_{B} \,dA = 0\\
\]

So that \(g\) is holomorphic almost everywhere with respect to the Lebesgue area measure--the Lebesgue measure on \(\mathbb{R}^2\). Equivalently, the boundary of \(\mathbb{C}\setminus B\) is \(B\). Fuck! This is hard to get everything out, because I have to summarize so much from my report on the beta method. But we do not get a nice tetration \(F\) for \(\eta_-\) such that \(F : (-2,\infty) \to (-\infty, 1/e)\). We get a far stranger construction. But in the complex plane it still equals tetration almost everywhere.

RE: Iterating at eta minor - JmsNxn - 07/29/2022

Forgive me, Bo--I tend to forget exact details. But you used the word tetrational for a super function \(F(z)\) such that \(b^{F(z)} = F(z+1)\) with \(F(0) = 1\). It seems that the iteration I have for eta minor, and nearby fixed points, is NOT TETRATIONAL. To the extent that there is no normalization constant \(z_0\) to make \(g(z) = F(z+z_0)\) s.t. \(g(0) = 1\). I think this speaks volumes as to your fundamental domain problem. We can create a real valued superfunction, BUT IT ISN'T A TETRATION. I did more and more extensive testing, and this seems pretty clear.

Essentially we get an almost periodic sinusoidal wave along \(\mathbb{R}\). This is because this is an iteration about the repelling petals, as opposed to an iteration about the attracting petals (which include \(0\)). So we are effectively mapping the top/bottom petals of the four petals of \(\eta_-\) about \(1/e\). We're making a real valued solution, which has decay at the nearby attracting fixed points of \(\log_{\eta_-}\). This is very similar to Kneser's mapping theorem, except we've removed the mapping--but additionally, Kneser has no mapping here, because no real valued super function exists which is TETRATIONAL. So this means we're doing something entirely different.
We are mapping along the top/bottom petals, which, as we take \(f^{\circ -n}\), go towards the repelling fixed points on the top/bottom of \(1/e\). And we are doing some kind of mapping to ensure we paste these two solutions together along the real line. So this should be a huge disclaimer. The superfunction IS NOT TETRATIONAL. It's just a super-function. And it appears to like the repelling petal, as opposed to the attracting petals.

So, just to clarify. THERE IS NO TETRATION \(\eta_- \uparrow \uparrow z\) that is real valued. But there is a super function!!!! So this is very much more like an iteration of \(\sqrt{2}^z\) about \(4\) than it is like an iteration about \(2\). Which, as you noted: to have an \(\eta_-\) iteration, it would be complex valued. Nonetheless, we can still construct the "\(2\)" iteration for \(\eta_-\); it's just a different complex period, which approaches \(\infty\) in a specific way. Rather than \(\Re(\lambda) \to 0\) along \(\mathbb{R}^+\), it's along another arc. God, this is making my head spin.

I'm thinking of writing a shorter version of beta.gp, which only deals with \(\eta_- + y\) for \(y\) small--so that you can see for yourself how this works. The code I have released does work, but it'll be a bit glitchy near eta minor. So I had to patch-work a bunch of the protocols to make them more accurate near eta minor. Let me know if you'd be interested in a pari-gp program that would do that. That way you can see the analytic super function for yourself.

RE: Iterating at eta minor - bo198214 - 07/31/2022

To be honest, this is a bit too much for me to digest, and unfortunately I will not have time to participate in the forum for a while. Anyways, the stuff sounds really intriguing. Particularly the period dial. So you can change the regular iteration at fixed point 2 (base \(\sqrt{2}\)) into the regular solution at fixed point 4, just by turning the period dial from \(2\pi i/\log(\log(2))\) to \(2\pi i/\log(\log(4))\)?!
Very interesting! And then turning it to \(\infty\) you would get the base-continued crescent/Paulsen iteration (i.e. something that is not real on the real line but nearly so (supersmall imaginary part))?

Then I have a question about this assertion:

Quote: Iterating about periodic points is pretty standard for the repelling case, (a bit more complicated with neutral). The thing is, that the local iterations can never contain the periodic points.

What exactly do you mean by iterating about the periodic points?

Just want to make some remarks about the "Kneser iteration" - for me it is not interchangeable with the crescent iteration, but is a very special case. It means finding a real analytic iteration when there is no real fixed point, by using a conjugated fixed point pair. In this special case one can make use of the Riemann mapping theorem. If however we are not on the real line anymore, but just have two arbitrary fixed points with a fundamental region, Kneser's construction does not work anymore (or at least Paulsen and I don't see a way to generalize Kneser's construction); in that case Riemann's mapping theorem is not sufficient anymore, but one needs the *measurable* Riemann mapping theorem.

Quote: the branch cut at \(\eta\) happens exactly along \((\eta,\infty)\)

Dunno what you mean by "happens" - isn't the branch cut something you set instead of something that happens?

I also wonder why you only consider \(\eta_-\) to be the Suez canal; should it not behave like all the other parabolic fixed points which don't explode (in terms of creating more fixed points when perturbed)? Did you try the beta method on non-real parabolic fixed points? I already have the conjecture that the fixed point of base \(\eta\) is the only indifferent fixed point where the regular iteration is not analytic.
Some computations are quite supportive of the regular iteration power series at parabolic fixed points having non-zero convergence radius (while it is quite a difficult proof that the regular iteration power series of \(e^x-1\), and hence of \(\eta^x\), has zero convergence radius). I am not completely sure, but does this mean that all the Fatou coordinates in the petals are just one function? I also guess this has to do with the \(\eta\)-fixed point being the only "exploding" one. And maybe this in turn would lead to a proof that the Shell-Thron boundary is permeable except at \(\eta\).

Do we btw have a strict proof that any iteration of a non-trivial function can never be holomorphic at two fixed points (except for integer iterates)? I think I saw you reasoning about that topic, but I was not sure how strict it was ... this would surely be worth a dedicated thread.

(07/26/2022, 02:42 AM)JmsNxn Wrote: To construct the regular iteration (which is kneser's iteration) about \(b = \eta_- - 0.5\), we can think ...

Um, I don't know how the Kneser iteration (in whatever sense) can be the same as the regular iteration .... please explain!

RE: Iterating at eta minor - JmsNxn - 08/01/2022

(07/31/2022, 08:24 PM)bo198214 Wrote: To be honest this is a bit too much to digest for me, and unfortunately in the next time I will not have time to further participate at the forum. Anyways, the stuff sounds really intriguing.

I apologize, I like to throw up everything I can think of, just so I can see it written out, so that maybe when I look at it again something new will come to me. I am not the most concise writer, lol.

Quote: Particularly about the period dial. So you can change the regular iteration at fixed point 2 (base \(\sqrt{2}\)) into the regular solution at fixed point 4, just by turning the period dial from \(2\pi i/\log(\log(2))\) to \(2\pi i/\log(\log(4))\) ?! Very interesting!

Unfortunately no, to both of those examples. But there is a period dial.
But this is largely because you chose \(\sqrt{2}\). If you take \(b =\sqrt{2}\) for example, then you can create a tetration with arbitrary period \(2 \pi i /\lambda\) so long as \(\Re(\lambda) > - \log \log(2) = 0.3665129205\ldots\). So neither \(\log\log(4)\) nor \(0\) satisfies this. The period \(2 \pi i/\log\log2 \) is PRECISELY the boundary of the beta method. That's the biggest you can make the period. You can only make it smaller from here.

The exact formula: if \(e^{\mu p} = p\) (at the primary fixed point in the Shell-Thron region--\(b=e^{\mu}\) in S-T), then we can make a tetration with period \(2 \pi i/\lambda\) so long as \(\Re(\lambda) > -\log|\mu p|\). This means on the boundary of Shell-Thron we can make \(\lambda \to 0\), which is why it's great. When you take \(b = e\) for instance, you can make \(\Re(\lambda) > 0\), and although the tetration only converges as an asymptotic series (hard to explain how this is derived), when you let \(\lambda \to 0\) THAT is converging to what looks like the crescent iteration.

So the crescent iteration seems to only be discoverable OUTSIDE of the Shell-Thron region. Inside, a natural boundary forms at the regular iteration's period. This was a conjecture by Sheldon, and I spent months proving it; it didn't turn out to be too hard, and largely reduced to the convergence of a series.

So essentially, the "dial" has a max setting. Within Shell-Thron, the "max setting" is the regular iteration; on the boundary of Shell-Thron, it is a period of \(i\infty\), and outside of Shell-Thron, it is again \(i\infty\). Outside of Shell-Thron, when the dial hits the max setting, then it is the crescent iteration (? again it's a conjecture, but it's fairly well supported. Especially when you use more clever numerical procedures, like Tommy's Gaussian).

Hope that makes sense...

Quote:Then I have a question about this assertion:

Sorry, I was just reaffirming what I had said before.
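The numbers quoted above can be reproduced directly. A sketch, assuming only the stated formula (with \(b = e^{\mu}\) and primary fixed point \(p\), the constraint is \(\Re(\lambda) > -\log|\mu p|\)):

```python
import math

b = math.sqrt(2)
mu = math.log(b)

# Attracting fixed point of f(z) = b^z, found by plain iteration from 0;
# for b = sqrt(2) this converges to p = 2 (the lower of the two real fixed points).
p = 0.0
for _ in range(2000):
    p = b ** p

bound = -math.log(abs(mu * p))   # -log|mu * p| = -log(log 2) here
print(p, bound)                  # approx 2.0 and 0.36651292...
```

So the half plane of admissible \(\lambda\) is \(\Re(\lambda) > 0.36651292\ldots\), which indeed excludes both \(\lambda = \log\log(4) \approx 0.3266\) and \(\lambda = 0\).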
We can take \(f(p_0) = p_1\) and \(f(p_1) = p_0\); then we can iterate \(f^{\circ 2}(p_0) = p_0\) using the regular iteration. That's all I meant. The thing is, this is an iterate of \(f^{\circ 2}\) and doesn't produce \(f\) in any form or shape.

And yes, okay. I'll adjust my vernacular. The crescent iteration is the general idea, which I presume can be generalized to other functions etc etc... Kneser is just for the case \(b > \eta\), and it's specific to a real-valued criterion. Gotcha!

Quote:Quote:the branch cut at \(\eta\) happens exactly along \((\eta,\infty)\)Dunno what you mean "happens" - isn't the branch cut something you set instead of something that happens?

Oh sorry, I meant I would choose the branch cut along here. We'd probably have the same problem as with the crescent iteration, where \(\sqrt{2}\) has a non-trivial imaginary part along the real line, and it's positive approaching from the top, and negative approaching from the bottom. I just meant that we could choose it to lie along \((\eta,\infty)\).

But also, it's important to remember that \(x^{1/x}\) has a second order branch point at \(e\): there are "two branches"--the lowest order non-vanishing Taylor coefficient past the constant term is the second. This is sort of the language you'll see in Riemann surface stuff. Similar to how \(x^2\) has "two branches" at zero, \(\sqrt{x}, -\sqrt{x}\).

I think we're going to see a similar problem at \(\eta\), where there are actually two tetration branches here. Sure, we can move the branch cut around and around and everything should be fine, but I'm not so sure. Maybe we can have real values on \((\eta,\infty)\), but then it's not real valued on \((1,\eta)\). Or we can have real values on \((1,\eta)\), but then it's not real valued on \((\eta,\infty)\). Which would imply a very different scenario than "just choosing where the branch cut is". It'd be comparable to adding \(2 \pi i\) to \(\log\).
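The second-order behaviour at \(e\) is easy to see directly: with \(h(x) = x^{1/x}\) one has \(h'(e) = 0\) but \(h''(e) = -\eta/e^3 \neq 0\), so locally \(h(x) \approx \eta - \frac{\eta}{2e^3}(x-e)^2\), which is exactly the \(x^2\)-like two-branch picture. A finite-difference sketch (my check, not from the thread):

```python
import math

def h(x):
    return x ** (1 / x)

e = math.e
eta = h(e)            # e^(1/e), the maximum of x^(1/x)
d = 1e-4

# Central differences for h'(e) and h''(e).
h1 = (h(e + d) - h(e - d)) / (2 * d)
h2 = (h(e + d) - 2 * h(e) + h(e - d)) / d ** 2

print(h1)               # ~ 0: the first derivative vanishes at e
print(h2, -eta / e**3)  # both ~ -0.0719: the second derivative does not
```

Inverting \(h\) near \(\eta\) therefore behaves like inverting \(x^2\) near \(0\): two local branches, one giving bases below \(e\) and one above.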
Quote:I also wonder why you only consider \(\eta_-\) to be the Suez canal, should it not behave like all the other parabolic fixed points which don't explode (in terms of creating more fixed points when perturbed). Did you try the beta method on non-real parabolic fixed points?

Oh, I see it as the "Suez canal" because it is real valued! That is the only reason why I think it's important; it's the only other real-valued parabolic point. I did try the beta method on a few other parabolic points, and it works very similarly. It's still very chaotic, and the weak Julia set can become more troublesome, but it does work, again, a.e. under an area measure in \(\mathbb{C}\).

Hmmm, so if I'm understanding your question. Let's take \(e^{z}-1\): the left half plane is the attracting petal, and the right half plane is the repelling petal. So there are two different Abel functions, to the left and to the right (the eta and cheta iterations, basically).

When the fixed point is parabolic but of order \(n\), so that the Lyapunov multiplier is \(e^{2 \pi i k/n}\) where \((k,n) = 1\) (k and n are coprime), then--I'll double check Milnor--there should be 2n petals, n attracting and n repelling, and if I'm remembering correctly, all the repelling petals' Abel functions are different versions of each other, and similarly for all the attracting ones. I forget how Milnor describes this; I apologize, but parabolic isn't my strong suit. I do believe they are essentially the same function. Because:

\[
\alpha(f^{\circ n}(z)) = \alpha(z) + 1\\
\]

has only two solutions, the repelling version and the attracting version (eta or cheta). But I'll need to double check this. Milnor can be a slow digest, and I can't remember all the details.

EDIT: I forgot the fundamental lemma--the union of all the petals about zero is a neighborhood (it contains a disk about zero). So when you talk about the repelling/attracting petals, you are talking about two different partitions of a disk about zero.
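A quick numerical illustration of the two petals of \(e^z-1\) (a generic sketch of my own, not from the thread): starting on the negative real axis (attracting petal) the orbit creeps into the parabolic fixed point \(0\) roughly like \(-2/n\), while starting on the positive real axis (repelling petal) the orbit escapes almost immediately.

```python
import math

def f(z):
    return math.exp(z) - 1    # parabolic fixed point at 0, multiplier 1

# Attracting petal: forward orbit from the negative real axis converges to 0,
# but only algebraically -- the hallmark of a parabolic point.
z = -1.0
for n in range(1000):
    z = f(z)
print(z)   # small negative number, roughly of size 2/1000

# Repelling petal: forward orbit from the positive real axis escapes.
w = 0.5
steps = 0
while abs(w) < 100:
    w = f(w)
    steps += 1
print(steps)  # escapes past |z| = 100 after only a handful of iterations
```

The Abel function on the left petal is built from the forward orbit, the one on the right petal from the backward orbit (\(z \mapsto \log(1+z)\)); these are the "two solutions" referred to above.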
Wherein, there are n branches about zero for the repelling case, and n branches about zero for the attracting case. No different than the n branches of \(x^{1/n}\). Or as \(\eta\) behaves--except you will see an \(x^{2n}\)-type behaviour, as opposed to the \(x^2\) kind of behaviour you see at \(\eta\).

Quote:Do we btw have a strict proof that any iteration of a non-trivial function can never be holomorphic at two fixed points (except for integer iterates)? I think I saw you reasoning about that topic, but I was not sure how strict it was ... this would be surely worth a dedicated thread.

I follow this from Milnor primarily, and I will try to dig up a solid proof. I know he has a few small remarks explaining why you can't have a coordinate change (as he calls it) about two fixed points. It's phrased with respect to normality, though. Wherein, a function can only have an Abel coordinate change (I believe we call it a Fatou coordinate in this forum, and even in Milnor) or a Schroder coordinate change about a single fixed point.

It's easier to explain with Schroder, but more difficult for Abel. WLOG we can assume that \(p\) is attracting, and \(p'\) is either in the Fatou set or in the Julia set. We can be assured of this by considering \(f^{-1}\) instead of \(f\). A Schroder coordinate change can only be valid on the Fatou set, whereby \(p\) and \(p'\) lie in disconnected domains, separated by the Julia set. Therefore there exists no Schroder coordinate change valid at both (and hence no holomorphic iteration about both).

For an Abel coordinate change it's more technical. But the way Milnor describes how an Abel function about a parabolic point is constructed is complete, in the sense that any Abel function will be of this form. It's possible to have it holomorphic in a half plane, so that the codomain is \(\{\Re(z) > 0\}\) or \(\{\Re(z) < 0\}\), depending on whether we are in an attracting or repelling petal. Any other Abel function is equivalent up to a Riemann mapping (? this might be a bit more nuanced).
So you can draw out that \(\pm\infty\) corresponds to the fixed point, and the boundary of the half plane is precisely the boundary of the petal. Note this only applies to parabolic fixed points; for Abel functions as in Kneser's case, we are doing something more involved. But still we can map it to an Abel function that fits these criteria, where now it's not a petal about the fixed point, but the Fatou set about \(L^{\pm}\) (the Fatou set of \(\log\)).

I do have a proof somewhere, I'll find it, but it is a feature of fractional iterations. It can be tricky to observe, though, especially because Milnor is a much more topological book; it's sort of hidden in the sense that you can partition fixed points surrounded by Julia sets.

Quote:(07/26/2022, 02:42 AM)JmsNxn Wrote: To construct the regular iteration (which is kneser's iteration) about \(b = \eta_- - 0.5\), we can think ...

Oh, I seem to have misremembered here. Sorry, I meant that if you use Kouznetsov's interpretation of the regular iteration, then it is Kneser's iteration at \(b= e\). At least this is how Kouznetsov presents it in his textbook. So I made the false equivalency "regular iteration = Kouznetsov's regular iteration = Kneser's iteration" (which would follow from Paulsen's uniqueness claim).

So I think I just used the wrong meaning of regular iteration, I apologize. I should've said it's just Kouznetsov's regular iteration--not Kneser's iteration. And Kouznetsov's regular iteration will agree with Kneser's on \(b > \eta\), according to Paulsen, which would be where my confusion came from. I apologize, I misspoke here.