(03/21/2022, 07:40 PM)Ember Edison Wrote: I've been playing "Elden Ring" for the past few days, so I haven't been on the forums to check the latest news. Anyway, fuck the war, fuck COVID, and peace to all of you.
And then, yes, congratulations on finally finishing your thesis!
I'm showing the conclusions of my cursory reading here, and I hope I haven't missed key information.
1. The weak Fatou set is the complement of the weak Julia set.
2. The beta method is holomorphic on the weak Fatou set, but not on the weak Julia set.
3. The beta method works well on the Shell-Thron region (or at least on its interior).
3.1. It can induce a unique Kneser's tetration.
3.2. And the weak Julia set has measure zero.
4. Every periodic solution of the tetration equation whose period is not the period of the regular iteration must have singularities in the right half-plane other than the singularities of the regular iteration.
And then I have some questions in case I missed something:
Q1. What happens to the beta method as we approach the Shell-Thron region from the outside?
Q2. What happens to the beta method when imag(base) < 0? In particular, at the minimum real part of the Shell-Thron region, and for -1 ≤ base < 0?
Q3. I did not find a super-logarithm for the beta method in the paper.
Q4. Is the beta method not sufficiently holomorphic to induce the super-root?
The last one is purely a question of entertainment:
Q5. What is the behavior of \( \int \beta(z)\,dz \) and \( \int tet_{\beta}(z)\,dz \)?
Oh, I know a lot of people hate solving indefinite-integral problems; I'm just kidding. It's just that no version of tetration has an indefinite integral that looks like it would be a member of the Mellin/Fox H-function family.
Hey, Ember!
You've essentially got it. The \(\beta\) method is holomorphic on the weak Fatou set, which is similar to the Fatou set but defined relative to the \(\beta\) function: essentially, the weak Julia set is where the orbits \(\beta(s+n)\) escape to infinity, and the weak Fatou set is where they don't. On the weak Julia set you can still recover arbitrary precision (outside of the singularities), but the function is nowhere analytic there: all the derivatives still converge pointwise, yet the power series diverges. I haven't proven we can induce Kneser's, but all the numerical evidence points to yes. It's the same idea I've been describing of taking the limit \(\lim_{\lambda \to 0} \lim_{n\to\infty} \beta_\lambda(s+n)\). For regular iteration on the Shell-Thron region (by regular iteration, I mean Schroder/Ecalle iteration), you can get it by limiting the period to the period of the regular iteration. You've got the main central points.
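In case it helps to play with this numerically: here's a minimal Python sketch (my own illustration, not the code accompanying the thesis) that evaluates \(\beta_\lambda\) by iterating its functional equation \(\beta_\lambda(s+1) = e^{\mu\beta_\lambda(s)}/(1+e^{-\lambda s})\) up from \(\beta_\lambda(s-N) \approx 0\); the sample values of \(\mu\), \(\lambda\) and \(s\) are arbitrary choices.

```python
import cmath

def beta(s, lam=1.0, mu=1.0, N=100):
    """Approximate beta_lambda(s) by iterating the functional equation
    beta(s+1) = exp(mu*beta(s)) / (1 + exp(-lam*s)),
    starting from beta(s-N) ~ 0 (since beta -> 0 as Re(s) -> -infinity)."""
    x = 0.0
    for k in range(N, 0, -1):
        x = cmath.exp(mu * x) / (1 + cmath.exp(-lam * (s - k)))
    return x

# Check the functional equation beta(s+1) = exp(mu*beta(s))/(1+exp(-lam*s))
# at an arbitrary sample point (mu = lam = 1 here):
s = 0.5 + 0.25j
lhs = beta(s + 1)
rhs = cmath.exp(beta(s)) / (1 + cmath.exp(-s))
print(abs(lhs - rhs))  # should be tiny
```

The truncation depth \(N\) only needs to be large enough that the starting value is effectively zero; for \(\Re\lambda > 0\) the denominator kills the early terms extremely fast.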
I'd add though, the thesis focuses a lot on the asymptotic angle, which provides three different asymptotic theorems for \(\beta\); each stronger than the last.
For Q1: The boundary of the Shell-Thron region behaves very similarly to the interior, but it forces us to give up a couple of the properties that hold on the interior. For example, you can still construct tetration for \(\mu = 1/e, b = e^{1/e}\). The beauty of this case is that the multiplier of the fixed point \(\omega = e\) is \(\mu \omega = 1\). And the beta method is holomorphic when \(\Re(\lambda) > -\log|\mu \omega|\), which here means \(\Re\lambda\) must be greater than zero... So this condition implies tetration at base \(b = e^{1/e}\) can be made for all such periods.
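Just to spell out that arithmetic (nothing deep, a one-line check): for \(b = e^{1/e}\) we have \(\mu = \log b = 1/e\), the fixed point of \(b^z\) is \(\omega = e\) since \(b^e = e\), and the multiplier is \(\log(b)\,b^{\omega} = \mu\omega = 1\):

```python
import math

mu = 1 / math.e          # mu = log(b) for b = e^(1/e)
b = math.exp(mu)         # the base b = e^(1/e)
omega = math.e           # candidate fixed point of z -> b^z

print(abs(b ** omega - omega))   # b^e = e, so this is ~0: omega is a fixed point
print(mu * b ** omega)           # the multiplier mu*omega = 1, so -log|mu*omega| = 0
```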
To get to the bare bones: determining where there is a weak Fatou set and where there isn't is really all you have to do to uncover where these tetrations are holomorphic.
So take \(\Re\lambda > 0\), and take \(\beta_\lambda\); then the weak Fatou set is everywhere that \(\lim_{n\to\infty} \beta_\lambda(s+n) = e\). Now you have to analyse the linearized form of \(\tau\), which is:
\[
\begin{align}
\tau^1(s) &= -e^{1-\lambda s}\\
\tau^{n+1}(s) &= \frac{e \tau^{n}(s+1)}{\beta(s+1)} - e^{1-\lambda s}\\
&= -\sum_{j=0}^n \frac{e^{j+1 - \lambda(s+j)}}{\prod_{c=1}^j \beta(s+c)}\\
\end{align}
\]
This series converges for every \(\Re \lambda > 0\), so tetration works everywhere on the weak Fatou set (excluding branch cuts or logarithmic singularities, but again those are measure zero in \(\mathbb{R}^2\)). A similar story holds for all values on the border of the Shell-Thron region; the trouble is it can be a little more chaotic. In those cases I very much doubt that the weak Julia set is measure zero; it may be larger, and probably looks more like the weak Julia sets outside of the Shell-Thron region, which are much more robust. For \(\mu = 1, b = e\), for example, it's all of \(\mathbb{C}\). This is because we have a neutral fixed point on the boundary; for that reason I didn't talk too much about the boundary, and lumped it in with the general case.
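As a numerical sanity check (my own sketch, with arbitrarily chosen sample values \(\lambda = 1\), \(s = 3\), \(n = 8\)), the closed-form sum above does satisfy the recursion \(\tau^{n+1}(s) = e\,\tau^n(s+1)/\beta(s+1) - e^{1-\lambda s}\):

```python
import cmath

def beta(s, lam=1.0, mu=1/cmath.e, N=100):
    """Approximate beta(s) for b = e^(1/e) (so mu = 1/e) by iterating
    beta(s+1) = exp(mu*beta(s)) / (1 + exp(-lam*s)) up from beta(s-N) ~ 0."""
    x = 0.0
    for k in range(N, 0, -1):
        x = cmath.exp(mu * x) / (1 + cmath.exp(-lam * (s - k)))
    return x

def tau_sum(s, n, lam=1.0):
    """Closed-form partial sum:
    tau^{n+1}(s) = -sum_{j=0}^{n} e^{j+1-lam(s+j)} / prod_{c=1}^{j} beta(s+c)."""
    total, prod = 0.0, 1.0
    for j in range(n + 1):
        total -= cmath.exp(j + 1 - lam * (s + j)) / prod
        prod *= beta(s + j + 1, lam)
    return total

# Check tau^{n+1}(s) = e * tau^{n}(s+1) / beta(s+1) - e^{1 - lam*s}:
s, n = 3.0, 8
lhs = tau_sum(s, n)
rhs = cmath.e * tau_sum(s + 1, n - 1) / beta(s + 1) - cmath.exp(1 - s)
print(abs(lhs - rhs))  # should be tiny
```

The identity is exact term by term, so the check passes to rounding error regardless of how accurately \(\beta\) itself is resolved.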
Q2:
So in the lower part of the Shell-Thron region, everything behaves in conjugacy with the upper part. This means that if \(\mu \mapsto \mu^*\), then \(\beta_{\lambda,\mu^*}(s) = \beta_{\lambda^*,\mu}(s^*)^*\), and similarly for the tetration. It's very well behaved. Also, I spent a good amount of time double-checking that everything worked the same for \(e^{-e} < b < 1\), and that it still produced a viable Shell-Thron tetration... it did. For \(0 < b < e^{-e}\), things get very wacky. If you use the function in these cases, it's important to use a large number of polynomials/iterations. And when you are at about \(1E-24\) or something ridiculous, use init_OFF, which uses a linear substitution so that the program can still initialize the polynomial. Everything works the same, but the weak Julia set starts to dominate and the weak Fatou set shrinks to an almost trivial area.
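If you want to see that conjugation symmetry numerically, here's a quick sketch (my own; the sample \(\lambda\), \(\mu\), \(s\) are arbitrary complex values with \(\Re\lambda > 0\)). The identity holds step by step in the iteration, so the two sides agree to rounding error:

```python
import cmath

def beta(s, lam, mu, N=100):
    """Approximate beta_{lam,mu}(s) by iterating
    beta(s+1) = exp(mu*beta(s)) / (1 + exp(-lam*s)) up from beta(s-N) ~ 0."""
    x = 0.0
    for k in range(N, 0, -1):
        x = cmath.exp(mu * x) / (1 + cmath.exp(-lam * (s - k)))
    return x

# Conjugation symmetry: beta_{lam, conj(mu)}(s) = conj( beta_{conj(lam), mu}(conj(s)) )
lam = 1.0 + 0.3j
mu = 0.2 - 0.1j          # sample multiplier with imag < 0
s = 1.5 + 0.5j
left = beta(s, lam, mu.conjugate())
right = beta(s.conjugate(), lam.conjugate(), mu).conjugate()
print(abs(left - right))  # agrees up to rounding
```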
Q3:
Yes, I still have a mental block about the super-logarithm. It is perfectly possible to create one; the trouble is I have nothing new to say about it. I don't have a way of constructing it from scratch, other than by just doing a polynomial inversion. For that reason I didn't bother including anything about it, as it would ultimately just be inverting the Taylor series of the super-exponential...

Q4:
As to the super-root: again, I haven't given it much thought. I don't have anything clever to say about it at the moment, so I left it out...
Q5:
As I don't know what the indefinite integrals are, I thought I'd share a fact about infinite compositions for generating the higher derivatives. For example, I'll do \(\beta'\).
\[
\begin{align}
\beta'(s+1) &= \frac{d}{ds} \frac{e^{\mu \beta(s)}}{1+e^{-\lambda s}}\\
&= \mu\beta'(s)\beta(s+1) + e^{\mu \beta(s)}\frac{\lambda e^{\lambda s}}{(1+e^{\lambda s})^2}\\
&= \mu\beta'(s)\beta(s+1) + \frac{\lambda \beta(s+1)}{1+e^{\lambda s}}\\
\end{align}
\]
Therefore,
\[
\beta'(s) = \Omega_{j=1}^\infty \beta(s+1-j)\left(\mu z+\frac{\lambda}{1+e^{\lambda (s-j)}}\right)\bullet z\\
\]
This is because \(\beta'\) is the unique function satisfying the above functional equation together with the limit \(\lim_{s \to -\infty} \beta'(s) = 0\). Similarly, we can write a difference equation for \(\beta^{(n)}(s)\) in terms of \(\beta, \beta', \ldots, \beta^{(n-1)}\), whose solution must again be a linear infinite composition (only one linear function of \(z\) appears in the infinite composition). This means \(\beta\) satisfies a delay differential equation, and so integrating \(\beta\) falls under integrating a solution of a delay differential equation. You can actually brute-force the integral \(\int\beta\) from here, but I really don't want to do it. If you're that interested though... I can begrudgingly write it up.
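To make that concrete, here's a minimal sketch (my own illustration, with sample values \(\mu = \lambda = 1\)) of the truncated linear composition for \(\beta'\), checked against a central finite difference of \(\beta\):

```python
import cmath

def beta(s, lam=1.0, mu=1.0, N=100):
    """beta via its functional equation, iterated up from beta(s-N) ~ 0."""
    x = 0.0
    for k in range(N, 0, -1):
        x = cmath.exp(mu * x) / (1 + cmath.exp(-lam * (s - k)))
    return x

def beta_prime(s, lam=1.0, mu=1.0, N=100):
    """beta'(s) via the truncated linear composition: iterate
    beta'(t+1) = beta(t+1) * (mu*beta'(t) + lam/(1 + exp(lam*t)))
    up from beta'(s-N) ~ 0, alongside beta itself."""
    x, d = 0.0, 0.0
    for k in range(N, 0, -1):
        x_next = cmath.exp(mu * x) / (1 + cmath.exp(-lam * (s - k)))
        d = x_next * (mu * d + lam / (1 + cmath.exp(lam * (s - k))))
        x = x_next
    return d

# Compare against a central finite difference of beta:
s, h = 0.75, 1e-5
fd = (beta(s + h) - beta(s - h)) / (2 * h)
print(abs(beta_prime(s) - fd))  # should be tiny
```

Each pass through the loop applies one of the linear maps \(z \mapsto \beta(s+1-j)\left(\mu z + \lambda/(1+e^{\lambda(s-j)})\right)\), which is exactly the truncated version of the \(\Omega\) expression above.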
Lol! I hope you're having fun playing Elden Ring. Can't imagine what it's like in Europe right now. I agree, fuck covid and this nonsense war. Ffs! I'm not a big Elden Ring guy, but I've been playing Metroid Dread for 5 months straight, fucking love this game!
Sincere Regards, James
Point 1: I thought I'd describe how you can recover Kneser. If you take \(\lambda = 0.001\) and \(\mu = 1, b = e\), and you graph near where it's zero, you get this:
All of the singularity problems start to cluster closer and closer to the real line. If I were to graph higher up, you'd see a nice sheet of green with zero singularities or cuts. As you shrink the multiplier this becomes more and more drastic. Essentially the real line becomes an eventual border between the upper fixed point and the lower fixed point, and the orbits of their neighborhoods. To get Kneser you have to do a bit of mapping after this, but it is essentially this argument.
To get regular iteration, here's a side by side of \(\lambda =0.01 -\log\log(2)\), and the beta tetration next to the regular tetration for \(b = \sqrt{2}\):
Point 2: Here's a graph of \(\lambda = 1\) and \(\mu = 1/e, b = e^{1/e}\); you can see we still have a large area of holomorphy, looks pretty good to be honest.
And here's a graph testing the functional equation using just the Taylor series:
It's certainly analytic on the weak fatou set.

