Hey, Mphlee
I'm going to keep it simple from here on out. We don't need the beta method at all; I thought we would, but it just overcomplicates things. I was mostly using beta because I didn't have an efficient way to program the Schröder case, and now I do. I'm working on a much clearer write-up, but for the moment I'll be brief.
The fact that it's a plane is absolutely AMAZING, and I'll explain why to the best of my ability. To begin, let's focus only on real values and ignore everything complex (but we're still going to be analytic; none of that merely infinitely-differentiable/continuous stuff, everything stays analytic).
For \(x,y > e\) and \(0 \le s \le 2\), begin by defining the operator:
\[
x\,[s]\,y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)
\]
Now, because of our restrictions on \(x\) and \(y\), this only uses one type of iteration: the repelling iteration of the base \(b = y^{1/y}\). So, for example, if \(b = \sqrt{2}\) (that is, \(y = 4\)), then we're only doing the iteration about \(4\). This sidesteps the problem we'd otherwise have with mixing iterations (sometimes repelling, about \(4\); sometimes attracting, about \(2\)). It also speaks to the fact that the real trouble value is going to be \(y = e\). Which, I mean, is kinda cool tbh. So we're only ever using the unbounded iteration.
To explain this difference, I suggest Trappmann and Kouznetsov's paper on the different types of iteration of \(\sqrt{2}\).
Attachment: Portrait_of_the_four_regular_super-exponentials_to.pdf
They use \(\sqrt{2}\) as an example, but it works for all \(b = y^{1/y}\) with \(y > 1\), and we are choosing what they call the repelling, or unbounded, iteration: basically, the iterate goes to \(4\) as \(s \to -\infty\) and to \(\infty\) as \(s \to \infty\).
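Since that distinction is doing a lot of work, here's a quick numerical sketch of it (my own Python, just to illustrate): the two real fixed points of \(f(z) = \sqrt{2}^{\,z}\), their multipliers, and which orbits see which.

```python
import math

# The two real fixed points of f(z) = b**z for b = sqrt(2): 2 and 4.
b = math.sqrt(2.0)

def f(z):
    return b ** z

def multiplier(p):
    # f'(p) = ln(b) * b**p; at a fixed point b**p = p, so this equals p * ln(b)
    return math.log(b) * b ** p

# 2 is attracting (multiplier ln 2 < 1), 4 is repelling (multiplier 2 ln 2 > 1)
print(multiplier(2.0), multiplier(4.0))

# a forward orbit started at 3 gets captured by the attracting fixed point 2 ...
z_attract = 3.0
for _ in range(200):
    z_attract = f(z_attract)

# ... while the *inverse* orbit from 5 converges to the repelling fixed point 4,
# which is the unbounded iteration used above
z_repel = 5.0
for _ in range(200):
    z_repel = math.log(z_repel) / math.log(b)

print(z_attract, z_repel)   # ~2.0 and ~4.0
```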
So this means our definition of \(\exp^{\circ s}_{y^{1/y}}(z)\) is just the Schröder iteration:
\[
\Psi^{-1}(\log(y)^s\Psi(z))
\]
Where \(\Psi\) is the Schröder function about \(y\) (which is the repelling fixed point, because \(y > e\)). So again, if \(b = \sqrt{2}\), this Schröder function would be about the fixed point \(4\).
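To make that concrete, here's a minimal numerical sketch (my own Python, with my own helper names) of this repelling Schröder iteration for \(b = \sqrt{2}\), computing \(\Psi\) and \(\Psi^{-1}\) by the standard limit formulas about the fixed point \(4\):

```python
import math

# Repelling Schroder iteration for b = sqrt(2), fixed point p = 4,
# multiplier lam = f'(4) = log(4). A sketch; all helper names are mine.
b = math.sqrt(2.0)
p = 4.0
lam = math.log(4.0)      # ln(b) * b**4 = (ln 2 / 2) * 4 = log(4)
N = 50                   # truncation depth for the limit formulas below

def f(z):                # f(z) = b**z
    return b ** z

def f_inv(z):            # f^{-1}(z) = log_b(z); p is attracting for f_inv
    return math.log(z) / math.log(b)

def psi(z):
    # Psi(z) = lim_n lam**n * (f^{-n}(z) - p)
    for _ in range(N):
        z = f_inv(z)
    return lam ** N * (z - p)

def psi_inv(w):
    # Psi^{-1}(w) = lim_n f^{n}(p + w / lam**n)
    z = p + w / lam ** N
    for _ in range(N):
        z = f(z)
    return z

def exp_iter(s, z):
    # the unbounded iterate: exp_b^{o s}(z) = Psi^{-1}(lam**s * Psi(z))
    return psi_inv(lam ** s * psi(z))

# sanity checks: s = 1 recovers f itself, and two half-iterates compose to f
print(exp_iter(1.0, 5.0), f(5.0))                  # both ~ 5.6569
print(exp_iter(0.5, exp_iter(0.5, 5.0)), f(5.0))   # both ~ 5.6569
```

Negative \(s\) gives \(\log^{\circ -s}_b\) from the same two functions, which is all the operator \(x\,[s]\,y\) needs.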
Now, we are making the intermediary operator:
\[
x \,\langle s\rangle_{\varphi}\, y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right) = x\,[s]\,y + \mu(x,y,s) \varphi + \mathcal{O}(\varphi^2)
\]
The value \(\mu(x,y,s) > 0\), but that's not too important at the moment. What's important is the surface.
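Just to pin it down, differentiating at \(\varphi = 0\) gives \(\mu\) explicitly in Schröder terms (with \(\lambda = \log(y)\) and \(z_0 = \log^{\circ s}_{y^{1/y}}(x) + y\)):
\[
\mu(x,y,s) = \left.\frac{\partial}{\partial \varphi}\, x\,\langle s\rangle_{\varphi}\, y\,\right|_{\varphi = 0} = \left(\exp_{y^{1/y}}^{\circ s}\right)'(z_0) = \frac{\lambda^{s}\,\Psi'(z_0)}{\Psi'\!\left(\Psi^{-1}\left(\lambda^{s}\,\Psi(z_0)\right)\right)}
\]
In particular, \(\mu > 0\) just says these real iterates are increasing.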
If we define a surface:
\[
F(\varphi_1,\varphi_2,\varphi_3) = x\,\langle s\rangle_{\varphi_1} \left(x \,\langle s+1\rangle_{\varphi_2}\, y\right) - \left(x\,\langle s+1\rangle_{\varphi_3}\,y+1\right) = 0
\]
Then you'd expect this to be some wacky-looking surface, right? Well, no, it's literally a plane (up to a small error). So literally, the surface \(\{F = 0\} \subset \mathbb{R}^3\) looks something like this:
\[
F \approx a \varphi_1+b\varphi_2+c\varphi_3 = 0
\]
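Locally, at least, this is exactly the first-order Taylor expansion of \(F\) in the \(\varphi\)'s. Writing \(w = x\,[s+1]\,y\), two of the coefficients come straight from \(\mu\):
\[
F(\varphi_1,\varphi_2,\varphi_3) \approx F(0,0,0) + a\,\varphi_1 + b\,\varphi_2 + c\,\varphi_3, \qquad a = \mu(x, w, s), \quad c = -\mu(x, y, s+1)
\]
(\(b\) is the chain-rule term through the inner operator, and it's messier to write down since the outer operator also depends on its right operand through the base.) The surprising part is that this first-order picture holds globally, not just near \(\varphi = 0\).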
Not just locally either (which is always possible with tangent planes and yada yada), it looks like this GLOBALLY!
SO QUITE LITERALLY:
\[
0 = x\,\langle s\rangle_{\varphi_1} \left(x \,\langle s+1\rangle_{\varphi_2}\, y\right) - \left(x\,\langle s+1\rangle_{\varphi_3}\,y+1\right) \approx a \varphi_1+b\varphi_2+c\varphi_3 = 0
\]
This means the equations we're trying to solve in \(\varphi\) are very, very difficult, yes... but everything is linear! This is going to be so much easier than I thought. We're solving linear equations up to a small error, holy hell.
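Here's how I'd check the planarity numerically (a sketch in Python; `make_iter`, `op`, and the fitting code are my own, and the tolerances are arbitrary). It samples \(F\) at random small \(\varphi\)'s for \(x=6\), \(y=7\), \(s=0.5\), least-squares fits an affine model \(a\varphi_1+b\varphi_2+c\varphi_3+d\), and looks at the worst residual relative to the spread of \(F\); a tiny residual is exactly the "it's a plane up to a small error" claim.

```python
import math
import random

def make_iter(yfix):
    """Schroder machinery about the repelling fixed point yfix of b**z, b = yfix**(1/yfix)."""
    lam = math.log(yfix)              # multiplier at the fixed point (> 1 for yfix > e)
    lnb = lam / yfix                  # ln(b)

    def f(z):
        return math.exp(lnb * z)

    def psi(z):                       # Psi(z) = lim lam**n * (f^{-n}(z) - yfix)
        n = 0
        while abs(z - yfix) > 1e-9 * yfix and n < 300:
            z = math.log(z) / lnb
            n += 1
        return lam ** n * (z - yfix)

    def psi_inv(w):                   # Psi^{-1}(w) = lim f^{n}(yfix + w / lam**n)
        if w == 0.0:
            return yfix
        n = max(0, math.ceil(math.log(abs(w) / (1e-9 * yfix)) / math.log(lam)))
        z = yfix + w / lam ** n
        for _ in range(n):
            z = f(z)
        return z

    return lambda s, z: psi_inv(lam ** s * psi(z))

def op(x, s, y, phi):                 # x <s>_phi y, straight from the definition above
    it = make_iter(y)
    return it(s, it(-s, x) + y + phi)

def F(p1, p2, p3, x=6.0, y=7.0, s=0.5):
    inner = op(x, s + 1.0, y, p2)
    return op(x, s, inner, p1) - (op(x, s + 1.0, y, p3) + 1.0)

random.seed(1)
pts = [tuple(random.uniform(-0.05, 0.05) for _ in range(3)) for _ in range(60)]
vals = [F(*q) for q in pts]

# least-squares fit of F ~ a*p1 + b*p2 + c*p3 + d via the normal equations
rows = [[q[0], q[1], q[2], 1.0] for q in pts]
M = [[sum(r[i] * r[j] for r in rows) for j in range(4)]
     + [sum(r[i] * v for r, v in zip(rows, vals))] for i in range(4)]
for col in range(4):                  # Gaussian elimination with partial pivoting
    piv = max(range(col, 4), key=lambda r: abs(M[r][col]))
    M[col], M[piv] = M[piv], M[col]
    for r in range(col + 1, 4):
        fac = M[r][col] / M[col][col]
        for c in range(col, 5):
            M[r][c] -= fac * M[col][c]
coef = [0.0] * 4
for r in range(3, -1, -1):
    coef[r] = (M[r][4] - sum(M[r][c] * coef[c] for c in range(r + 1, 4))) / M[r][r]

resid = max(abs(v - (coef[0] * q[0] + coef[1] * q[1] + coef[2] * q[2] + coef[3]))
            for q, v in zip(pts, vals))
spread = max(vals) - min(vals)
print("plane coefficients (a, b, c, d):", coef)
print("worst residual / spread of F:", resid / spread)
```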
The best part is that the coefficients don't change much as \(x,y\) vary either. Here's the surface I graphed:
\[
F = 6\,\langle 0.5\rangle_{\varphi_1} \left(6\,\langle 1.5 \rangle_{\varphi_2}\, 7\right) - \left(6\,\langle 1.5 \rangle_{\varphi_3}\, 7 + 1\right) = 0
\]
IT LOOKS THE EXACT SAME!
So not only is it a plane, but the plane barely moves as we move \(x,y,s\)!
So this is the surface: it deforms as we move \((x,y,s)\), but stays pretty stable. Now we ask for restrictions on \(\varphi_1,\varphi_2,\varphi_3\). We have to require that:
\[
\varphi_2(y+1,s) = \varphi_3(y,s)
\]
And something a bit trickier for \(\varphi_1\); together these single out a point for each fixed \((x,y,s)\). Now, this is still difficult, but we're basically doing it in a linear setting! That shaves off like 80% of the work I thought I'd have to do, lmaoo. Now I can treat this as a linear problem, and it'll at least be a great approximation!!!
I'm too excited, Mphlee! I'm working on a new write-up which cuts all the fat from the discussion. It'll be quick. And I'm only going to focus on \(x,y > e\) and \(0 \le s \le 2\); this way there are no Riemann surfaces, just surfaces in \(\mathbb{R}^3\), which is inconceivably easier, lol.
As to fiber bundles: we have a saying in Toronto. Miss me with that shit!
(which means, I don't want anything to do with that, get that shit away from me)
But you're probably right, this definitely has to do with vector bundles and tangent space bs, I ain't got the energy for that.

