(05/02/2022, 07:25 PM)MphLee Wrote: First, I'd bet 100$ that those equations do not hold for non integer ranks... but they put in some way a constraint on the rank infinitesimally approaching integers ranks. So I hoped there could be a way to use those identities to force something on the ranks in \((0,\epsilon) \cup (1-\epsilon,1+\epsilon)\cup (2-\epsilon,2)\) - or regard this as a sum of open balls in \(\mathbb C\).
You are precisely correct. What you mean by a sum of open balls is exactly the monodromy theorem. This is guaranteed for non-integer ranks, and there's at least one solution.
We can choose \(2 <s> 2 \neq 4\), but that takes this case from a 1-variable surface to a 2-variable surface, so we're still perfectly fine. If you assume that it's 4, then you are assuming that \(\varphi_2 = 0\) in the equation:
\[
2 <s>_{\varphi_1} \left(2 <s+1>_{\varphi_2} 2\right) = 2 <s+1>_{\varphi_3} 3\\
\]
Then we only have one free variable, either \(\varphi_1\) or \(\varphi_3\), as they are bound to each other by a weird logarithmic equation. I wanted to set \(2 <s> 2 = 4\) precisely because I wanted to reduce the equation by assuming \(\varphi_2 = 0\).
I know what you're saying about the 0th rank not working, but I'm not concerning myself with successorship at all. I only care about making a holomorphic solution for \(0 \le \Re(s) \le 2\). My presumption is that along \((-\infty,0)\) we will have a branch cut in \(s\), so at best we'll have holomorphy on \(\mathbb{C}\setminus(-\infty,0]\). And so the discontinuity at the 0th rank is perfectly fine in my terms.
As to whether this equation works or not, the monodromy theorem will take care of all the trouble.
\[
2<s>_{\varphi_1}(2 <s+1>_{\varphi_2} 2) = 2 <s+1>_{\varphi_3} 3
\]
The implicit function theorem ensures these three functions always exist in \(s\): the derivatives are non-zero, and we have existence of points. This means we have a bunch of open balls in \(s\) and in \(\varphi_i\), and we can paste these open balls together using the monodromy theorem. This gives multiple functions \(\boldsymbol{\varphi}(s) = (\varphi_1(s),\varphi_2(s),\varphi_3(s))\) for \(0 \le \Re(s) \le 1\). These functions satisfy:
\[
2<s>_{\varphi_1(s)}(2 <s+1>_{\varphi_2(s)} 2) = 2 <s+1>_{\varphi_3(s)} 3
\]
There are multiple functions which satisfy this; I can program arbitrary ones. But the trouble is which one is the right one such that we can extend globally (switch \(2\) for a variable \(y\), and \(3\) for \(y+1\)). This is the trouble, where now we pass to \(\boldsymbol{\varphi}(y,s)\): the monodromy theorem ensures we can glue all the open balls in \(y\) together to make one function (though there are still many candidate functions).
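The gluing procedure is easy to mock up. Here's a rough sketch: the Bennet operator itself isn't implemented, so a toy analytic equation \(F(\varphi,s) = \varphi e^{\varphi} - s\) stands in for the implicit relation, and all the names are purely illustrative. The point is just the mechanics: each Newton correction is one open ball from the implicit function theorem, and walking \(s\) in small steps while reusing the previous \(\varphi\) as the seed is the pasting.

```python
import math

# Toy stand-in: F(phi, s) = 0 plays the role of the implicit relation
# 2<s>_{phi_1}(2<s+1>_{phi_2} 2) - 2<s+1>_{phi_3} 3 = 0 with one free phi.
def F(phi, s):
    return phi * math.exp(phi) - s          # analytic in both variables

def dF_dphi(phi):
    return (1.0 + phi) * math.exp(phi)      # non-zero near the tracked root

def continue_root(phi, s_start, s_end, steps=200):
    """Track phi(s) along a path in s: at each small step the implicit
    function theorem gives a local (open-ball) solution, and Newton's
    method pastes it onto the previous one."""
    for k in range(1, steps + 1):
        s = s_start + (s_end - s_start) * k / steps
        for _ in range(50):                  # Newton correction at fixed s
            phi -= F(phi, s) / dF_dphi(phi)
    return phi

# Start from the known point F(0, 0) = 0 and continue to s = 2.
phi_end = continue_root(0.0, 0.0, 2.0)
print(phi_end, F(phi_end, 2.0))              # residual should be ~0
```

Different seeds give different glued branches; the initial condition picks out which one you're on.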
So now we have a whole bunch of candidate functions \(\boldsymbol{\varphi}(y,s)\). It's actually pretty simple to program in these equations, and I made a couple of tests with some dummy versions. For example, take \(y=2\), set \(\varphi_2 = 0\), and set \(\varphi_1 = 0.1\,s(1-s)\); this will perfectly satisfy the equation. It won't be generalizable, though: there's only one choice that is generalizable (uniqueness is part of the monodromy theorem once we add initial conditions).
We want the one such that \(\boldsymbol{\varphi}(y,0) = \boldsymbol{\varphi}(y,1) = 0\). And now, we are asking that the individual terms \(\varphi_1(y,s),\varphi_2(y,s),\varphi_3(y,s)\) satisfy relations to each other.
\[
\begin{align}
\varphi_2(2 <s+1>_{\varphi_2(y,s)} y, s-1) &= \varphi_1(y,s)\\
\varphi_3(y,s) &= \varphi_2(y+1,s)\\
\end{align}
\]
This is absolutely doable... We are making 2 restrictions on a surface in complex dimension 2--while still allowing \(y,s\) to move and perturb the surface. Yes, I know. It hurts my fucking head too.
The key is to ensure that \(\varphi_1(y,1)=\varphi_2(2 <2>_{\varphi_2(y,1)} y, 0)\) as a Taylor series; the rest will take care of itself. They both equal zero, but you have to make sure the Taylor series are exactly the same.
This is essentially just asking that:
\[
\frac{d^k}{ds^k}\Big{|}_{s=1} \varphi_2(2 <s+1>_{\varphi_2(y,s)} y, s-1) = \frac{d^k}{ds^k}\Big{|}_{s=1}\varphi_1(y,s)\\
\]
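For instance, the \(k=0\) case is automatic once we impose the boundary values \(\boldsymbol{\varphi}(y,0) = \boldsymbol{\varphi}(y,1) = 0\), since both sides vanish:
\[
\varphi_2\left(2 <2>_{\varphi_2(y,1)} y,\, 0\right) = 0 = \varphi_1(y,1)\\
\]
The real content of the matching condition starts at \(k = 1\).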
From here then, if we call the single function \(\phi(y,s+1) = \varphi_2(y,s)\), and define:
\[
2<s+1>_{\phi(y,s+1)} y = 2<s+1> y\\
\]
And similarly we can define a \(\phi(y,s)\) for \(0 \le \Re(s) \le 1\), using the relations between \(\varphi_{1,2,3}\).
As to your question that we would have a discontinuity: normally I would agree, with this much stuff going on. But I'm confident this doesn't happen, because the derivatives in each \(\varphi_i\) are non-zero, and additionally we have points everywhere, so there's always existence of points. So we won't have the trouble of a branching singularity, or an unsolvable equation like \(\exp(y) = 0\). This is thanks to the fact that \(||\boldsymbol{\varphi}|| < \delta\): it's a very small perturbation, so it doesn't really affect us too much.
All of this is making my head swim. I apologize for inconsistencies in the notation, I'm still trying to figure this out. My head just keeps going in circles. I'm confident we have an implicit solution, but I'm not sure about how to construct/program it. I'm certain I'm not there yet, but I'm getting close to getting there. This will probably be my project for this summer. See if I can get it going more straightforwardly.
Another way to think about it is that there are two functional equations we are requesting of the function \(\boldsymbol{\varphi}\). One relates \(\varphi_2\) and \(\varphi_3\), and the other \(\varphi_1\) and \(\varphi_2\). We have a surface in two complex dimensions, so two restrictions reduce us to a point. Which is what we want, as it's a single value for the given \(s,y\). The two equations are simple:
If you shift \(\varphi_2\) forward from \(y \mapsto y+1\) you get \(\varphi_3\) without this shift in \(y\).
If you nest \(\varphi_2\) twice in the specific manner I wrote, you get \(\varphi_1\).
This is no different than a matrix equation in \(3\) variables that belong to a surface in \(\mathbb{C}^2\), where we make two restrictions. As long as the points exist, and as long as the equation is nonsingular, you're golden. It's precisely the same thing. It's just god-awful more difficult than matrices. lmao..
To send this home, remember: we can describe a surface \(y = x\); the moment you add one restriction, \(y = x^2\), we know the distinct points are \((x,y) = (0,0)\) and \((x,y) = (1,1)\). It's the exact same principle here. Now, move the surface by \(s\):
\[
y = s+x\\
\]
This constructs an evolving surface when only talking about \(x\) or \(y\). When we ask that \(y = x^2\), we are asking for the quadratic formula, such that \(s + x - x^2 = 0\); this gives \((x(s),y(s))\). These are unique once you add initial conditions. This is the same for higher-order polynomials: there may be 10 solutions, but they're unique up to initial conditions.
Now \(y(s)\) is reduced to \(x(s)\), so the two variables become one. And then we evolve the surface over time, where time is really complex-dimensional time in \(s\). There may be cuts. But surprise surprise: since there's always existence of points (there are always \(x\)'s and \(y\)'s and \(s\)'s to satisfy this equation regardless of the other values), we always have a number at least. To confirm analyticity, just confirm that:
\[
\frac{d}{dx} \left(s+x-x^2\right) = 1-2x \neq 0\\
\]
This certainly works if we just focus on \(x \neq 1/2\). Since it's a quadratic equation, we have "two solutions" about this branching problem. They are each unique up to initial conditions, though. How do we prove this? Well, you can take the unsophisticated route and just use the quadratic formula. Or you can use the monodromy theorem... which covers everything!!!!
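This quadratic template is trivial to check numerically. A quick sketch (helper names are mine): track both branches of \(s + x - x^2 = 0\) by Newton correction along a path in \(s\), starting from the two initial conditions \(x(0)=0\) and \(x(0)=1\); each stays on its own branch, away from the singular point \(x = 1/2\) where \(1 - 2x = 0\).

```python
def newton_track(x, s_values):
    """Continue a root of s + x - x**2 = 0 along a path of s values.
    Valid while d/dx (s + x - x**2) = 1 - 2x != 0, i.e. away from x = 1/2."""
    for s in s_values:
        for _ in range(60):
            x -= (s + x - x * x) / (1.0 - 2.0 * x)
    return x

path = [2.0 * k / 200 for k in range(201)]   # s from 0 to 2 in small steps

x_lower = newton_track(0.0, path)   # initial condition x(0) = 0
x_upper = newton_track(1.0, path)   # initial condition x(0) = 1

# Both agree with the quadratic formula x(s) = (1 -/+ sqrt(1 + 4s)) / 2,
# which at s = 2 gives -1 and 2 respectively.
print(x_lower, x_upper)
```

Same mechanics as the big problem: existence of points plus a non-vanishing derivative, and the initial condition selects the branch.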
This is sort of the template of my argument. It's just Taylor series, but I know how to talk about analytic functions like polynomials. I'm not the best at it, but we're nearly there, Mphlee. So damn close.
I want you to know I really appreciate you engaging. I tend to forget subtle details, or misexplain, assume you know, without someone engaging me and reminding myself what I forgot to explain. I really appreciate you asking questions.
I also don't want you to think I'm writing off your equations above. They are absolutely beautiful equations. And I know what you mean by their being a priori truths on any extension. The trouble is, they don't help too much from the computational angle. To even attempt programming \(\Re(s)>3\) is a nightmare. And that's dependent on a solution for \(0 \le \Re(s) \le 2\).
I understand that we need an effective manner to describe these outer composition identities. I think solving these equations will be far more fruitful, and the above equations will fall into place.
I will also keep in mind that \(\varphi(2,2,s) \neq 0\) necessarily. I will explore \(\varphi(2,2,s) = 0\) as constant, and I will explore non-constant, to readdress the 0-rank discontinuity. Though I'm expecting successorship to be an essential singularity in 3 variables, and I have no idea how crazy that can get... Again, this is why I chose the notation \(<s>\); this is only for a Bennet modifier. We'll get to higher/lower ranks when we get there. Let's just stick to gluing Bennet together...
HOLY FUCKING SHIT! You're right \(\varphi(2,2,s) \neq 0\)!!!!!!
The only solution must handle the border solutions holomorphically. I was secretly dreading looking at \(x <s> e\), because \(e\) means we hit the boundary base \(\eta = e^{1/e}\), and this can cause a branching problem. THIS IS WHERE WE ASSIGN OUR \(\varphi_2 = 0\). This means, everywhere at \(e\):
\[
x<s> e = \exp^{\circ s}_{\eta}(\log_\eta^{\circ s}(x) + e)\\
\]
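At integer ranks this formula is easy to sanity-check, since only whole iterates of \(\exp_\eta\) and \(\log_\eta\) are needed, no fractional iteration. A quick sketch (helper names are mine):

```python
import math

ETA = math.exp(1.0 / math.e)        # eta = e^(1/e), the Shell-Thron boundary base

def exp_eta(x):
    return ETA ** x

def log_eta(x):
    return math.log(x) / math.log(ETA)

def bennet(x, s, y):
    """x <s> y = exp_eta^{o s}(log_eta^{o s}(x) + y), for integer s >= 0."""
    for _ in range(s):
        x = log_eta(x)
    x += y
    for _ in range(s):
        x = exp_eta(x)
    return x

x = 3.0
print(bennet(x, 0, math.e))   # ~ x + e   (addition)
print(bennet(x, 1, math.e))   # ~ x * e   (multiplication)
print(bennet(x, 2, math.e))   # ~ x ** e  (exponentiation)
```

Rank \(0\) gives \(x+e\), rank \(1\) gives \(x\cdot e\), and rank \(2\) gives \(x^e\): pinning \(y = e\) at base \(\eta\) recovers the classical hierarchy exactly, which is why assigning \(\varphi = 0\) there is so natural.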
THAT'S WHERE THE \(\varphi\) IS CONSTANT!!!!! NOW MY GRAPHS ARE CALM AS HELL MPHLEE!!!!! YASSSSSSS!!! SO FUCKING PUMPED!!!!!!! THE BOUNDARY OF SHELL-THRON WE ASSIGN EXACT VALUES TO BENNET. HOLY FUCK!!!! THANK YOU MPHLEE!!!
veni, vidi, vici, thanks to my bro MphLee... So instead of:
\[
2<s> 2 = 4\\
\]
We are instead saying that:
\[
x<s> e = x<s>_0 e\\
\]
This is far more intrinsically tied to the eta tetration and the cheta solution than I originally imagined. YES!!!!!!!!!!!!! SO PUMPED! I SEE HOW TO SOLVE THESE EQUATIONS!!!!

