As I've begun to reframe the discussion around semi-operators, I've chosen the language of Abel functions, which lets us reduce a \(3\)-variable equation to \(2\) variables. But it becomes a bit of a headache, and I've hit a kind of stop sign telling me: I need a creative leap to get past this hurdle.
So, to begin, let's reintroduce everything. We start by writing the modified Bennet operators:
\[
x[s]y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y\right)\\
\]
There are a few things to take note of here. I am restricting \(\Re(y) > 1\) and \(x > e\), and the variable \(s\) is restricted to \([0,2]\). Each of these iterations is taken about the repelling fixed point, as well. So if \(y = 2\), then \(\exp_{\sqrt{2}}^{\circ s}(u)\) is the Schröder iteration about the fixed point \(4\). This also means that if we were to take \(y = e\), we are not performing the usual \(\eta\) tetration; we are performing the repelling iteration, which is described as \(\exp_\eta^{\circ s}(u)\) for \(u > e\).
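To make the repelling Schröder iteration concrete, here's a rough numerical sketch of how such an iterate can be computed for the case \(y=2\) (illustrative only, with made-up helper names, and not the code behind the plots below): base \(b=\sqrt 2\), repelling fixed point \(L=4\), multiplier \(\lambda = L\ln b = 2\ln 2 > 1\). The Schröder coordinate comes from the limit \(\Psi(u) = \lim_{n\to\infty} \lambda^{n}\left(\log_b^{\circ n}(u) - L\right)\), and then \(\exp_b^{\circ s}(u) = \Psi^{-1}\!\left(\lambda^{s}\Psi(u)\right)\).

```python
import math

def frac_exp(b, L, u, s, depth=50):
    """Fractional iterate exp_b^(s)(u), built from the Schroder limit about
    the repelling fixed point L of v -> b**v (so the multiplier exceeds 1)."""
    lam = L * math.log(b)               # multiplier at L; > 1 because L repels
    # Schroder coordinate: Psi(u) = lim lam^n * (log_b^(n)(u) - L);
    # L attracts the inverse map log_b, so these iterates converge to L.
    v = u
    for _ in range(depth):
        v = math.log(v) / math.log(b)   # one step of log_b
    psi = lam ** depth * (v - L)
    # Inverse coordinate: Psi^{-1}(w) = lim exp_b^(n)(L + w / lam^n).
    v = L + (lam ** s) * psi / lam ** depth
    for _ in range(depth):
        v = b ** v                      # one step of exp_b
    return v
```

Sanity checks: \(s=1\) must reproduce \(b^u\), \(s=0\) the identity, and the half-iterate composed with itself must again give \(b^u\).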
This function is always analytic, and has the benefit of satisfying:
\[
\begin{align}
x[0]y &= x+y\\
x[1]y &= x\cdot y\\
x[2]y &= x^y\\
\end{align}
\]
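As a quick sanity check on the middle identity, note that \(b = y^{1/y}\) satisfies \(b^y = y\), so at \(s = 1\):
\[
x[1]y = b^{\log_b(x) + y} = x\, b^y = x\cdot y
\]
The \(s=0\) case is immediate, and \(s=2\) follows the same way after one more exponential: \(b^{b^{\log_b\log_b(x)+y}} = b^{\log_b(x)\, y} = x^y\).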
Additionally this function nearly satisfies the Goodstein equation:
\[
x[s]\left(x[s+1] y\right) \approx x[s+1] (y+1)\\
\]
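To get a feel for how nearly this holds, here is a rough, self-contained numerical sketch (my own made-up helper names, not my actual plotting code; the fixed-point bracket assumes \(y\) isn't too close to \(1\), and the parabolic case \(y = e\) is excluded):

```python
import math

def repelling_fp(y):
    """Repelling fixed point L of v -> b**v for b = y**(1/y).  For y > e the
    fixed point is y itself; for 1 < y < e it is the conjugate solution of
    ln(t)/t = ln(y)/y with t > e (e.g. y = 2 -> L = 4).  The parabolic case
    y = e is excluded from this sketch."""
    if y > math.e:
        return y
    target = math.log(y) / y
    lo, hi = math.e + 1e-9, 100.0       # bracket; assumes y not too close to 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.log(mid) / mid > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def frac_exp(b, L, u, s):
    """exp_b^(s)(u) via the Schroder limit about the repelling fixed point L."""
    lam = L * math.log(b)               # multiplier, > 1
    depth = max(8, int(18.0 / math.log(lam)))   # keep lam**depth ~ 1e8
    v = u
    for _ in range(depth):
        v = math.log(v) / math.log(b)   # log_b iterates converge to L
    psi = lam ** depth * (v - L)        # Schroder coordinate Psi(u)
    v = L + (lam ** s) * psi / lam ** depth
    for _ in range(depth):
        v = b ** v                      # unwind: Psi^{-1}(lam^s Psi(u))
    return v

def bennet(x, y, s):
    """Modified Bennet operator x[s]y = exp_b^(s)(log_b^(s)(x) + y), b = y**(1/y)."""
    b = y ** (1.0 / y)
    L = repelling_fp(y)
    return frac_exp(b, L, frac_exp(b, L, x, -s) + y, s)

def goodstein_defect(x, y, s):
    """x[s](x[s+1]y) - x[s+1](y+1): vanishes at integer s, small in between."""
    return bennet(x, bennet(x, y, s + 1), s) - bennet(x, y + 1, s + 1)
```

At integer \(s\) the defect vanishes (the identities above are exact there); at, say, \(s = 0.5\) it is small but nonzero, and that leftover is exactly what the correction below is meant to absorb.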
We then introduce a \(\varphi\) parameter, intended to correct the modified Bennet operators so that they satisfy the Goodstein equation.
\[
x[s]_\varphi y = \exp_{y^{1/y}}^{\circ s}\left(\log^{\circ s}_{y^{1/y}}(x) + y + \varphi\right)\\
\]
Previous attempts at solving for the function \(\varphi(s,x,y)\) include using the implicit function theorem, where the surface of values:
\[
F(\varphi_1,\varphi_2,\varphi_3) = x[s]_{\varphi_1}\left(x[s+1]_{\varphi_2} y\right) - x[s+1]_{\varphi_3} (y+1) = 0\\
\]
has a very planar structure, which at least gives rough evidence that a solution exists.
I have switched gears now: instead of finding an implicit curve on this evolving surface, I am solving an Abel equation hidden in here. We start off by defining the inverse function to these operators. These inverses always exist because \(x[s]y\) has monotone growth (easily checked by observing the non-zero derivative on the real line). To begin, we fix \(x\): it doesn't really move in the Goodstein equation, so it can be considered an initial point, and is therefore largely irrelevant, just so long as \(x > e\).
So let:
\[
\alpha(s,x[s]y) = y = x[s]\alpha(s,y)\\
\]
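Numerically, \(\alpha\) can be produced by simple bracketing, precisely because of that monotone growth. A rough self-contained sketch (made-up names again; the operator machinery is included so the snippet runs on its own):

```python
import math

def repelling_fp(y):
    """Repelling fixed point L of v -> b**v, b = y**(1/y): y itself for y > e,
    else the conjugate solution of ln(t)/t = ln(y)/y with t > e (y=2 -> L=4).
    The parabolic case y = e is excluded from this sketch."""
    if y > math.e:
        return y
    target = math.log(y) / y
    lo, hi = math.e + 1e-9, 100.0       # bracket; assumes y not too close to 1
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if math.log(mid) / mid > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def frac_exp(b, L, u, s):
    """exp_b^(s)(u) via the Schroder limit about the repelling fixed point L."""
    lam = L * math.log(b)               # multiplier, > 1
    depth = max(8, int(18.0 / math.log(lam)))   # keep lam**depth ~ 1e8
    v = u
    for _ in range(depth):
        v = math.log(v) / math.log(b)   # log_b iterates converge to L
    psi = lam ** depth * (v - L)        # Schroder coordinate Psi(u)
    v = L + (lam ** s) * psi / lam ** depth
    for _ in range(depth):
        v = b ** v                      # unwind: Psi^{-1}(lam^s Psi(u))
    return v

def bennet(x, y, s):
    """Modified Bennet operator x[s]y = exp_b^(s)(log_b^(s)(x) + y), b = y**(1/y)."""
    b = y ** (1.0 / y)
    L = repelling_fp(y)
    return frac_exp(b, L, frac_exp(b, L, x, -s) + y, s)

def alpha(s, z, x=3.0):
    """alpha(s, x[s]y) = y: invert y -> x[s]y by bisection over a bracket,
    relying on the operator's monotone growth in y (x is held fixed)."""
    lo, hi = 1.5, 40.0                  # assumed bracket for the examples below
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if bennet(x, mid, s) < z:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With \(x = 3\) fixed, the integer levels give exact anchors: \(3[0]2 = 5\), \(3[1]2 = 6\), \(3[2]2 = 9\), so \(\alpha\) should send each of those back to \(2\).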
And to make things a little more difficult, let's also let:
\[
\alpha_{\varphi}(s,x[s]_\varphi y ) = y = x[s]_\varphi\alpha_\varphi(s,y)\\
\]
And now our problem becomes very different. We are now asking for a function \(\varphi\) such that:
\[
\alpha_{\varphi}(s+1,x[s]_\varphi y)= \alpha_\varphi(s+1,y) + 1\\
\]
And here is where things become untenable; this is precisely where everything starts to break down. But bear with me for a moment. If we find a solution to this, then we call \(\varphi(s,y)\) the various values which produce it. By construction, this function will satisfy:
\[
\varphi(s,x[s]_{\varphi(s,y)} y) = \varphi(s,y)\\
\]
Which means it is idempotent. This is largely the goal from here: to construct an idempotent function. Because then:
\[
x[s]_{\varphi(s,y)} y = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,y) + 1\right)\\
\]
And from here the orbit equations are satisfied:
\[
\underbrace{x[s]_{\varphi(s,y)}\, x[s]_{\varphi(s,y)} \cdots x[s]_{\varphi(s,y)}}_{n\ \text{times}}\, y = x [s+1]_{\varphi(s,y)} \left(\alpha_{\varphi(s,y)}(s+1,y) + n\right)
\]
And now we can affirm that these operators satisfy the Goodstein equation.
The trouble?
This formula diverges; or rather, it has no solution for \(s > 0.2\). Everything works fine on the interval \(s \in [0,0.2]\), but at about this point everything begins to diverge. Initially I thought it was a problem with my code, but no, this problem is intrinsic to this manner of solution. Namely:
\[
\alpha_{\varphi}(s+1,x[s]_\varphi y)= \alpha_\varphi(s+1,y) + 1\\
\]
Has no solutions for \(s > 0.2\).
Thus enters the more difficult problem, which is probably where I should have started. But I'm here now.
\[
\alpha_{\varphi_2}(s+1,x[s]_{\varphi_1} y)= \alpha_{\varphi_2}(s+1,y) + 1\\
\]
This describes a line in \(\mathbb{R}\), and there are always solutions for it, though we are required to let \(y\) grow. The reason is that:
\[
\alpha(s+1,x[s]y) - \alpha(s+1,y) - 1 = o(y^{\epsilon})\\
\]
For all \(\epsilon > 0\). This essentially means that the modified Bennet operators are so close to satisfying Goodstein's equation that the Abel equation is satisfied up to about \(O(\log(y))\). Thereby, moving \(\varphi_1\) and \(\varphi_2\) around always ensures there is at least a point at which the above equation is satisfied. In fact, there's a closed form for it; I won't write it because it's ugly as hell, but it's always solvable using the log rules:
\[
\begin{align}
\log_{y^{1/y}}^{\circ s} \left(x [s]_\varphi y\right) &= \log_{y^{1/y}}^{\circ s}(x) + y + \varphi\\
\log_{y^{1/y}}^{\circ s} \left(x [s]_\varphi y\right) &= \log_{y^{1/y}}^{\circ s}(x)y e^{\varphi}\\
\end{align}
\]
So our problem becomes something new: we can solve for the function \(\varphi_1\) as a function of \(\varphi_2\) such that:
\[
x [s]_{\varphi_1} y = x [s+1]_{\varphi_2} \left(\alpha_{\varphi_2}(s+1,y) + 1\right)\\
\]
But in order for this to work, we still need \(\varphi_2\) to be idempotent. This should follow naturally because it satisfies the Abel equation, by which we have the orbits:
\[
\underbrace{x[s]_{\varphi_1}\, x[s]_{\varphi_1} \cdots x[s]_{\varphi_1}}_{n\ \text{times}}\, y = x [s+1]_{\varphi_2} \left(\alpha_{\varphi_2}(s+1,y) + n\right)
\]
The trouble now?
How to make sure this is a well-defined operator. As a function solution it works, but I'm not sure it constructs an operator. Analyticity isn't a problem. But we essentially need to satisfy the following:
\[
\begin{align}
\varphi(s,y) &= \varphi_1(s,y)\,\,\text{for}\,\,s\in[0,1]\\
\varphi(s,y) &= \text{some new formula I can't wrap my head around, for}\,\,s \in [1,2]
\end{align}
\]
I've sort of confuddled myself into a circle here. And I'm mostly just writing this out to see if something obvious sticks out to me. But still, this has become endlessly frustrating...
Nonetheless! We're getting closer by the minute to unlocking:
\[
x \langle s \rangle y = x[s]_{\varphi(s,x,y)} y\\
\]
Such that:
\[
x \langle s \rangle \left(x \langle s+1 \rangle y\right) = x \langle s+1 \rangle (y+1)\\
\]
Here is a graph of \(3[1.5]y\) over a pretty large domain, about \(0 < \Re(y) < 20\) and \(|\Im(y)| < 10\). The artifacts are from the code, because I haven't found a way to let my code pass the Shell-Thron boundary smoothly.
Here is a graph of \(3 [1.9] y\) where you can see it almost has the periodic structure of \(3^y\):
I'll update with a plot of \(\alpha(1.9,y)\), which is almost logarithmic. I'm making a large complex plot, but the code is certainly suboptimal, so that'll probably take all night, lol.
Here is the real plot!
I'll post the complex when it compiles.